filename | title | authors | abstract | body | references |
|---|---|---|---|---|---|
8e61a8f096dd4b4d81ec7911f9498144_Vesical tumor within an inguinal bladder hernia A case report_10.1016_j.eucr.2021.101680.xml | Vesical tumor within an inguinal bladder hernia: A case report | [
"Binjawhar, Abdulrahman",
"Alsufyani, Abdullah",
"Aljaafar, Mohammed",
"Alsunbul, Abdulrahman",
"AlMuaiqel, Muaiqel",
"Alghamdi, Abdullah M."
] | Bladder herniation through the inguinal canal is a rare condition, accounting for only 1–4% of all inguinal hernias. Most patients are asymptomatic or have atypical symptoms.
We report a 65-year-old male who presented with gross hematuria and right inguinal swelling, diagnosed preoperatively by CT scan with a bladder neoplasm within a right inguinal vesical hernia. The patient underwent TURBT and hernia repair.
Inguinal bladder hernia is rare and occurs more commonly in obese male patients above 50 years of age. CT scan is the radiological modality of choice to confirm the diagnosis prior to intervention to avoid intraoperative complications such as bladder injury. | Introduction Bladder herniation through the inguinal canal is a rare condition, accounting for only 1–4% of all inguinal hernias. The majority of cases are observed in obese male patients older than 50 years. 1 Most patients are asymptomatic, which leads to difficulty in diagnosing a bladder herniation, with less than 7% of cases being diagnosed preoperatively. Some patients may present with atypical symptoms, such as voiding when manual compression is applied over the inguinal area. 2 A literature search revealed hundreds of inguinal hernia cases containing the bladder, of which only 21 contained a bladder neoplasm. We report the case of a patient who presented with a vesical tumor within a right inguinal hernia. 1 Case presentation A 65-year-old obese male, known to have benign prostate enlargement, presented to our outpatient clinic with a chief complaint of gross hematuria for 1 month. The hematuria was associated with urinary frequency, urgency, nocturia, a feeling of incomplete emptying, and a right inguinal swelling that spontaneously regressed after voiding or manually regressed on compression while voiding. He had no relevant past surgical history and was a non-smoker. Physical examination revealed a BMI of 35, and abdominal examination revealed a right inguinal bulge, which was approximately 6 cm, compressible, and non-tender. Digital rectal examination revealed an enlarged prostate, which felt benign and did not contain any suspicious nodules. All laboratory findings were within normal limits. Urine culture and sensitivity and urine cytology were negative. 
Preoperative contrast-enhanced computed tomography (CT) of the abdomen and pelvis revealed a 4 × 6 cm polypoid enhancing lesion within a herniated urinary bladder, and no evidence of abdominopelvic metastases ( Figs. 1 and 2 ). After preoperative clearance for anesthesia, the patient underwent transurethral resection of bladder tumor (TURBT), followed by hernia repair by the general surgery team. Intraoperative cystoscopy showed a normal urethra, prostate, and bladder. The right lateral wall of the bladder was pulled inside the right inguinal canal. Manual compression of the right inguinal swelling revealed multiple papillary tumors. Resection of the tumors in a herniated bladder via endoscopic surgery is challenging. Therefore, we kept applying manual compression over the right inguinal area to fully access and completely resect the tumors. Specimens were sent for histopathological examination. A 3-way Foley catheter was inserted, and a mesh repair for the inguinal hernia was performed by the general surgery team. The operative procedure was smooth, and there were no intra- or postoperative complications. On postoperative day 2, the patient was doing well and had a clear urine output; therefore, his Foley catheter was removed, and he was discharged. The patient was seen in the outpatient clinic again after 2 weeks. Post-operative histopathology showed a high-grade urothelial carcinoma invading the subepithelial connective tissue. Hence, the patient was started on intravesical Bacillus Calmette-Guerin (BCG), and was planned for cystoscopic surveillance. Discussion Bladder herniation through the inguinal canal is a rare condition, accounting for only 1–4% of all inguinal hernias. Studies have reported that 11.2% of bladder inguinal hernias were associated with urological malignancies, of which 23.5% were associated with various complications. 
1 , 3 Bladder outlet obstruction, benign prostatic hypertrophy, bladder underactivity, weak abdominal fascia, obesity, male gender, and advanced age have been identified as risk factors for vesical herniation through the inguinal canal. 1 Of all cases reported, most bladder inguinal hernias were identified in obese male patients above 50 years of age, with some reports stating that males are 10 times more affected than females. Vesical herniation has also been more commonly reported on the right side, as seen in this case. 4 The majority of patients are asymptomatic, although atypical or nonspecific symptoms, such as urinary frequency, urgency, nocturia, and hematuria, may occasionally be reported, as they were by our patient. 1 , 2 , 4 , 5 Considering the low incidence and nonspecific presentation of inguinal bladder hernias, preoperative diagnosis is often extremely challenging. It has been reported that only 7% of all cases are diagnosed preoperatively, whereas the intraoperative and postoperative diagnostic rates are 77% and 16%, respectively. A preoperative diagnosis can be made through history, physical examination, and imaging (by ultrasonography, cystography, and/or CT), which can help prevent the occurrence of intraoperative complications, such as bladder injury and leakage. 1 , 3 , 5 A literature search revealed hundreds of inguinal hernia cases containing the bladder, of which only 21 cases contained a bladder neoplasm. 4 We reported a case of a 65-year-old male who presented with gross hematuria, right inguinal swelling, and a feeling of incomplete emptying. Through a combination of medical history, physical examination, and radiological and intraoperative findings, we perioperatively diagnosed the patient with a bladder neoplasm within a right inguinal vesical hernia. Most of the cases described in the literature were also diagnosed perioperatively. CT scans have been used to accurately diagnose and stage the tumor, 4 as was done in our case. 
We identified multiple papillary tumors on cystoscopy and performed TURBT. Resection of the tumor in a herniated bladder via endoscopic surgery is challenging. Therefore, we applied manual compression over the right inguinal swelling to fully access and completely resect the tumor. This was followed by a repair of the inguinal hernia by the general surgery team. 5 Conclusion Inguinal bladder hernia is rare and occurs more commonly in obese male patients above 50 years of age. An inguinal hernia that reduces after voiding should raise the suspicion of an inguinal bladder hernia, and associated hematuria should necessitate evaluation for the presence of a genitourinary tumor. CT scan is the radiological modality of choice to confirm the diagnosis prior to intervention to avoid intraoperative complications such as bladder injury. Funding sources None. Declaration of competing interest The authors have no conflicts of interest to declare. Acknowledgements None. | [
"ELKBULI",
"TASKOVSKA",
"NAMBA",
"OZMAN",
"PAPATHEOFANI"
] |
fce118e749fd42d1bcc45c2851cfc0be_Dynamics in Transcriptomics Advancements in RNA-seq Time Course and Downstream Analysis_10.1016_j.csbj.2015.08.004.xml | Dynamics in Transcriptomics: Advancements in RNA-seq Time Course and Downstream Analysis | [
"Spies, Daniel",
"Ciaudo, Constance"
] | Analysis of gene expression has contributed to a plethora of biological and medical research studies. Microarrays have been intensively used for the profiling of gene expression during diverse developmental processes, treatments and diseases. New massively parallel sequencing methods, often named RNA-sequencing (RNA-seq), are extensively improving our understanding of gene regulation and signaling networks. Computational methods developed originally for microarray analysis can now be optimized and applied to genome-wide studies in order to have access to a better comprehension of the whole transcriptome. This review addresses current challenges in RNA-seq analysis and specifically focuses on new bioinformatics tools developed for time series experiments. Furthermore, possible improvements in analysis, data integration as well as future applications of differential expression analysis are discussed. | 1 Introduction Profiling of gene expression via high-throughput methods was first achieved in 1992 with the development of Differential Display protocols [1] followed in 1995 by the implementation of complementary DNA microarrays [2] . Subsequently, several other large scale techniques were developed like Serial Analysis of Gene Expression (SAGE) [3] , Massive Parallel Signature Sequencing (MPSS) [4] , Cap Analysis Gene Expression (CAGE) [5] and tiling arrays [6] . Finally, the breakthrough of RNA-seq [7] technology now offers scientists greater power, lower costs and new tools to better understand a wide spectrum of scientific and complex medical problems [8] . RNA-seq allows the assessment of the whole transcriptome (known and novel transcripts), including: allele specific expression, gene fusions, non coding transcripts such as long non coding RNAs (lncRNA), enhancer RNAs (eRNA) and the possibility to detect alternatively spliced variants (reviewed in [9,10] ). 
Compared to the microarray approach, RNA-seq data is highly reproducible and allows the identification of alternative splice variants as well as novel transcripts [11] . Expression or tiling microarrays and capture arrays are still used intensively in biology and medicine for specialized tasks and diagnosis [12] due to their standardized protocols and gold standard bioinformatics analysis. Several RNA-seq protocols for differential expression or detection of novel transcripts have been developed and can be classified into two main methods: enrichment of messenger RNA (mRNA) or depletion of ribosomal RNA (rRNA). For eukaryotic genomes, the most common and so far standardized protocol is the selection of poly(A +) transcripts (mRNA) via oligo-dT beads, enriching non-rRNA fractions. The second category consists of the depletion of ribosomal RNA [13] . Several of these protocols have been compared and reviewed with regard to different applications [14,15] . When studying dynamic biological processes [16] such as development or drug responses, datasets have to be captured continually in a Time Course (TC) experiment. Therefore, these data are sampled at several Time Points (TP) in order to recapitulate the whole regulatory network involved, identifying possible regulators and gene switches responsible, e.g., for cyclic behavior or correct differentiation of cells. TC experiments can be classified into three groups [17] : i) Single-time series investigating only one condition. Here, all time points are compared to the first one, which is considered the control. This approach requires fewer samples, but will not properly control for, e.g., varying temperature in the incubator, as the control was not sampled over time. ii) Multi-time series assessing several conditions simultaneously. The TC data sets are compared to a control TC. This approach allows better control of the experiment, because controls are sampled over time in parallel with the samples. 
Alternatively, the comparison can be performed directly between the different condition TCs. The drawback of this approach is higher costs, as more samples have to be sequenced and analyzed. iii) Periodicity and cyclic TC consisting of single or multiple time series. A cyclic event of interest (e.g. cell cycle of proliferating cells) is investigated for recurring expression patterns and their differences between conditions. As at least two full cycles should be sampled for each condition, a large number of samples is required to perform such experiments. Furthermore, differentiating between phases within the cyclic event might be challenging and may lead to “mixed datasets” due to non-uniform cell identities of mixed cell populations. Therefore, synchronization of cells prior to the experiment is of importance to avoid “mixed datasets”. As the complexity of the obtained data is increased by at least one dimension per TP of each sample, specific algorithms and methods are required to analyze TC experiments. Some have already been successfully implemented for microarray data. However, only a few have been adapted for RNA-seq data (reviewed in [18] ). In the following sections of this review, we will discuss current challenges and available methods as well as promising improvements and extensions of RNA-seq Time Course experiments. 2 Methods Time course experiments follow the same workflow as static RNA-seq experiments, starting with preprocessing and normalization of the data, followed by differential gene expression (DEG) analysis and downstream analysis by clustering and network construction ( Fig. 1 ). In this review, we are only considering the analysis of RNA-seq TC data, therefore assuming that the data was already pre-processed (quality controlled, mapped and, if necessary, read count files created). We only consider whole population RNA-seq data, not including single cell RNA-seq approaches. 
For a complete overview and comparison of sequencing platforms as well as available tools for mapping reads, the reader is referred to [19,20] . 2.1 Biases/Challenges 2.1.1 Experimental Design Well-known biases, such as GC content, gene/fragment length or batch effects [19] , are currently assessed during the quality control step using QC tools like FastQC (available online under http://www.bioinformatics.babraham.ac.uk/projects/fastqc ). Time course experiments introduce additional experimental and computational challenges that have to be addressed and will be further discussed. As in other sequencing experiments, the experimental design is of utmost importance. Setting the sampling rate by defining the number of replicates per time point (TP) and the number of TPs is still dictated by relatively high sequencing costs. In the case of microarray experiments, under-sampling has been shown to cause aggregation of effects due to insufficient temporal resolution [21] . Some tools are already available to facilitate sample size calculation for RNA-seq data [22,23] . These methods calculate a sample size of 20 to 79 or between 8 and 40 in order to detect differential expression (for the detection of a log fold change of 2 and power of 80%). However, such sample numbers are not feasible for many experiments, and most of these approaches do not consider multi-factor experiments. Recent estimations of power and sample size for RNA-seq have been performed on different datasets. This work revealed that, under a $10,000 budget constraint, 10 replicates already yield maximum predictive power, a number of replicates that could nevertheless still be too high for static and especially time course experiments [24] . Moreover, choosing a feasible method to analyze the data depends on the experimental setup. 
This includes whether it is a long or short (< 5 TP) time course experiment, whether the time course was sampled uniformly, and how many replicates are needed for reliable and robust final statistical evaluation. Depending on the system investigated, it might also be necessary to synchronize the data in order to accomplish a uniform starting point to exclude phase (e.g. cell cycle, development, circadian rhythm) or patient specific (e.g. age or diseases) differences and therefore improve normalization and DEG analysis. So far no gold standard method is established for RNA-seq data analysis, though for some specific applications guidelines have been recently published [25] . Sequencing depth usually does not pose a problem (unless rare or novel transcripts have to be detected, which requires 100–200 × coverage for human or mouse genomes). A protocol of 100 bp paired end library preparation coupled with a minimum of three replicates should be established as minimum requirement for powerful statistics of DEG analysis [26] . When having to make a trade-off between sequencing depth and biological samples, Liu and colleagues showed that adding more replicates increases the predictive power for detecting DEGs to a greater extent than sequencing depth [27] . The quality of the raw data is of importance for the subsequent bioinformatics analysis. Therefore, a good experimental design including a statistically relevant number of controls and replicates is essential for the quality control, mapping and normalization steps. Erroneous designs, including no replicates, will result in less powerful statistics and an increase of false positive candidates, and will cause unnecessary and enormous costs in downstream analysis and validation experiments. Possible attempts to improve data quality are mentioned in the discussion of this review. 2.1.2 Analysis Several methods/tools have been developed for microarrays (e.g. lumi [28] , affy [29] ) or static RNA-seq (e.g.
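The replicates-versus-depth trade-off reported by Liu and colleagues can be illustrated numerically. Under a negative binomial model, the variance of a gene's mean estimate across n replicates is (mu + alpha·mu²)/n: extra sequencing depth shrinks only the technical (Poisson) term mu, while extra replicates shrink the biological term alpha·mu² as well. The numbers below are purely illustrative, not taken from the cited studies:

```python
def var_of_mean(mu, alpha, n_reps, depth_factor=1.0):
    """Variance of the mean expression estimate over n_reps NB replicates.
    Extra depth scales counts up but, after rescaling back to the original
    scale, only the shot-noise (Poisson) term mu is reduced."""
    technical = mu / depth_factor      # Poisson part, shrinks with depth
    biological = alpha * mu ** 2       # dispersion part, depth-independent
    return (technical + biological) / n_reps

baseline     = var_of_mean(mu=100, alpha=0.1, n_reps=3)                  # ~366.7
double_depth = var_of_mean(mu=100, alpha=0.1, n_reps=3, depth_factor=2)  # 350.0
double_reps  = var_of_mean(mu=100, alpha=0.1, n_reps=6)                  # ~183.3
```

Doubling depth barely helps once biological dispersion dominates, whereas doubling replicates halves the total variance.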
edgeR [30] or DESeq2 [31] ) analysis. The most recent tools are able to solve the problems of differences in sequencing depth (library size), outliers and batch effects introduced by library preparation protocols, sequencing platform and technical variability between sequencing runs [32] . Even if some tools developed for static experiments can be used for TC data, one major issue is that they do not consider correlations of genes between previous and subsequent TPs. Indeed, random patterns, overall time trends in expression or time shifts are therefore not taken into account for normalization, noise correction and differential expression steps. For example, a drug treatment could induce a slower metabolism of a cell population, resulting in a delay or change in the establishment of gene expression patterns. Such delay effects can be recognized only when using all TP data for analysis. 2.2 Differential Gene Expression Methods for Static RNA-seq Data Analysis Most established methods for DEG analysis are parametric, using count-based input, and apply their own normalization approaches to raw data. The majority of parametric methods apply a negative binomial model to the read counts in order to account not only for the technical variance but also to address the biological variance. Previously, Poisson distributions [11] were used to correct for the technical variance. However, this one-parameter distribution cannot describe the biological variance, which is higher than the calculated mean expression, making the Poisson distribution unsuitable. Therefore, a negative binomial distribution is used, whose additional dispersion parameter provides the flexibility to account for biological variance and appropriately identify DEGs [31,33,34] . Several non-parametric methods like NOISeq [35] , or more recently NPEBseq [36] and LFCseq [37] , offer an alternative way to normalize and model expression data that do not fit negative binomial or Poisson distributions. 
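The mean-variance argument can be made concrete with a short simulation (a generic sketch, not code from any of the cited tools): for a negative binomial with mean mu and dispersion alpha, the variance mu + alpha·mu² greatly exceeds the Poisson prediction of variance ≈ mean.

```python
import numpy as np

def nb_params(mu, alpha):
    """Convert a mean/dispersion pair to numpy's (n, p) parameterization.
    Var = mu + alpha * mu^2, with n = 1/alpha and p = n / (n + mu)."""
    n = 1.0 / alpha
    return n, n / (n + mu)

rng = np.random.default_rng(0)
mu, alpha = 100.0, 0.5
n, p = nb_params(mu, alpha)
counts = rng.negative_binomial(n, p, size=2000)
# A Poisson model would predict var ≈ mean (≈ 100 here);
# biological variance inflates it toward mu + alpha*mu^2 = 5100.
```

This overdispersion is exactly what the extra dispersion parameter of the negative binomial absorbs.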
Nevertheless, these methods are usually more computationally demanding and need a higher number of replicates to perform equally well [26,38] . Major methods perform equally well in normalizing the data [39] , but show significant differences in the number of DEGs identified, in accuracy and in power. In this review, we will not discuss each method in detail and we will not make a statement regarding which method to use. These methods were designed for a specific context and might be more appropriate for certain experiments. In conclusion, there is no overall best method for all types of analysis. However, we would like to emphasize the importance of considering the following aspects when choosing a method for analyzing the data to meet the experimental design: - How many replicates are needed for this method? - Is a simple two-way comparison sufficient or is a more complex multi-factor model needed for DEG analysis? - Is it desirable to detect differentially expressed RNA isoforms as well? 2.3 Differential Gene Expression Methods for TC RNA-seq Data Analysis Time Series experiments have been extensively conducted in the past using microarrays, providing algorithms such as spline fitting [40,41] , Bayes statistics [42,43] or Gaussian processes [44,45] to account for temporal aspects of DEG. Moreover, algorithms detecting periodic patterns have also been developed (e.g. Lomb–Scargle periodograms [46] ). Most of them have been implemented into pipelines such as STEM [46] , maSigPro [47] , BETR [48] , TIALA [49] and platforms for researchers like PESTS [50] . To date, there are only five tools available to implement RNA-seq TC data for DEG analysis that we would like to describe in more detail ( Table 1 ; of note, more detailed explanations of standard statistical models and tests can be found in textbooks [51,52] , and detailed information about new approaches is described in the corresponding literature cited). 
Next maSigPro [53] is an updated version of maSigPro, an R package on Bioconductor ( http://www.bioconductor.org ) initially developed for microarray TC experiments. The updated version allows the analysis of RNA-seq TC data as well. It uses generalized linear models instead of a linear model in order to allow the modeling of count data. This is achieved by fitting to a negative binomial distribution followed by a polynomial regression. In order to be detected as DEG, the difference of Log Likelihood Ratio of the hypotheses has to be greater than a user defined significance threshold. This ensures a best-fit model for each gene by only keeping significant coefficients. Though Next maSigPro does not offer built-in normalization methods, the package is equipped with functions for clustering and visualization of processed data. In a comparison with the edgeR package, Next maSigPro controls the False Discovery Rate (FDR) better. Candidates identified by both approaches or solely by Next maSigPro have highly significant and well-fitted models, while the majority of the candidates selected only by edgeR do not pass the second significance threshold step. The small number of DEGs not pre-selected by Next maSigPro have high variance as well as small fold changes. A first drawback of the pipeline is that the threshold for DEG detection is not set automatically according to the data but is user defined, making it more challenging to indirectly determine an FDR. Furthermore, the user has to define the number of clusters, whereas it would be better if the number of clusters were determined based on the actual data. Finally, replicates are not merged with error bars in the output graph but data points are plotted one after the other. DyNB [54] uses a negative binomial likelihood to model count data, taking the temporal correlation of genes into account. 
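The core idea behind Next maSigPro, fitting a polynomial in time per gene and keeping genes whose time terms are significant, can be sketched as follows. This is a simplified Gaussian stand-in (ordinary least squares on log-expression with an F-test) rather than the package's negative binomial GLM with a log-likelihood ratio test; the function name and data are our own:

```python
import numpy as np
from scipy import stats

def time_course_test(time, log_expr, degree=2):
    """F-test of a degree-`degree` polynomial time model against a flat
    intercept-only model for one gene's log-expression profile."""
    X = np.vander(time, degree + 1)                     # columns t^2, t, 1
    beta, *_ = np.linalg.lstsq(X, log_expr, rcond=None)
    rss_full = float(np.sum((log_expr - X @ beta) ** 2))
    rss_null = float(np.sum((log_expr - log_expr.mean()) ** 2))
    df1, df2 = degree, len(time) - (degree + 1)
    F = ((rss_null - rss_full) / df1) / (rss_full / df2)
    return F, stats.f.sf(F, df1, df2)

rng = np.random.default_rng(1)
time = np.repeat([0.0, 1.0, 2.0, 3.0, 4.0], 3)          # 5 TPs, 3 replicates
rising = 2.0 * time + rng.normal(0.0, 0.2, time.size)   # a clearly dynamic gene
F, p = time_course_test(time, rising)                   # tiny p-value expected
```

A gene with a flat profile yields an F near 1 and a large p-value, so thresholding p (after multiple-testing correction) selects the dynamic genes.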
It also corrects for time shifts between replicates and time series via Gaussian processes, introducing time-scaling factors. Normalization is performed by variance estimation and rescaling of counts similar to DESeq [55] , but on the previously calculated Gaussian process function rather than directly on the samples. In the next step DyNB uses a Markov-Chain-Monte-Carlo (MCMC) sampling algorithm for marginal likelihoods that enables the DEG analysis. A comparison of the DyNB and DESeq candidates showed that DyNB outperforms DESeq for the detection of weakly expressed or high noise level genes as well as genes affected by variable differentiation efficiency. A drawback is the implementation in MATLAB® (The MathWorks Inc.), thereby making it less accessible to a broad range of users. Additional drawbacks are: long running times due to MCMC sampling; genes not expressed in one condition are removed; the test output is a Bayes factor calculated by the ratios of hypothesis probabilities, which is less intuitive than the more common p-value. Finally, according to Jeffreys et al. [56] , a Bayes Factor value higher than 10 refers to strong evidence of differential expression, though this threshold might not hold true for all types of datasets and users will have to adapt filtering to identify their candidates of interest. TRAP [57] is a method that aims to identify and analyze differentially activated biological pathways. In a first step, reads are mapped to a reference genome by the Tophat [58] software and further processed to estimate the expression by Cufflinks [59] . In the second step, the DEG analysis is performed by the Cuffdiff software [60] , generating an FPKM (“fragments per kilobase of transcript per million fragments mapped”) output file for each sample. The novelty is the downstream analysis, by directing DEG candidates from the Tophat/Cufflinks/Cuffdiff pipeline into a KEGG analysis [61,62] . 
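The DESeq-style rescaling mentioned above, median-of-ratios normalization, is simple to sketch. This is our own minimal reimplementation of the idea, not DyNB's or DESeq's actual code:

```python
import numpy as np

def size_factors(counts):
    """DESeq-style median-of-ratios size factors for a genes x samples matrix:
    the reference is the per-gene geometric mean, and each sample's factor is
    the median of its ratios to that reference."""
    counts = np.asarray(counts, dtype=float)
    log_ref = np.mean(np.log(counts), axis=1)          # per-gene log geometric mean
    usable = np.isfinite(log_ref)                      # drop genes with a zero count
    log_ratios = np.log(counts[usable]) - log_ref[usable, None]
    return np.exp(np.median(log_ratios, axis=0))

# Sample 2 was sequenced twice as deeply as sample 1:
counts = np.array([[10.0, 20.0],
                   [100.0, 200.0],
                   [50.0, 100.0]])
sf = size_factors(counts)   # dividing each column by its factor equalizes the samples
```

Because the median of ratios is used, a handful of genuinely differentially expressed genes does not distort the estimated library-size factors.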
This approach offers three options: One Time Point pathway analysis, Time Series pathway analysis or Time Series clustering. The one time point analysis identifies significant pathways for each time point separately, whereas the Time Series pathway analysis takes all TPs into account. For pathway analysis two methods are performed and their p-values combined: Over-representation Analysis (ORA) using the Gene Ontology (GO) [63] database and a Signaling Pathway Impact Analysis (SPIA) [63] . Briefly, ORA identifies significant pathways by hyper-geometric tests that compare the ratios of DEGs to the complete number of genes on a total and pathway level. SPIA takes the effect of other genes in a pathway into account. This is achieved by calculating a perturbation factor of fold change of upstream genes divided by the fold change of downstream genes. Additionally, it introduces a time-lag factor for Time Series analysis. For Time Series Clustering, each gene is assigned a label at each time point, depending on whether the log-fold change of FPKM is either positively/negatively above a threshold or otherwise categorized as constant. Clusters are generated by grouping genes with the same label and further analyzed by ORA using ratios of pathway genes to total genes and all genes in the cluster. Users can directly start the downstream analysis by providing Cufflinks/Cuffdiff data, avoiding the time-demanding preprocessing steps. The main pipeline performs a pairwise comparison of TPs. Of note, it does not make use of the time series parameter of Cuffdiff, but only takes the temporal character into account in later analysis. For the analysis itself, a possible complication is the conversion of gene name identifiers to match the ones used in the pathway files. Moreover, only the first of possibly several gene name identifiers for a given pathway is used to find matches among candidates. 
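The hyper-geometric test behind ORA can be written in a few lines (a generic sketch of the statistic, not TRAP's implementation):

```python
from scipy.stats import hypergeom

def ora_pvalue(n_total, n_pathway, n_deg, n_overlap):
    """Probability of observing at least n_overlap pathway genes among n_deg
    DEGs drawn from n_total genes, n_pathway of which belong to the pathway."""
    # sf(k-1) gives the upper-tail P(X >= n_overlap)
    return hypergeom.sf(n_overlap - 1, n_total, n_pathway, n_deg)

# 10 of 200 DEGs hit a 100-gene pathway; only ~2 would be expected by chance
# among 10,000 annotated genes, so the pathway appears enriched.
p_enriched = ora_pvalue(10_000, 100, 200, 10)
```

In practice these p-values are computed per pathway and corrected for multiple testing before significant pathways are reported.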
In our opinion, the major drawback of the pipeline, similar to DyNB, is that the genes that are not expressed in one condition are excluded from further analysis. This is due to an infinite log fold change ratio caused by non-expressed genes, which are assigned zero as expression level. SMARTS [64] is designed to create dynamic regulatory networks based on time series data from multiple samples by iteratively creating models, extending the DREM method [65] . First, samples are synchronized to a common biological time scale by pairwise alignment followed by sampling of points. This allows a continuous representation, correction of alignment parameters and a computation of an error metric in order to create a weighted alignment. A second alignment error is calculated between samples to create a matrix for an initial clustering by spectral clustering (for two clusters) or affinity propagation (for more than two clusters). Clustering is calculated on the basis of all genes and contains noise. SMARTS takes advantage of the fact that a certain condition affects only a small number of genes, which are regulated by an even smaller number of transcription factors (TFs) and upstream pathways. This, in turn, reduces the dimensionality of the data. The clustering is the basis for a first regulatory model that is iteratively adapted to create a final clustering of groups that are co-expressed and regulated throughout the time-series. To iteratively improve the regulatory models, static protein–DNA interaction data, such as DNA-binding motifs or ChIP-seq data, is used to define the path of each gene by modeling the transition between time points applying an Input–Output Hidden Markov Model framework. The regulatory model converges into a final clustering that identifies split time points where a subset of genes that have previously been co-expressed diverge into another path. 
The resulting graph offers a view of gene sets and their path throughout the timeline, illustrating the differences in TFs at splits that are most likely responsible for the differences in expression and regulation of subsequent time points. In our opinion, the only drawback of this tool is the requirement of prior knowledge of TF binding to genes of interest used as input to the pipeline. EBSeq-HMM [66] is an extension of the EBSeq package [67] accounting for ordered data (e.g. time, space, gradients) by applying an auto-regressive Hidden Markov Model (HMM). EBSeq-HMM identifies dynamic processes (genes that are neither unchanged nor sporadically expressed) and classifies genes according to their state (up/down/unchanged) into expression paths, taking dependencies on prior time points into account. The analysis is based on two steps: first, the conditional distribution of data at each time point, followed by the transition of states over time. Parameter estimation for the conditional distribution is performed using a beta-negative-binomial model. Second, an additional implementation to correct for the uncertainty of read counts of genes with several isoforms is offered. Subsequently, a state for each gene at each time point is determined applying a Markov-switching auto-regressive model to account for the dependence on the expression and state at the previous time point. Finally, all the states of a gene are combined and classified into an expression path. The developers also tested EBSeq-HMM together with existing static methods and Next maSigPro on simulated and case study data. On the simulated data EBSeq-HMM performed with greater power and F1 scores (a score to assess a test's accuracy) but had a higher false discovery rate (FDR) of 4.5%, compared to a maximum of 0.5% for the other methods. On clinical data, EBSeq-HMM had a 90% overlap of identified genes with other methods and outperformed these on genes with subtle and consistent changes over time. 
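EBSeq-HMM's notion of an "expression path" can be illustrated with a toy labeling function. This is a hypothetical, threshold-based stand-in: the actual package determines states via its beta-negative-binomial model and auto-regressive HMM, not a fixed log-fold-change cutoff:

```python
import math

def expression_path(means, log2fc_cutoff=0.5):
    """Label each transition between consecutive time points Up/Down/Constant
    by the log2 fold change of the (pseudo-counted) mean expression, and join
    the labels into a single expression path string."""
    labels = []
    for a, b in zip(means, means[1:]):
        lfc = math.log2((b + 1.0) / (a + 1.0))   # +1 pseudo-count avoids log(0)
        if lfc > log2fc_cutoff:
            labels.append("Up")
        elif lfc < -log2fc_cutoff:
            labels.append("Down")
        else:
            labels.append("Constant")
    return "-".join(labels)

# A gene induced early and silenced late:
path = expression_path([10, 40, 40, 5])   # "Up-Constant-Down"
```

Grouping genes by identical path strings then yields the path-based clusters the text describes.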
However, the authors did not make any statement about the genes that EBSeq-HMM was not able to identify. When using EBSeq-HMM, the user has to keep in mind that its purpose is to identify dynamic genes; in theory it also identifies constant genes and clusters them accordingly. Practically, in order to be constant, the previous and following TP have to have the exact same mean expression value, with the result that most genes will be classified as up- or down-regulated at affected TPs, hiding possible non-DEG time intervals of genes. 2.4 Downstream Analysis DEG analysis may result in hundreds of putative candidates, if not more, a number that cannot be experimentally validated. Therefore, scientists tried to reduce the number of candidates by searching for expression patterns and shared pathways to narrow down essential candidates. This field has been extensively researched and improved over the last two decades, offering a great abundance of tools, leading to new scientific questions and simplifying their validation. 2.4.1 Clustering Methods The purpose of clustering is to statistically group samples according to a certain trait, e.g. for gene expression, to reduce complexity and dimensionality of the data, predict function or identify shared regulatory mechanisms. Depending on the data structure, a fitting clustering method has to be used to account for the specific data (reviewed in [68] ). Considerations should include: - Was the data transformed or does it consist of read counts? - How is it distributed? - Is the data originating from static, short or long TC experiments? A plethora of clustering methods have been published, many of them available as R packages on the CRAN Task View page ( http://cran.r-project.org/web/views/Cluster.html ), the Bioconductor website ( http://www.bioconductor.org ) or in other scripting/programming languages made available on the publishers' web sites. However, we cannot discuss the whole spectrum of these methods. 
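As a concrete example of shape-based clustering for TC data (a generic sketch, not code from any of the cited packages): z-scoring each gene's trajectory before clustering groups genes by expression pattern rather than absolute level, which is usually what a TC analysis is after.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_trajectories(expr, n_clusters=2):
    """Z-score each gene's time course (so shape, not magnitude, matters),
    then cut an average-linkage, correlation-distance dendrogram into
    n_clusters groups. `expr` is a genes x time-points matrix."""
    z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
    tree = linkage(z, method="average", metric="correlation")
    return fcluster(tree, t=n_clusters, criterion="maxclust")

base_up = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
expr = np.vstack([base_up, 10 * base_up, 100 * base_up,                   # rising genes
                  base_up[::-1], 10 * base_up[::-1], 100 * base_up[::-1]])  # falling genes
labels = cluster_trajectories(expr)
```

Despite 100-fold differences in expression level, the rising genes end up in one cluster and the falling genes in the other.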
Therefore, we would like to point out certain methods which are specific to TC experiments, employed for microarray [69–71] and RNA-seq data [72,73], and refer to the aforementioned reviews for the selection of a fitting method. 2.4.2 Functional Enrichment Analysis and Network Construction To gain new insights into complex data, one of the most commonly used methods is functional enrichment analysis (FEA). FEA identifies candidates sharing a biological function or pathway by statistical over-representation using annotated databases such as Gene Ontology [63] or KEGG [61,62] , and can easily be performed using freely available web interfaces or R packages such as DAVID [74] , WebGestalt [75] , PANTHER [76] or FGNet [77] . Several commercial software packages also exist, such as Ingenuity [78] or Metacore [79] . Other options are the investigation of direct and indirect protein–protein interactions via the STRING database [80] or via Cytoscape applications [81] . Detailed descriptions, comparisons and overviews of FEA tools can be found in recently published reviews [82–84] . 2.5 Discussion In the last few years, many algorithms were developed to increase the quality and methodology of existing approaches. A usual procedure is to extend, adapt or update an existing established method. For example, edgeR was updated with support for multifactor experiments [85] and an observation-weights factor [34] to account for outliers more robustly. Combining existing methods and new strategies could offer a great improvement in the quality of analysis, in static as well as in TC experiments. Here, we present novel advancements in the field that might offer improvements to existing methods and pipelines. Major issues at the level of mapping and quantification of reads are ambiguous reads (overlapping genes), multi-alignment reads (repeats) and exon-junction reads, which are usually discarded at the counting step.
Recent approaches such as GIIRA [86] , ORMAN [87] and Rcounts [88] account for multi-mapping reads by introducing a maximum-flow optimization, a minimum-weighted set cover over partial transcripts, and weighted alignment scores, respectively. These recent improvements allow a better quantification of genes and isoforms, as well as the investigation of repeat elements, which was until now not very feasible. On the isoform level, WemIQ [89] applies a weighted-log-likelihood expectation maximization to each gene region separately to improve the quantification of isoforms and gene expression. Samples that differ greatly in read counts (extremely high counts) create a bias at the normalization step due to the adjustment to a common scale calculated over all samples. This problem is addressed by the RAIDA algorithm [90] , which relies on differences in abundance levels rather than modifying the read counts for normalization. Further studies of the SEQC/MAQC-III Consortium elucidated the negative influence of lowly expressed genes on DEG detection [19,91,92] . Therefore, filtering out genes with low expression might offer another possibility to increase predictive power. Another problematic aspect of the analysis arises when working with small sample sizes (fewer than four replicates per TP). In such cases, for RNA-seq experiments, the calculation of the dispersion factor of negative binomial methods is less accurate. Therefore, a new shrinkage estimation [93] has been introduced in order to analyze data with few replicates (four or fewer), which was incorporated into the new tool sSeq [33] . Moreover, resampling of at least three biological replicates per time point was shown to improve the identification of oscillating genes without increasing false-positive rates [94] . Recently, a new adapted exact test has been developed to increase the power to detect DEGs in experimental designs containing only two replicates.
This R package is also able to identify differentially expressed genes that are not abundant [95] . As there is no best-fitting method for DEG analysis so far, we recommend using several tools and comparing and combining the results in order to obtain confident candidates. To increase precision and sensitivity and to reduce the detection of false-positive candidates, a combination of statistical tests should be applied. The PANDORA algorithm [96] combines p-values using one of six possible methods, which have been weighted based on the performance of each statistical test. On the other hand, multiple testing and the combination of results involve an increase in the time and resources needed to run the analysis, which might outweigh the gain in statistical power. In the beginning of multi-Omics analysis, RNA-seq data was used to improve the results of other approaches when the initial method reached its limits. With further advancement and availability of technologies, scientists started to combine several Omics data types to ask new scientific questions and to add additional layers of information to their data. Further, a great increase in and expansion of databases such as ENCODE [97] , the Cancer Genome Atlas ( http://cancergenome.nih.gov ), GEO [98] and KEGG [61,62] , and of analysis platforms, has also facilitated access to multi-Omics analysis. Nevertheless, the integration of several Omics datasets still harbors several challenges, such as quality assurance, data/dimension reduction and the clustering/classification of combined data sets [99] , which have to be properly addressed and taken into account when designing experiments and performing analyses. In the following paragraph we would like to highlight methods that combine static or TC RNA-seq experiments with other Omics data. These tools can be categorized according to whether they are multi-staged or meta-dimensional approaches, performing different Omics analyses sequentially or combining several data types into a single analysis [99,100] .
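The p-value combination strategy mentioned above in connection with PANDORA can be sketched with the classical, unweighted Fisher's method (PANDORA itself uses weighted variants of several schemes; the function below is only an illustrative stand-in). For k independent p-values, the statistic -2 Σ ln p follows a chi-square distribution with 2k degrees of freedom, whose survival function has a closed form for even degrees of freedom:

```python
import math

# Unweighted Fisher's method for combining independent p-values.
# Survival function of chi-square with 2k df: exp(-x/2) * sum_{i<k} (x/2)^i / i!
def fisher_combine(pvalues):
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    k = len(pvalues)
    term, sf = 1.0, 0.0
    for i in range(k):
        sf += term                      # accumulate (x/2)^i / i!
        term *= (stat / 2.0) / (i + 1)
    return math.exp(-stat / 2.0) * sf   # combined p-value

# Three moderately significant tests combine to a much smaller p-value:
print(round(fisher_combine([0.01, 0.03, 0.20]), 4))
```

A single p-value is returned unchanged (combining one test is a no-op), which is a handy sanity check for any combination scheme.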
In the past decade, great efforts were undertaken to develop and improve tools combining microarrays and ChIP-seq data (e.g. ChIP-Array [101] and EMBER [102] for static experiments, and [103,104] for TC experiments). To date, there are several multi-stage tools available to analyze RNA-seq and ChIP-seq, e.g. INsPeCT [105] and metaseq [106] , but only a few integrated meta-dimensional approaches, e.g. Beta [107] , CMGRN [108] and Ismara [109] . Nevertheless, none of the mentioned methods offers specific TC algorithms for analysis; most tools aim to identify targets of transcription factors (TFs) and create gene regulatory networks (GRNs), whereas others use methylation or histone modification data to predict regulatory functions [110] . Different approaches and tools for the integration of other Omics data have been extensively reviewed for proteomics [111] , metabolomics [112] and phenotypic data [113] . Indeed, re-analyzing externally obtained data using the same pipelines used for in-house produced data sets is the best approach to guarantee comparable results. In general, more powerful algorithms, which so far have not been implemented due to technical infeasibility, are becoming more and more available. Nevertheless, optimization through parallelization and cloud computing remains a major goal for the development of such new tools. As the amount of data produced in each experiment is massively increasing, improved pipelines and algorithms are in demand in order to supply users with a good trade-off between accuracy and the resources needed for their analysis. 3 Conclusion and Perspectives Recently, two approaches have emerged, namely co-expression analysis and single-cell RNA-seq, that are very promising for improving DEG analysis and offer new fields of application such as the study of subpopulations.
The assumption behind co-expression analysis is that genes in the same pathway very likely share regulatory mechanisms and therefore should have similar expression patterns. This allows the identification of biological entities that are involved in the same biological processes and has already been successfully applied to microarray data [114] . Moreover, microarray co-expression data has also been integrated with other data types such as microRNA [115] or phenotypic [116] data, and been used for differential co-expression to identify biomarkers [117] . It has further been shown that co-expression analysis is able to improve the sensitivity of RNA-seq DEG analysis [118] and, more recently, to outperform existing clustering approaches [119] . Similarities and differences of co-expression networks in microarrays and RNA-seq, as well as factors driving variance at each stage of co-expression analysis, have already been investigated [120] . However, no gold standard for RNA-seq co-expression analysis has been established. Single-cell RNA-seq, in contrast to population sequencing, enables access to the heterogeneity of gene expression in cells, which otherwise is averaged out or even lost for small subpopulations of cells in bulk experiments. This heterogeneity in expression arises due to differences in the kinetics of the response to a certain condition or treatment, or in the cell fate decisions of each cell. Single-cell RNA-seq allows studying the subpopulation of interest and investigating mechanisms explaining differences between subpopulations, which might offer advances in drug development, personalized medicine or the creation of differentiation networks. Improvements in protocols and sequencing have led to new methods at a rapid rate: STRT [121] , CEL-Seq [122] , Smart-seq [123] , Quartz-seq [124] and microfluidic platforms [125] , enabling scientists to ask new questions.
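The core computation behind the co-expression idea discussed above is simple: correlate the expression profiles of gene pairs and connect strongly correlated pairs in a network. Real pipelines use soft thresholds, topological overlap and module detection; the sketch below (with invented toy profiles) shows only the basic correlation step:

```python
import math

# Minimal co-expression sketch: connect gene pairs whose expression profiles
# have |Pearson correlation| above a hard cutoff. Toy data, not a real pipeline.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

profiles = {"geneA": [1, 2, 4, 8],   # geneB is exactly twice geneA (r = 1)
            "geneB": [2, 4, 8, 16],
            "geneC": [8, 4, 2, 1]}   # anti-correlated but below the cutoff
edges = [(g1, g2) for g1 in profiles for g2 in profiles
         if g1 < g2 and abs(pearson(profiles[g1], profiles[g2])) > 0.9]
print(edges)   # only the geneA-geneB pair survives the cutoff
```

The choice of cutoff (here a hard 0.9) is exactly the kind of decision for which, as noted above, no gold standard exists yet for RNA-seq data.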
Nevertheless, protocols and methods for single-cell sequencing are not yet completely optimized and still harbor uncertainties such as noise, sequencing and normalization biases, as well as a lack of proper tools for analysis. There is great effort to address these problems. It has recently been reported that explicit calculation of gene expression levels using External RNA Controls Consortium spike-in controls [126,127] improved normalization and noise reduction [128] . Finally, to date the lack of validated genome-wide data slows down the development of new algorithms, and models can only approximate the real extent of regulation or networks [129] . There are tools to simulate expression data incorporating noise, such as SimSeq [130] , but this noise estimation does not completely capture a biological situation and again is just an approximation of the whole picture. As more and more genome-wide experiments are conducted, networks created and candidates validated, the data from several sources could be compiled into a database offering frameworks for model validation. To conclude, in the last decades a plethora of new models, systems and networks were created, with the caveat of over-generalization of results in order to fit hypotheses and models. By combining high-throughput data, scientists are now able to correct for this over-generalization by filling gaps with complementary data, allowing fine-tuning and dissection of existing models and networks as well as the emergence of new intuitive, integrative and explorative tools. Further, the integration of several kinds of Omics data remains the biggest challenge [131] , as we have to understand the limitations of each technique before conducting a joint analysis [111] and to develop tools tailored to the specific data types and underlying genomic models for powerful integrative analysis [99] . Acknowledgments We would like to thank Tobias A. Beyer and Jian Yu for discussion and helpful comments on the manuscript.
This work was supported by a core grant from ETH-Z ( PP12/BIOL.160 ) (supported by Roche). D.S. is supported by a PhD fellowship from the ETH-Z foundation ( ETH-05 14-2 ). | [
"LIANG",
"SCHENA",
"VELCULESCU",
"BRENNER",
"SHIRAKI",
"ISHKANIAN",
"NAGALAKSHMI",
"VAN DIJK",
"WANG",
"ROY",
"MARIONI",
"BLOW",
"WILHELM",
"CUI",
"ZHAO",
"BAR-JOSEPH",
"OH",
"OH",
"SU",
"BUERMANS",
"BAY",
"LI",
"HART",
"CHING",
"GARGIS",
"SONESON",
"LIU",
"DU",
... |
3ab7b3a4cec840bcab84683a129280a1_Deep learning analysis of the inverse seesaw in a 3-3-1 model at the LHC_10.1016_j.physletb.2020.135931.xml | Deep learning analysis of the inverse seesaw in a 3-3-1 model at the LHC | [
"Cogollo, D.",
"Freitas, F.F.",
"de S. Pires, C.A.",
"Oviedo-Torres, Yohan M.",
"Vasconcelos, P."
] | Inverse seesaw is a genuine TeV-scale seesaw mechanism. In it, active neutrinos with masses at the eV scale require lepton number to be explicitly violated at the keV scale and the existence of new physics, in the form of heavy neutrinos, at the TeV scale. It is therefore a phenomenologically viable seesaw mechanism, since its signature may be probed at the LHC. Moreover, it is successfully embedded into gauge extensions of the standard model such as the 3-3-1 model with right-handed neutrinos. In this work we revisit the implementation of this mechanism in the 3-3-1 model and employ deep learning analysis to probe such a setting at the LHC. As our main result, if its signature is not detected in the next LHC run at an energy of 14 TeV, then the vector boson Z′ of the 3-3-1 model must be heavier than 4 TeV. | 1 Introduction Seesaw mechanisms [1–4] are seen as the simplest proposals to solve the long-standing problem of the smallness of the neutrino masses. Recently, researchers have focused their investigations on phenomenologically viable seesaw mechanisms, such as the inverse seesaw [4] , since their signatures may be probed at the LHC [5] . The distinguishing aspect of the inverse seesaw (ISS) mechanism is that it is a genuine TeV-scale seesaw mechanism. According to the original idea [4] , its implementation requires the addition of six new neutrinos (N_{iR}, S_{iL} with i = 1, 2, 3) to the standard model particle content, composing the following bilinear terms [6] : (1) \mathcal{L} = -\bar{\nu}_L m_D N_R - \bar{S}_L M N_R - \tfrac{1}{2} \bar{S}_L \mu S_L^C + \text{H.c.}, where m_D, M and \mu are generic complex mass matrices. These terms can be arranged in the following 9×9 neutrino mass matrix in the basis (\nu_L, N_L^C, S_L): (2) M_\nu = \begin{pmatrix} 0 & m_D^T & 0 \\ m_D & 0 & M^T \\ 0 & M & \mu \end{pmatrix}. Considering the hierarchy \mu \ll m_D \ll M, the diagonalization of this 9×9 mass matrix provides the following effective mass matrix for the standard neutrinos: (3) m_\nu = m_D^T (M^T)^{-1} \mu M^{-1} m_D. The double suppression by the mass scale connected with M makes it possible for that scale to be much lower than the one involved in the canonical seesaw mechanism [1–3] . Standard neutrinos with masses at the sub-eV scale are obtained for m_D at the electroweak scale, M at the TeV scale and \mu at the keV scale. In this case all six new neutrinos may develop masses around the TeV scale or less, and their mixing with the standard neutrinos is modulated by the ratio m_D M^{-1}. The core of the ISS mechanism is that the smallness of the neutrino masses is guaranteed by assuming that the \mu scale is small; in order to bring the heavy neutrino masses down to the TeV scale, it has to lie at the keV scale.
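The double suppression in Eq. (3) can be checked with a back-of-the-envelope calculation of the scalar version of the relation, m_ν ~ μ (m_D/M)². The specific numbers below are generic illustrations of the scales quoted in the text, not a fit to data:

```python
# Order-of-magnitude check of the ISS scaling m_nu ~ mu * (m_D / M)^2
# (scalar version of Eq. (3)). Values are illustrative, not the model's fit.
mu_eV = 300.0     # lepton-number violation at the keV scale (0.3 keV)
m_D_GeV = 1.0     # Dirac mass at or below the electroweak scale
M_GeV = 200.0     # heavy-neutrino mass scale well below a TeV
m_nu_eV = mu_eV * (m_D_GeV / M_GeV) ** 2
print(m_nu_eV)    # sub-eV active neutrino mass from TeV-scale (or lighter) states
```

The suppression is quadratic in m_D/M but only linear in μ, which is why a keV-scale μ, rather than a heavier M, is what drives the active masses below an eV.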
In this regard, it was shown in [7] that the S U ( 3 ) C × S U ( 3 ) L × U ( 1 ) N model with right-handed neutrinos (331RHN) [8] has the main ingredients for realizing the ISS mechanism. However, a probe of the ISS mechanism in the 331RHN at the LHC is missing. The proposal of this work is to complete this job and probe the ISS in the 331RHN at the LHC. For this purpose we review the model and the mechanism, and employ deep learning to probe the signature of the mechanism at the LHC by means of the production of the new neutrinos and their detection in the form of leptons as final products. This work is organised as follows: in Sec. 2 we revisit the implementation of the ISS in the 331RHN and present the charged and neutral currents of interest for our analysis. In Sec. 3 we perform our analysis by applying deep learning techniques to probe both the ISS and the 331RHN. In Sec. 4 we present our conclusions. 2 Some essential points of the model and of the mechanism In order to implement the ISS mechanism in the 331RHN we have to add three left-handed neutral fermion singlets to the original leptonic content of the model: (4) L_{aL} = (\nu_a, l_a, \nu_a^C)_L^T \sim (1, 3, -1/3); (5) l_{aR} \sim (1, 1, -1), N_{aL} \sim (1, 1, 0), where a = 1, 2, 3 corresponds to the three families of leptons. For completeness, we present the quark content. As is well known, in the quark sector two families must transform as anti-triplets; this is so in order to cancel anomalies. Here we make the following choice: (6) Q_{iL} = (d_i, -u_i, d'_i)_L^T \sim (3, 3^*, 0), u_{iR} \sim (3, 1, 2/3), d_{iR} \sim (3, 1, -1/3), d'_{iR} \sim (3, 1, -1/3), with i = 1, 2, while the third family transforms as a triplet: (7) Q_{3L} = (u_3, d_3, T)_L^T \sim (3, 3, 1/3), u_{3R} \sim (3, 1, 2/3), d_{3R} \sim (3, 1, -1/3), T_R \sim (3, 1, 2/3).
The scalar sector keeps the original content: η = (η^0, η^-, η'^0)^T ∼ (1, 3, −1/3), ρ = (ρ^+, ρ^0, ρ'^+)^T ∼ (1, 3, 2/3), χ = (χ^0, χ^-, χ'^0)^T ∼ (1, 3, −1/3). The gauge sector is composed of the standard bosons W^±_μ, Z_μ and the photon A_μ, plus five new ones: U^0_μ, U^{0†}_μ, W'^±_μ and Z'_μ. This particle content allows the following Yukawa interactions: (8) −L_Y = f_{ij} Q̄_{iL} χ* d'_{jR} + f_{33} Q̄_{3L} χ T_R + g_{ia} Q̄_{iL} η* d_{aR} + h_{3a} Q̄_{3L} η u_{aR} + g_{3a} Q̄_{3L} ρ d_{aR} + h_{ia} Q̄_{iL} ρ* u_{aR} + y_a L̄_{aL} ρ e_{aR} − (1/2) G_{ab} ε_{lmn} \overline{(L_{aL})^c_l} ρ*_m (L_{bL})_n + G'_{ab} L̄_{aL} χ (N_{bL})^C + (1/2) \overline{(N_L)^C} μ N_L + H.c., where a, b = 1, 2, 3, i, j = 1, 2 and l, m, n = 1, 2, 3. For the sake of simplicity, we consider the charged leptons in a diagonal basis. Observe that the last line of this Lagrangian contains the terms that trigger the ISS mechanism. As usual, we assume that only η^0, ρ^0 and χ'^0 develop vacuum expectation values (VEVs) other than zero, and we consider the following expansions around the VEVs: (9) η^0, ρ^0, χ'^0 → (1/√2)(v_{η,ρ,χ'} + R_{η,ρ,χ'} + i I_{η,ρ,χ'}). With this set of VEVs, the last line of the Yukawa Lagrangian provides the following mass terms for the neutrinos: (10) L_ν^{mass} = ν̄_R m_D ν_L + ν̄_R M N_L + (1/2) \overline{(N_L)^c} μ N_L + H.c., where the 3×3 matrices are defined as (11) M_{ab} = G'_{ab} v_{χ'}/√2 and (12) m_{D,ab} = G_{ab} v_ρ/√2, with M and m_D being Dirac mass matrices, the latter antisymmetric. In the basis S_L = (ν_L, (ν^C)_L, N_L) we can write L_ν^{mass} in the form (13) L_ν^{mass} = (1/2) \overline{(S_L)^c} M_ν S_L + H.c., with the mass matrix M_ν having the texture (14) M_ν = \begin{pmatrix} 0 & m_D^T & 0 \\ m_D & 0 & M^T \\ 0 & M & μ \end{pmatrix}. This is the mass matrix that characterizes the ISS mechanism. The hierarchy M ≫ m_D ≫ μ provides a seesaw relation for the masses of the standard neutrinos.
In order to see this, it is useful to define the matrices (15) M_D^{6×3} = \begin{pmatrix} m_D^{3×3} \\ 0^{3×3} \end{pmatrix}, M_R^{6×6} = \begin{pmatrix} 0^{3×3} & M^{T} \\ M & μ \end{pmatrix}, so that we have the block matrix (16) M_ν^{9×9} = \begin{pmatrix} 0^{3×3} & (M_D^T)^{3×6} \\ M_D^{6×3} & M_R^{6×6} \end{pmatrix}, where M_R is supposed invertible. This last matrix can be block diagonalized. For this purpose let us define the matrix W such that (17) W ≃ ( 1 − (1/2)(M_D)†[M_R(M_R)†]^{−1}M_D , (M_D)†[(M_R)†]^{−1} ; −(M_R)^{−1}M_D , 1 − (1/2)(M_R)^{−1}M_D(M_D)†[(M_R)†]^{−1} ), which gives (18) W^T M_ν^{9×9} W = ( m_{light}^{3×3} , 0^{3×6} ; 0^{6×3} , m_{heavy}^{6×6} ), where m_{light} = −M_D^T M_R^{−1} M_D and m_{heavy} = M_R. When we plug M_D and M_R^{−1} into m_{light} we obtain the canonical inverse seesaw mass expression for the standard neutrinos: (19) m_{light} = m_D^T (M)^{−1} μ (M^T)^{−1} m_D. Observe that the matrix in Eq. (18) is not diagonal; it is a block diagonal matrix. The diagonalization of the mass matrix in Eq. (16) is done through the unitary matrix V = W U, with V^T M_ν^{9×9} V = m_{diag} and U defined as (20) U = ( U_{PMNS} , 0 ; 0 , U_R ), where U_{PMNS} is the PMNS matrix that diagonalizes m_{light}, U_R diagonalizes m_{heavy}, and m_{diag} is the diagonal mass matrix with nine eigenvalues. The explicit form of V is (21) V ≃ ( [1 − (1/2)(M_D)†[M_R(M_R)†]^{−1}M_D] U_{PMNS} , (M_D)†[(M_R)†]^{−1} U_R ; −(M_R)^{−1}M_D U_{PMNS} , [1 − (1/2)(M_R)^{−1}M_D(M_D)†[(M_R)†]^{−1}] U_R ). At the end of the day we have (22) U^T W^T M_ν W U = ( m_ν , 0 ; 0 , m_R ), with m_ν = diag(m_1, m_2, m_3) and m_R = diag(m_4, . . . , m_9). The matrix V connects the flavor basis S_L = (ν_L, (ν^C)_L, N_L)^T with the physical one, (ν_L, ζ_L)^T, which we call n_L = (n^0_{iL}, n^1_{kL})^T, with i = 1, 2, 3 for n^0_{iL} and n^1_{kL}. The relation between flavor and mass eigenstates, k = 1, 2, . . .
, 6, is given explicitly by S_L = V n_L: (23) ν_{aL} = {U_{PMNS} − (1/2)(M_D)†[M_R(M_R)†]^{−1} M_D U_{PMNS}}_{ai} n^0_{iL} + {(M_D)†[(M_R)†]^{−1} U_R}_{ak} n^1_{kL}; (24) ζ_{bL} = {[−(M_R)^{−1} M_D] U_{PMNS}}_{bi} n^0_{iL} + {U_R − (1/2)(M_R)^{−1} M_D (M_D)†[(M_R)†]^{−1} U_R}_{bk} n^1_{kL}. For simplicity, we will write the matrix V in the form (25) V = ( V_{νν} , V_{νN} ; V_{Nν} , V_{NN} ). Returning to m_{light}, on substituting m_D = G v_ρ/√2 and M = G' v_{χ'}/√2, we obtain (26) m_{light} = (G^T (G'^T)^{−1} μ (G')^{−1} G) v_ρ²/v_{χ'}². Remember that G is an antisymmetric matrix, implying that one eigenvalue of the neutrino mass matrix in Eq. (26) is null; the current status of neutrino physics indeed allows at least one of the three neutrinos to be massless. Solar, reactor, accelerator and atmospheric neutrino experiments have determined [9] : (27) Δm²_{21} ≃ 7.59 × 10⁻⁵ eV², Δm²_{31} ≃ 2.43 × 10⁻³ eV², sin²(2θ_{12}) ≃ 0.86, sin²(2θ_{23}) ≃ 0.92, sin²(2θ_{13}) ≃ 0.092. Returning to our model, the masses of the active neutrinos are obtained by diagonalizing m_{light} in Eq. (26) , which involves many free parameters in the form of the Yukawa couplings G and G'. With such a large set of free parameters, there is a great number of possible solutions that lead to the correct neutrino mass spectrum and mixing in Eq. (27) . However, due to the non-unitarity of the mixing matrix V_{νν}, any set of values for the entries in G and G' that does the job must obey the following constraints [10] : (28) |η| < ( 2.0×10⁻³ , 3.5×10⁻⁵ , 8.0×10⁻³ ; 3.5×10⁻⁵ , 8.0×10⁻⁴ , 5.1×10⁻³ ; 8.0×10⁻³ , 5.1×10⁻³ , 2.7×10⁻³ ), where η = (1/2)(M_D)†[M_R(M_R)†]^{−1} M_D. To simplify our job we consider v_η = v_ρ = v. Thus, the constraint v_η² + v_ρ² = (246 GeV)² implies v = 174 GeV. It is supposed that v_{χ'} lies around the TeV scale; here we assume v_{χ'} = 5 TeV.
We also consider μ = 0.3 keV × I, where I is the identity matrix. Regarding the Yukawa couplings G and G', we consider the scenario where G' is diagonal but non-degenerate. Moreover, for the sake of simplicity, we neglect CP phases and consider the charged leptons in a diagonal basis. All this considered, as an illustrative case we take (29) G' = diag(g'_{11}, g'_{22}, g'_{33}) ≃ diag(0.019, 0.07, 0.04) and (30) G = ( 0 , g_{12} , g_{13} ; −g_{12} , 0 , g_{23} ; −g_{13} , −g_{23} , 0 ) ≃ ( 0 , 4.26×10⁻³ , 4.97×10⁻³ ; −4.26×10⁻³ , 0 , 6.62×10⁻³ ; −4.97×10⁻³ , −6.62×10⁻³ , 0 ). With this set of values for G and G', and for the values of the VEVs v and v_{χ'} and of μ presented above, the diagonalization of the mass matrix m_{light} in Eq. (26) furnishes (31) m_1 ≃ 0, m_2 ≈ 8.7 × 10⁻³ eV, m_3 ≈ 4.8 × 10⁻² eV, with (32) U_{PMNS} ≃ ( 0.80 , 0.58 , 0.12 ; −0.48 , 0.52 , 0.70 ; 0.34 , −0.62 , 0.70 ). This U_{PMNS} implies the following mixing angles: θ_{12} = 36°, θ_{23} = 45° and θ_{13} = 7°, which recover the experimental values in Eq. (27) . Let us check whether the values for G and G' above are in accordance with the non-unitarity constraint [10] . Substituting this set of values of G and G' in η yields (33) η = ( 9.6×10⁻⁶ , 1.0×10⁻⁵ , 3.0×10⁻⁶ ; 1.0×10⁻⁵ , 4.3×10⁻⁵ , 3.4×10⁻⁵ ; 3.0×10⁻⁶ , 3.4×10⁻⁵ , 4.5×10⁻⁵ ), which respects the bounds in Eq. (28) . Regarding the six new neutrinos, diagonalizing m_{heavy} = M_R in Eq. (15) , our illustrative example yields (n^1_{1L}, n^1_{6L}) with masses ∼ 373.28 GeV, (n^1_{2L}, n^1_{5L}) with masses ∼ 220.84 GeV and (n^1_{3L}, n^1_{4L}) with masses around ∼ 96.32 GeV. The mass degeneracy of the sterile neutrinos is due to the small value of the lepton-number-violating parameter μ. We have thus developed the basic aspects of the implementation of the ISS mechanism within the 331RHN and presented an illustrative example that recovers the current experimental results on neutrino oscillation.
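The illustrative spectrum can be cross-checked numerically from Eq. (26). The sketch below uses the rounded published values of G, G', v, v_χ' and μ, so the eigenvalues reproduce the quoted masses only at the order-of-magnitude level; it also exploits the fact that, G being 3×3 antisymmetric, m_light is singular, so the two non-zero masses follow from the characteristic quadratic:

```python
import math

# Numerical sketch of Eq. (26) with the rounded illustrative couplings.
Gp = [0.019, 0.07, 0.04]                      # diagonal of G'
G = [[0.0, 4.26e-3, 4.97e-3],
     [-4.26e-3, 0.0, 6.62e-3],
     [-4.97e-3, -6.62e-3, 0.0]]               # antisymmetric G
mu_eV, v_rho, v_chi = 300.0, 174.0, 5000.0    # mu = 0.3 keV; VEVs in GeV

# A = G'^{-1} G (G' diagonal); m_light = mu * (A^T A) * v_rho^2 / v_chi'^2
A = [[G[i][j] / Gp[i] for j in range(3)] for i in range(3)]
scale = mu_eV * (v_rho / v_chi) ** 2
S = [[scale * sum(A[k][i] * A[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]

# det(m_light) = 0 => one massless state; remaining masses from the quadratic
# lambda^2 - tr(S) lambda + (sum of principal 2x2 minors) = 0.
tr = S[0][0] + S[1][1] + S[2][2]
minors = (S[0][0] * S[1][1] - S[0][1] ** 2 +
          S[0][0] * S[2][2] - S[0][2] ** 2 +
          S[1][1] * S[2][2] - S[1][2] ** 2)
disc = math.sqrt(tr * tr - 4.0 * minors)
m1, m2, m3 = 0.0, (tr - disc) / 2.0, (tr + disc) / 2.0
print(m1, m2, m3)   # masses in eV: one vanishing, two at the 10^-2 eV scale
```

Both non-zero masses land in the sub-eV range expected from the ISS suppression, confirming the scale of the quoted spectrum.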
Our wish now is to probe this scenario at the LHC. We do this by means of the production of pairs of heavy neutrinos, n^1_{iL}, and their subsequent detection in the form of leptons as main final products. The main contributions to the processes we study are those intermediated by the standard charged gauge boson W± and by the Z′. The neutral and charged currents of interest are presented below. For previous studies of the signature of the inverse seesaw mechanism in other scenarios, see Refs. [11] . We present, first, the charged currents with W±, which are composed of the following terms: (34) L_{nℓW} = −(g/√2) Σ_{a=1}^{3} Σ_{i=1}^{3} ℓ̄_{aL} γ^μ [U_{PMNS} − (1/2)(M_D)†[M_R(M_R)†]^{−1} M_D U_{PMNS}]_{ai} n^0_{iL} W^−_μ − (g/√2) Σ_{a=1}^{3} Σ_{k=1}^{6} ℓ̄_{aL} γ^μ [(M_D)†[(M_R)†]^{−1} U_R]_{ak} n^1_{kL} W^−_μ + H.c. The neutral-current interactions with the Z′ have two contributions. The first one is (35) L_{nnZ′} = −(G g / 2 cos θ_W) [ Σ_{i,j=1}^{3} n̄^0_{iL} (V_{νν}† V_{νν})_{ij} γ^μ n^0_{jL} + Σ_{i=1}^{3} Σ_{m=1}^{6} n̄^0_{iL} (V_{νν}† V_{νN})_{im} γ^μ n^1_{mL} + Σ_{k=1}^{6} Σ_{j=1}^{3} n̄^1_{kL} (V_{νN}† V_{νν})_{kj} γ^μ n^0_{jL} + Σ_{k=1}^{6} Σ_{m=1}^{6} n̄^1_{kL} (V_{νN}† V_{νN})_{km} γ^μ n^1_{mL} ] Z′_μ, with G = (1 − 2 sin²θ_W)/√(3 − 4 sin²θ_W), and the second is (36) L_{nnZ′} = (F g / 2 cos θ_W) [ Σ_{i,j,b=1}^{3} n̄^0_{iL} (V_{Nν})*_{bi} (V_{Nν})_{bj} γ^μ n^0_{jL} + Σ_{i,b=1}^{3} Σ_{m=1}^{6} n̄^0_{iL} (V_{Nν})*_{bi} (V_{NN})_{bm} γ^μ n^1_{mL} + Σ_{k=1}^{6} Σ_{b,j=1}^{3} n̄^1_{kL} γ^μ (V_{NN})*_{bk} (V_{Nν})_{bj} n^0_{jL} + Σ_{k,m=1}^{6} Σ_{b=1}^{3} n̄^1_{kL} (V_{NN})*_{bk} (V_{NN})_{bm} γ^μ n^1_{mL} ] Z′_μ, with F = 2 cos²θ_W/√(3 − 4 sin²θ_W). This is the set of interactions that matter for us here. In the first line of Eq. (34) we have the mixing matrix (1 − (1/2)(M_D)†[M_R(M_R)†]^{−1} M_D) U_{PMNS} = V_{νν}. Due to the smallness of the second term, see the values in Eq. (33) , we take V_{νν} ≃ U_{PMNS}. In the second line of Eq. (34) there appears the mixing matrix
((M_D)†[(M_R)†]^{−1} U_R) = V_{νN}. Our illustrative example yields (37) V_{νN} ≃ ( −1.4×10⁻³ , 2.8×10⁻³ , 0 , 0 , −2.8×10⁻³ , 1.4×10⁻³ ; 0 , 3.7×10⁻³ , −5.5×10⁻³ , 5.5×10⁻³ , −3.7×10⁻³ , 0 ; 2.2×10⁻³ , 0 , −6.3×10⁻³ , 6.3×10⁻³ , 0 , −2.2×10⁻³ ). Such a pattern of mixing is due to the simple choice of the parameters G′ and μ. In the next section we are going to probe the signature of this mechanism by producing the lightest new neutrinos, n^1_{3L} and n^1_{4L}, at the LHC. Observe that, as (V_{νN})_{13} and (V_{νN})_{14} are null, these neutrinos do not form charged currents with the electrons. For this reason the analysis done in the next section is based on the production of these neutrinos and their final products in the form of muons. Concerning neutral currents, we also explore the direct production of the Z′ and its subsequent decay into a pair of n^1_{3L} or n^1_{4L}. The interactions that generate these processes are the last terms of Eqs. (35) and (36) . Our illustrative example yields the following values for the mixing matrix V_{NN}: (38) V_{NN} ≃ ( 0 , 0 , 7.0×10⁻¹ , 7.0×10⁻¹ , 0 , 0 ; −7.0×10⁻¹ , 0 , 0 , 0 , 0 , −7.0×10⁻¹ ; 0 , 7.0×10⁻¹ , 0 , 0 , 7.0×10⁻¹ , 0 ; −3.91×10⁻⁵ , −5.68×10⁻⁵ , −7.0×10⁻¹ , 7.0×10⁻¹ , 5.68×10⁻⁵ , 3.91×10⁻⁵ ; 7.0×10⁻¹ , 1.10×10⁻⁵ , 3.91×10⁻⁵ , −3.91×10⁻⁵ , −1.10×10⁻⁵ , −7.0×10⁻¹ ; −1.10×10⁻⁵ , −7.0×10⁻¹ , −5.68×10⁻⁵ , 5.68×10⁻⁵ , 7.0×10⁻¹ , 1.10×10⁻⁵ ), which, along with Eq. (37) , allows us to perform the analysis for this production. Before going into the analysis, with the charged and neutral currents at hand, the first thing to do is to check whether our illustrative example obeys the constraint from the rare lepton-flavor-violating (LFV) process μ → e γ. Such a process is allowed by the second coupling in Eq. (34) .
The branching ratio for the process μ → e γ mediated by the six heavy neutrinos is given by [12] : BR(μ → e γ) ≈ (α_W³ sin²θ_W m_μ⁵)/(256 π² m_W⁴ Γ_μ) × | Σ_{k=1}^{6} (V_{νN})_{ek} (V_{νN})_{μk} I(m²_{n^1_{kL}}/m²_W) |², where (39) I(x) = (−2x³ + 5x² − x)/(4(1 − x)³) − (3x³ ln x)/(2(1 − x)⁴). In the above branching-ratio expression we use α_W = g²/4π = 3.3 × 10⁻², sin²(θ_W) = 0.231, m_μ = 105 MeV, m_W = 80.385 GeV and Γ_μ = 3 × 10⁻¹⁶ MeV. The present values of these parameters are found in [13] . Our illustrative example provides BR(μ → e γ) ≈ 1.4 × 10⁻¹³. This is very close to the current bound, BR(μ → e γ) < 4.2 × 10⁻¹³ [14] . So this case may be confirmed or excluded in the next run of the MEG experiment. 3 Analysis of the production mechanism and main channels There are two major production channels for the n^1_{iL} neutrinos. The first one is via the vector gauge boson W±, which can be produced through the s-channel in a proton-proton collision. In the particular case of our illustrative example, the W± can further decay into a μ lepton and a neutrino n^1_{iL}. In turn, the n^1_{iL} can decay into μ and W±. This channel can then have as final products three leptons plus missing energy (μ± μ∓ ℓ± ν_ℓ) or two muons and two jets (μ± μ∓ j j). The second production mechanism for the n^1_{iL} neutrinos is through the direct production of the Z′ and its subsequent decay into a pair of n^1_{iL}. The final state for this type of channel will appear as a pair of highly boosted muons, a pair of leptons and missing transverse energy (μ± μ∓ ℓ∓ ν_ℓ ℓ± ν_ℓ), or a pair of highly boosted muons and four light jets. We investigate both channels and explore the phenomenological features of this model and how the signatures of the n^1_{iL} can appear in the listed final states at the LHC.¹ ¹ We remark that the sterile neutrinos can also be produced via vector boson fusion.
However, such a process is subleading because of the phase-space suppression; in view of this, that channel is not considered. To perform the analysis, we generate a UFO [15] file using FeynRules [16] . This UFO file is later used by the MadGraph5 [17] package to produce the hard-scattering processes we want to investigate. All the hard-scattering events are further passed to Pythia version 8.1 [18] and Delphes [23] in order to hadronize and include detector effects, making the Monte-Carlo pseudo-events as close as possible to the data produced by the LHC at 14 TeV. 3.1 The p p → μ± μ∓ e± ν_e channel As mentioned earlier, this is one of the main mechanisms for the production of the n^1_{iL} and is displayed in Fig. 1 . To investigate this channel we generate 450000 events at 14 TeV centre-of-mass energy. To stay safely away from infrared and collinear divergences, we apply the basic cuts of Eq. (40) at the generation level: (40) p_T^ℓ > 20 GeV, p_T^{j,b} > 30 GeV, |η_{j,b}| < 3.0, |η_ℓ| < 2.7, ΔR_{jj,bb,ℓℓ} > 0.01. We focus our investigation on the production of the lightest new neutrinos. Thus, we analyze the channel p p → W± → μ± n^1_{3L}(n^1_{4L}), with the decay chain n^1_{3L}(n^1_{4L}) → μ± W∓, W± → e± ν_e. This choice allows us to reconstruct, with good accuracy, the full decay chain generated by the n^1_{3L}(n^1_{4L}). Another reason for this choice stems from the fact that in our model the couplings between W±, n^1_{3L}(n^1_{4L}) and μ are relatively large, allowing a sizable cross section for the production at the LHC.
As a consequence of this choice we have as main irreducible backgrounds the channels:

(41) Z W^± → μ^+ μ^− e^± ν_e; Z t t̄ → μ^+ μ^− e^+ ν_e b (e^− ν̄_e b̄); Z t b̄ → μ^+ μ^− e^+ ν_e b b̄.

For the event selection we impose the following criteria:
(42) one electron (positron) with p_T^e > 25 GeV, and E̸_T > 15 GeV;
(43) a pair of μ with p_T^μ > 25 GeV each;
(44) a pair of μ with p_T^μ > 25 GeV each and a reconstructed W^± object.

After we impose the selection criteria described in Eqs. (40)–(44), we are able to analyze the kinematic (dimension-full) and angular (dimension-less) observables of the final state particles produced by this channel. This analysis has the purpose of increasing the significance of detecting the n_{3L}^1 (n_{4L}^1) at the next LHC run. In Table 2 we present the observables of our analysis, and in Figs. 2–4 we display the respective distributions. One naive approach is a simple cut-and-count analysis using the n_{3L}^1 (n_{4L}^1) reconstructed from the final state muon and the reconstructed W boson. However, the number of background events remaining after the selection, even when we impose a cut window around the mass predicted for the n_{3L}^1 (n_{4L}^1), completely buries our signal. To overcome this problem we make use of a deep learning algorithm trained to distinguish the signal from the main irreducible backgrounds using the observables described before. We present the details of the architecture and training methodology in Section 3.3.

3.2 Z′ channel

Another production mechanism for the n_{iL}^1 is through the production and subsequent decay of the Z′, see Fig. 5. To investigate this channel we apply the same workflow, generating 450000 events at 14 TeV with the same basic generation cuts described in Eq. (40).
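The cut-and-count baseline in Section 3.1 relies on reconstructing the heavy-neutrino candidate mass from the final-state muon and the reconstructed W. A minimal invariant-mass sketch over `(pT, eta, phi, mass)` records (illustrative kinematics only, not the paper's events):

```python
import math

def four_vector(pt, eta, phi, mass):
    """Build (E, px, py, pz) from collider coordinates, masses in GeV."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px * px + py * py + pz * pz + mass * mass)
    return e, px, py, pz

def invariant_mass(*particles):
    """Invariant mass of a set of four-vectors: m^2 = E^2 - |p|^2."""
    e = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

# Candidate mass of a muon + reconstructed-W pair (toy numbers):
m_candidate = invariant_mass(four_vector(60.0, 0.2, 0.1, 0.105),
                             four_vector(80.0, -0.3, 2.5, 80.385))
```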
We then pass the hard scattering events through Pythia and Delphes to finally select the events based on the following criteria:
(45) a pair of electron and positron with p_T^e > 25 GeV, and E̸_T > 15 GeV;
(46) a pair of μ with p_T^μ > 25 GeV each;
(47) a pair of μ with p_T^μ > 25 GeV each and two reconstructed W^±.

The W^± bosons are reconstructed from the final state electrons and the E̸_T. In our simulations we set the value for the Z′ mass to 4 TeV and the n_{3L}^1 (n_{4L}^1) mass to 96.31 GeV, which are consistent with the current estimated limits [21,22] for the expected Z′ mass. In Fig. 6 we display the cross section for a given range of Z′ masses against the ones for n_{iL}^1 at √s = 14, 20 and 100 TeV. The region explored in this paper offers a sizeable cross section for the production of a Z′ and its subsequent decay into n_{iL}^1. For the main irreducible background we have:
• Z t t̄ → μ^+ μ^− e^+ ν_e b e^− ν̄_e b̄
• Z W^+ W^− → μ^+ μ^− e^+ ν_e e^− ν̄_e

This channel contains six leptons as final state particles, 4 visible (μ^+, μ^−, e^+, e^−) and 2 invisible (ν_e, ν̄_e), which enlarges the number of observables we can use to distinguish the signal from the background. We choose the dimension-full and dimension-less variables listed in Table 4, and in Figs. 7–9 we display the respective distributions.

3.3 Deep learning analysis: methods and results

After we select the events and gather the kinematic and angular information, we can feed this information into a Neural Network (NN) designed to properly separate signal from background. Due to the simplicity of our event data set, which stores the information as tables where each row corresponds to an event entry and the columns are the observables, we decided to work with a fully connected NN. However, we still have to choose some important parameters for the NN: number of layers, number of neurons, kernel initializers, etc.
The choice of the correct parameters directly affects the efficiency of our NN, which can be translated into the significance of discovery, or not, of the particles predicted by the model. This selection is often referred to as hyperparameter optimization. A first approach is to use "brute force" to tune the hyperparameters via a grid search, but the number of combinations and the computational time to test each one of them increase exponentially. More efficient approaches beyond grid search are random sampling or Gaussian-process algorithms that learn the best hyperparameters. Another way to tackle this problem is to use genetic/evolutionary algorithms, as in Ref. [19]. To test the different architectures, as well as the modifications and fine tuning of the parameters, we set up an evolutionary algorithm that tests the different combinations of parameters by creating a set of populations. In our case we restrict the population to 25 models and keep the top 5 models with the highest accuracy; after 5 rounds (generations) we obtain the top 3 architectures sorted by accuracy, and we select the best one to continue our analysis. This full process takes around 2 hours on a NVIDIA GTX 1070 GPU. We use TensorFlow 2.0 [24] to build, train and evaluate our models. The best architecture and hyperparameters found by our genetic algorithm consist of a 5-layer NN, each layer with 512 neurons and a Rectified Linear Unit (ReLU) activation function, with the exception of the top layer, which consists of 4 neurons, one for each channel analysed (μ n_{3L}^1 (μ n_{4L}^1), Z t b̄, W Z, t t̄ Z), with a sigmoid activation function. We also found that initial random weights for the layers sampled from a normal distribution and L2 regularization with a value of 10^−7 give the best significance.
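The evolutionary search described above (population of 25, keep the top 5, 5 generations) can be sketched as follows. The fitness function here is a hypothetical stand-in for the validation accuracy of a trained network, peaked by construction at the architecture the search ultimately selected; the search space values are illustrative.

```python
import math
import random

random.seed(7)

SEARCH_SPACE = {"layers": [2, 3, 4, 5, 6],
                "neurons": [64, 128, 256, 512],
                "l2": [1e-3, 1e-5, 1e-7]}

def random_model():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(m):
    # Stand-in for validation accuracy: peaks at 5 layers, 512 neurons, L2 = 1e-7.
    return (-abs(m["layers"] - 5)
            - abs(m["neurons"] - 512) / 512
            - abs(math.log10(m["l2"]) + 7))

def mutate(m):
    # Resample one randomly chosen hyperparameter.
    child = dict(m)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def evolve(pop_size=25, keep=5, generations=5):
    population = [random_model() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elites = population[:keep]                       # keep the top 5 models
        offspring = [mutate(random.choice(elites))
                     for _ in range(pop_size - keep)]
        population = elites + offspring                  # next generation
    return max(population, key=fitness)

best = evolve()
```

Because the elites are carried over unchanged, the best fitness in the population never decreases between generations.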
We also found a similar architecture for the n_{3L}^1 n̄_{3L}^1 (n_{4L}^1 n̄_{4L}^1) channel, with the only difference that the top layer has 3 neurons, one for each channel (n_{3L}^1 n̄_{3L}^1 (n_{4L}^1 n̄_{4L}^1), t t̄ Z, W W Z). Our data sets consist of tables where each row corresponds to an event entry and the columns are the kinematic and angular distributions described in the previous sections. Due to selection criteria 1 and 3 imposed on the signal and background events, we ended up with an imbalanced number of events for each channel; this can lead the DNN model to over-fit towards the majority class, which makes the model unable to make correct predictions for the classes we are interested in. To overcome this problem we balance the original data set using the Synthetic Minority Over-sampling Technique (SMOTE) [25]; we first divide the original data set into 80% to generate the balanced data set and 20% to use as our validation set. (See Table 5.) We can evaluate the performance of our NN by looking at the signal efficiency versus the background rejection. The left panel of Fig. 10 shows the signal efficiency and the background rejection for both channels analysed, while the right panel gives the normalized number of entries for a given NN prediction score. A simple figure of merit for the signal–background separation is the area under the ROC curve, the AUC. The closer the AUC is to one, the better we can expect the backgrounds to be cleaned up for a given signal efficiency. We are interested in obtaining not only the acceptance and rejection factors, but mainly the statistical significance of the signal. To do so, we can use the predictions made by our NN to estimate the number of events expected for each of the analysed channels and from these obtain the estimated Asimov significance, which depends on the integrated luminosity and on systematic uncertainties that are often disregarded in machine learning studies.
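The oversampling step can be illustrated with a minimal SMOTE-style interpolation: each synthetic minority sample is placed on the segment between a minority point and one of its nearest minority neighbours. This is a toy re-implementation of the idea only; the study itself uses the SMOTE algorithm of Ref. [25] (available, e.g., in the imbalanced-learn package).

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_like(X_min, n_new, k=5):
    """Generate n_new synthetic minority samples by interpolating a randomly
    chosen sample toward one of its k nearest minority neighbours."""
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()                    # interpolation fraction in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Balance a toy 90/10 background/signal set of 4 observables per event
X_bkg = rng.normal(0.0, 1.0, size=(90, 4))
X_sig = rng.normal(2.0, 1.0, size=(10, 4))
X_sig_balanced = np.vstack([X_sig, smote_like(X_sig, 80)])
```

Since the synthetic points are convex combinations of real minority samples, they stay inside the minority cloud rather than duplicating events.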
The Asimov estimate of significance [20], a well-established approach to evaluate likelihood-based tests of new physics taking into account the systematic uncertainty on the background normalization, can then be used for a more careful estimate of the signal significance at the training and testing phases of construction of the classifier. The Asimov signal significance is given by

(48) Z_A = [ 2( (s + b) ln[ (s + b)(b + σ_b^2) / (b^2 + (s + b) σ_b^2) ] − (b^2/σ_b^2) ln[ 1 + σ_b^2 s / (b (b + σ_b^2)) ] ) ]^{1/2},

where, for a given integrated luminosity, s is the number of signal events, b is the number of background events, and σ_b is the uncertainty associated with the number of background events. In Fig. 11 we plot the estimated Asimov significance as a function of the classification score assigned by the NN. Despite the relatively higher cross-section for the process p p → W → μ n_{iL}^1 and the 99% accuracy achieved by the NN, the overwhelming irreducible background for this channel dominates the uncertainties of the Asimov significance. This imposes a bigger challenge for anyone who intends to probe such a particle using this channel alone. Meanwhile, the process p p → Z′ → n_{iL}^1 n̄_{iL}^1 offers a new window to probe not only the n_{iL}^1 but also the aforementioned Z′ boson. The smaller background cross-sections and the 100% accuracy achieved by the NN allow us to safely probe this channel and estimate a higher significance using the current LHC luminosity. Combining all these factors, if the Z′ is not discovered in this channel, we can exclude this model with a Z′ mass below 4 TeV using the current LHC luminosity. However, from Fig. 6 we still have a wide range of Z′ masses to explore, using the analysis we developed so far as the main guideline to constrain the parameters of the 331RHN. (See Table 6.) We can project the Asimov significance for a range of luminosity values. In Fig.
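Eq. (48) is straightforward to implement, and a toy luminosity scan (hypothetical yields; s, b and a fixed 1% absolute background systematic all scale linearly with luminosity) shows the behaviour discussed next: in the σ_b → 0 limit Z_A reduces to the familiar √(2((s+b) ln(1+s/b) − s)), while under a fixed relative systematic the significance saturates near s/σ_b instead of growing like √L.

```python
import math

def asimov_significance(s, b, sigma_b):
    """Asimov significance Z_A of Eq. (48); s, b, sigma_b are event counts."""
    t1 = (s + b) * math.log((s + b) * (b + sigma_b**2)
                            / (b**2 + (s + b) * sigma_b**2))
    t2 = (b**2 / sigma_b**2) * math.log(1 + sigma_b**2 * s
                                        / (b * (b + sigma_b**2)))
    return math.sqrt(2 * (t1 - t2))

# Toy projection: scale the yields with luminosity, systematic fixed at 1% of b
s0, b0 = 5.0, 2000.0
for scale in (1, 10, 100):
    z = asimov_significance(s0 * scale, b0 * scale, 0.01 * b0 * scale)
    print(scale, round(z, 3))
```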
12 we show the projected significance with 1% systematic error versus the expected luminosity. The bands correspond to the projected systematic uncertainties. Due to the systematic dominance over the W → μ n_{iL}^1 channel, we can only achieve 3σ significance at 3000 fb^−1; yet, the projected significance for the Z′ → n_{iL}^1 n̄_{iL}^1 channel shows a better perspective, with 10.5σ of significance using the RUN-2 luminosity and around 33σ at 3000 fb^−1, showing the sensitivity power not only of the analysis we developed but of the Z′ → n_{iL}^1 n̄_{iL}^1 channel as well.

4 Conclusions

In this work we revisited, in detail, the implementation of the inverse seesaw mechanism into the 3-3-1 model with right-handed neutrinos and then probed its signatures, in the form of heavy neutrinos, at the LHC by means of deep learning techniques. The mass spectrum of these new neutrinos may vary from some hundreds of GeV up to the TeV scale. Our analysis considered the production of such neutrinos by means of the processes p p → W^± → μ^± n_{(3,4)L}^1 → μ^± μ^∓ e^± ν_e and p p → Z′ → n_{(3,4)L}^1 n_{(3,4)L}^1 → μ^+ μ^− e^+ e^− ν_e ν̄_e. We applied deep learning techniques in conjunction with evolutionary algorithms in our analysis and concluded that the second process is much more efficient than the first one. As the main result, the second process allows us to probe not only the signal of the ISS mechanism, but also the model in question, i.e., the 331RHN. According to our analysis, if the Z′ is not discovered in this channel, we can exclude, within 6σ at 95% confidence level, this model with a Z′ mass below 4 TeV using the current LHC luminosity.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

D.
Cogollo is partly supported by the Brazilian National Council for Scientific and Technological Development (CNPq), under grant 436692/2018-0. Y.M. Oviedo-Torres acknowledges the financial support from CAPES under grant 88887.485509/2020-00. C. Pires is partly supported by the Brazilian National Council for Scientific and Technological Development (CNPq), under grant No. 304423/2017-3. F.F. Freitas is supported by the project From Higgs Phenomenology to the Unification of Fundamental Interactions PTDC/FIS-PAR/31000/2017, grant BPD-32 (19661/2019), and P. Vasconcelos was partly supported by the Brazilian National Council for Scientific and Technological Development (CNPq).
"GELLMANN",
"YANAGIDA",
"MOHAPATRA",
"MAGG",
"MOHAPATRA",
"MA",
"FOOT",
"MOHAPATRA",
"MOHAPATRA",
"KERSTEN",
"MA",
"CARCAMOHERNANDEZ",
"BHUPALDEV",
"DAS",
"FREITAS",
"MOHAPATRA",
"NG",
"HUMBERT",
"HAN",
"LEE",
"COGOLLO",
"DIAS",
"DIAS",
"SINGER",
"FOOT",
"MONTERO",
... |
f05e2b840fd04b21bdf7aa1f7c871fd3_The vulnerability of ecosystem structure in the semi-arid area revealed by the functional trait netw_10.1016_j.ecolind.2022.108894.xml | The vulnerability of ecosystem structure in the semi-arid area revealed by the functional trait networks | [
"Gao, Dexin",
"Wang, Shuai",
"Wei, Fangli",
"Wu, Xutong",
"Zhou, Sha",
"Wang, Lixin",
"Li, Zidong",
"Chen, Peng",
"Fu, Bojie"
] | The ecosystems were characterized by complex, nonlinear interactions determined by different plant functional traits. The characteristics of the multiple relationships between ecosystem functional traits affected the vulnerability to drought. A three-level network analysis of network metrics, relationships among inter-components, and essential traits was conducted in dryland ecosystems of China. A new network of functional traits that included leaf, root, and biomass components was constructed under different aridity conditions. Results show that the multiple relationships of functional traits that co-regulated ecosystem biomass differ along an aridity gradient. The highest network modularity and degree centrality were observed in the semi-arid ecosystems, indicating low integration and high sensitivity of semi-arid ecosystems (modularity 269% and 23.7% higher than at the dry sub-humid and arid sites, respectively; degree centrality 142% and 51.1% higher). The leaf quantity strongly affected the connection between functional traits in the semi-arid zone. The semi-arid area was found to have relatively low resistance to environmental change because of the low integration and high sensitivity of the ecosystem structure at that site. An increase of the degree centrality of the root traits and trade-off relationships between roots and leaves indicated greater allocation of resources by vegetation to underground components in the arid ecosystems to increase water absorption. The study reveals the complex relationships between leaf, root, and biomass components, and the essential traits of the ecosystem. It enhances understanding of the vulnerability of semi-arid ecosystems to environmental change. | 1 Introduction Ecosystems, characterized as nonlinear, complex, openly dissipative systems, have been impacted by the effects of accelerating and intensifying global change ( Ruddell and Kumar, 2009a ).
The multiple functional traits co-regulated the ecosystem functions of growth, reproduction, and survival under environmental change ( He et al., 2020 ), including the traits of leaves and roots ( Choat et al., 2018 ). The interaction of different functional traits with each other produces complex relationships ( He et al., 2020 ), which are reflected in the collaborative response of different functional traits to environmental changes ( Reich et al., 2020; Lian et al., 2021; Wang et al., 2018; Mina et al., 2021; Liu et al., 2022 ). The complex relationships between functional traits determine the vulnerability of an ecosystem responding to a changing environment ( Berdugo et al., 2020; Mina et al., 2021; Yuan et al., 2021; Felipe-Lucia et al., 2020 ). Hence, there is an urgent need to elucidate the complex relationships between different functional traits to understand the response of ecosystems to global change. Traditional analyses focused on particular bioindicators of ecosystems ( He et al., 2020; Lawson and Vialet-Chabrand, 2019; Liu et al., 2020; Stocker et al., 2019 ), where pairwise interactions play key roles ( Anderegg and Venturas, 2020; Liu et al., 2019; Proulx et al., 2005 ). The frequent neglect of the complexity of the relationships between multiple factors ( Lian et al., 2021; Vicente-Serrano et al., 2020 ) has led to prediction and simulation errors ( Rigden et al., 2020 ). Network theory communicates information about the whole ecosystem via a holistic, system-wide, and tractable approach ( Pocock et al., 2016; Salata and Grillenzoni, 2021; Wang et al., 2021a; Wang et al., 2021b ). The multiple relationships between functional traits can be represented as links between different nodes of a network. Networks have been used to describe relationships within components of plant leaves ( He et al., 2020 ).
The networks suggested multiple relationships between functional traits of plant leaves, which varied with climate gradients. However, the picture of the complex relationships between the aboveground and belowground components remains unclear, as well as their relationships with ecosystem biomass. The analysis of network metrics and the keystones of networks would help to reveal the multiple relationships between productivity and the aboveground and underground components of an ecosystem under a changing environment ( Felipe-Lucia et al., 2020; Yuan et al., 2021; Ruddell and Kumar, 2009a ). Drylands occupy 41% of the global terrestrial area, contributing 35% of the interannual variation of the carbon cycle ( Prăvălie, 2016; Wang et al., 2021a; Wang et al., 2021b ). Dryland ecosystems face a risk of increasing drought ( Dai, 2013; Huang et al., 2021 ), which will increase the probability of widespread plant mortality under global warming ( Liu et al., 2019 ). The fact that ecosystems rapidly collapse and undergo abrupt changes after the arid threshold is exceeded indicates that drought stress impacts ecosystems in multiple ways ( Berdugo et al., 2020 ). Drought affects multiple ecosystem factors, including biomass and the legacy of biomass and phenology ( Hoover et al., 2021 ). The characteristics of the multiple relationships between functional traits determine the vulnerability of ecosystems to drought ( Grime, 2006; Ruddell and Kumar, 2009b; Wang et al., 2018; T. Wang et al., 2021a; Wang et al., 2021b; Yuan et al., 2021 ). Grasses are among the major types of vegetation in drylands ( Wang et al., 2014 ) and fluctuate more frequently than other types of vegetation ( Chen et al., 2019 ). The functional traits of grassland plants could reflect the characteristics of all the vegetation, including both aboveground and belowground vegetation, and the relationship of that vegetation to ecosystem productivity.
The elucidation of the network of plant functional traits in grassland ecosystems is important to clarify the response of the ecosystem to the increasing stress of drought. China is one of the countries with the most extensive drylands in the world ( Li et al., 2021 ), and its dryland ecosystems have been severely affected by climate change. Here, we conducted a study of Chinese grasslands that covered dry sub-humid, semi-arid, and arid areas. At these sites with different degrees of aridity, we analyzed the network among different functional traits of leaves, roots, and aboveground and underground biomass to understand (1) the network characteristics along the drought gradient; (2) the interrelationships between the ecosystem components of leaves, roots, and biomass; and (3) the dominant functional traits of the ecosystem network.

2 Material and methods

2.1 Study areas

The drylands of China cover about 660 × 10^4 km^2 and account for 52% of China's land area ( Li et al., 2021 ). From east to west in this area, the precipitation decreases from 421 to 158 mm y^−1, and the average temperature increases from −1.9 to 8.9 °C ( Wang et al., 2019 ). Grass is one of the major types of vegetation in China's drylands ( Wang et al., 2019 ), including mainly steppe meadows, typical grasslands, and desert grasslands. The dominant species are Leymus chinensis , Stipa baicalensis , Vicia sepium , and Agropyron cristatum in the dry sub-humid area; Leymus chinensis , Stipa krylovii , Cleistogenes squarrosa , and Artemisia lavandulifolia in the semi-arid regions; and Sarcozygium xanthoxylon , Oxytropis aciphylla , Nitraria tangutorum , and Reaumuria soongarica in the arid regions. Most of the grasslands of the Chinese drylands are located in Inner Mongolia ( Hu et al., 2015 ). The study was conducted in the grasslands of Inner Mongolia and included dry sub-humid, semi-arid, and arid areas.
We selected three sites in the different aridity areas along a transect spanning 1700 km from east to west (dry sub-humid area: Erguna; semi-arid area: Baiyin; arid area: Xilingol). Three additional sites were selected to verify the characteristics of the functional trait network of ecosystems associated with different degrees of aridity (dry sub-humid area: Hailar; semi-arid area: Xilinhot; arid area: Sisu) ( Fig. 1 ). The conditions at the sites could be considered natural because the sites were protected from disturbance by human activities. At each site, we established square plots, each with an area of 45 × 45 m^2. Four subplots were selected at 10-m intervals within the large plot. Five square subplots with areas of 1 × 1 m^2 were then selected within each subplot. For each subplot, more than four dominant species were collected to assess the functional traits of the study sites (n > 20).

2.2 Measurements of functional traits

The collective functional traits of the different dominant species were used to represent the functional traits of the ecosystem (n > 20). Fourteen functional traits of biomass, leaves, and roots were measured ( Table 1 ). The functional traits included components of the aboveground and belowground portions of the plants. The biomass component was also measured because it represents the important function of ecosystem productivity ( Table 1 ). Measurements of the functional traits of fresh plants were conducted as soon as each whole plant was collected. The functional traits of fresh plants included the weight of the fresh biomass as well as the functional traits of fresh leaves and roots. Fresh plants were then put in a drying oven at 60 °C for 72 h, and the weight of the dry biomass was measured. The weights of biomass and leaves were measured using a portable laser leaf area meter. The number of leaves was quantified by visual observation.
The length, radius, width, and area of each leaf were measured with a portable scanner. The length and radius of roots were measured with a digital caliper ( Cadotte et al., 2015; Pérez-Harguindeguy et al., 2013 ).

2.3 Construction and analyses of the ecosystem functional trait network

A network of leaf, root, and biomass components was constructed to indicate their complex relationships. The network consisted of nodes and lines representing functional traits and their pairwise relationships, respectively. The Pearson correlation coefficient was used to quantify the relationships among functional traits in the way that it is typically used in ecological studies ( Gao et al., 2021a, 2021b ). Functional traits across subplots were measured to indicate the variation of plant characteristics because there were few changes in environmental conditions. The significance of the Pearson correlation coefficient suggested how strongly pairwise functional traits were related at a site and the extent to which the response of functional traits to environmental changes was collaborative. A threshold type I error rate ( p ) was used to quantify the trait–trait relationships. The pairwise relationship among traits was determined to be significant when p was < 0.05 ( Yuan et al., 2019 ). An adjacency matrix of trait–trait relationships was then established ( Pocock et al., 2016 ), and the network was constructed by visualizing the adjacency matrix ( Fig. 2 b). Relationships with significant correlations ( p < 0.05) and highly significant correlations ( p < 0.01) were differentiated in this study to indicate the degree of correlation between functional traits. The relationships of different functional traits with each other help to regulate ecosystem function ( He et al., 2020 ).
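The adjacency-matrix construction described above can be sketched as follows. For simplicity this sketch thresholds |r| at the two-tailed critical value for n = 20 samples at p < 0.05 (approximately 0.444) rather than computing exact p-values (which `scipy.stats.pearsonr` would provide); the trait names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def trait_adjacency(X, r_crit=0.444):
    """Signed adjacency matrix: +1/-1 where the Pearson |r| between trait
    columns exceeds the critical value (about 0.444 for n = 20 samples at
    the two-tailed p < 0.05 level), else 0; self-links are removed."""
    r = np.corrcoef(X, rowvar=False)
    adj = np.where(np.abs(r) > r_crit, np.sign(r), 0.0)
    np.fill_diagonal(adj, 0.0)
    return adj

# Toy traits across 20 subplot samples: leaf area and leaf weight strongly
# correlated (a synergy), root diameter drawn independently.
n = 20
leaf_area = rng.normal(10, 2, n)
leaf_weight = 0.5 * leaf_area + rng.normal(0, 0.5, n)
root_diam = rng.normal(1, 0.2, n)
A = trait_adjacency(np.column_stack([leaf_area, leaf_weight, root_diam]))
```

Entries of +1 correspond to the synergistic links and −1 to the trade-off links discussed in the text.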
A positive correlation between two functional traits indicates a synergistic relationship between them, whereas a negative correlation implies a trade-off relationship ( Felipe-Lucia et al., 2020 ). Positive and negative correlations in the adjacency matrix were accordingly assumed to indicate synergistic and trade-off relationships, respectively ( Felipe-Lucia et al., 2020 ). Three levels of the ecosystem functional network were analyzed to reveal characteristics of the complex relationships of dryland ecosystems, including the network metrics of modularity and degree centrality, as well as the distribution of relationships between components ( Fig. 2 c, Table 2 ). The relationship between pairwise traits represents the connection between them and indicates a collaborative variation in response to environmental change. Modularity indicates the partitioning of ecosystem functional traits into groups with intense interactions among functional traits within a module but loose interactions with the traits of other modules, i.e., little integration of the ecosystem. The degree centrality of the network refers to the effect of a few nodes in controlling the connections of the network. High degree centrality of an ecosystem functional trait network indicates that the connections of the network are controlled by a few functional traits, whereas low centrality indicates homogeneity of functional traits ( Ruddell and Kumar, 2009b ). The modularity and degree centrality of the network were calculated following Felipe-Lucia et al. (2020) as metrics of the integration and connection of the ecosystem, respectively ( Ruddell and Kumar, 2009b ). The distribution of strong and weak relationships indicated the intensity of inter-component and intra-component interactions in dryland ecosystems.
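The two network metrics can be computed directly from the (unsigned) adjacency matrix. Below is a minimal implementation of normalized degree centrality and of Newman's modularity Q for a given two-module partition (my own sketch of the standard formulas, not the Ucinet routines used in the study); the toy graph of two triangles joined by one edge is a clearly modular example.

```python
import numpy as np

def degree_centrality(A):
    """Normalized degree centrality: degree / (n - 1) for an unweighted graph."""
    A = (np.abs(A) > 0).astype(float)
    return A.sum(axis=1) / (A.shape[0] - 1)

def modularity(A, labels):
    """Newman modularity Q = (1/2m) sum_ij (A_ij - k_i k_j / 2m) delta(c_i, c_j)."""
    A = (np.abs(A) > 0).astype(float)
    k = A.sum(axis=1)              # node degrees
    two_m = k.sum()                # twice the number of edges
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles (nodes 0-2 and 3-5) joined by the single edge (2, 3)
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
labels = np.array([0, 0, 0, 1, 1, 1])
```

For this graph the partition into the two triangles gives Q = 5/14 ≈ 0.36, and the bridging nodes (2 and 3) have the highest degree centrality, 3/5.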
The positive and negative relationships were distinguished to indicate the synergy and trade-off relationships among functional traits ( Felipe-Lucia et al., 2020 ). The degree centrality of functional traits could indicate the effect of the control of nodes on the connections of the ecosystem functional trait network ( Proulx et al., 2005 ). The essential factors and processes were identified by analysis of the degree centrality of functional traits ( Ruddell and Kumar, 2009a ).

2.4 Data analyses

Network analyses were conducted at one of each pair of dry sub-humid, semi-arid, and arid sites ( Fig. 1 ). The other dry sub-humid, semi-arid, and arid sites were used to verify the functional networks ( Figs. S1, S2 ). The variation of functional traits along the aridity gradient was also analyzed to validate the functional network. Three levels of the network were analyzed to verify the characteristics of the ecosystem functional trait networks. More than twenty values of each functional trait (n > 20) were used to analyze the correlation among functional traits. Pearson correlation analysis was performed with Python ( https://www.python.org/ ). Ucinet software was used to visualize and analyze the ecosystem network ( http://www.analytictech.com/archive/ucinet.htm ).

3 Results

3.1 High network modularity and degree centrality in the semi-arid ecosystem

The networks indicate that ecosystem components were correlated with each other at all three sites (Figs. 3 and S1). The aboveground biomass was jointly regulated by leaf area, leaf weight, root diameter, root length, and underground biomass at the Erguna site. The aboveground biomass was also regulated by the width, radius, and weight of leaves through indirect relationships. The networks of ecosystem functional traits at the Baiyin and Xilingol sites also indicated that the ecosystem biomass was jointly determined by different functional traits of leaves and roots through both direct and indirect relationships.
Network modularity was a metric indicating the integration and connection of ecosystem functional traits. The modularity of the functional trait network at the Baiyin site was 269% and 24% higher than the analogous modularities at the Erguna and Xilingol sites, respectively. This result suggests that the integration and connection of the ecosystem at Baiyin were lower than those at the Erguna and Xilingol sites. The network of functional traits was separated into two modules at the semi-arid site (Figs. 3 and S2). The degree centrality of the functional trait network was a metric indicating the effect of the control of a few traits on the complex relationships between functional traits. The degree centrality of the functional trait network at the Baiyin site was 142% and 51% higher than the analogous centrality at the Erguna and Xilingol sites, respectively ( Fig. 3 ). Thus the relationships among different functional traits were controlled by a few traits at Baiyin compared with the Erguna and Xilingol sites.

3.2 The relationships among ecosystem intra-components and inter-components

The strong relationships were concentrated on inter-components, whereas weak relationships were concentrated on intra-components ( Fig. 4 j). The range of changes in the number of relationships between intra-components was 0–20% from the Erguna to the Xilingol site, whereas the range of changes in the number of relationships was 36–77% between inter-components ( Fig. 4 j). The relationships of inter-components at the Baiyin site were weak; the number of relationships was 188% and 41% less than the number of relationships at the Erguna and Xilingol sites, respectively ( Figs. 3, 4 j). Interaction between the leaf and biomass components occurred only in the relationship between leaf quantity and the dry weight of aboveground biomass at the Baiyin site ( Fig. 5 a, b, c). Synergistic relationships between roots and leaves were apparent at the Erguna site.
However, there were trade-off relationships between root diameter and leaf weight and leaf length at the Baiyin site, and trade-offs of root diameter with leaf area, weight, radius, length, and width at the Xilingol site ( Fig. 5 a, d, g). The trade-off relationship between roots and leaves thus increased along the aridity gradient.

3.3 The essential nodes of ecosystem functional trait networks

The degree centrality of the functional traits was a metric of the effect of traits in controlling the relationships among functional traits, and it revealed different characteristics at the three different aridity sites ( Fig. 5 ). The highest degree centrality of functional traits was 152% and 120% higher at the Baiyin site than at the Erguna and Xilingol sites, respectively ( Fig. 5 ). The trait with the highest degree centrality of the network was leaf weight at the Erguna site, leaf quantity and dry weight of the aboveground plant at the Baiyin site, and root diameter at the Xilingol site ( Fig. 5 ). The degree centrality of leaf width and weight decreased from the Erguna site to the Baiyin site, whereas the degree centrality of leaf quantity increased. The degree centrality of aboveground biomass weight decreased from the Baiyin to the Xilingol site, but the fresh weight of underground biomass and root diameter increased from the Baiyin to the Xilingol site ( Fig. 5 ).

4 Discussion

Traditional analyses of functional traits have focused on particular bioindicators or components of an ecosystem ( Aguirre-Gutiérrez et al., 2020; Anderegg and Venturas, 2020; Liu et al., 2019 ). Various attributes have been analyzed to evaluate the response of the ecosystem to water availability stress ( Forzieri et al., 2020; Jiao et al., 2021; Liu et al., 2020; Stocker et al., 2019 ). This study suggests that there are complex relationships between different functional traits.
The number of significant relationships between ecosystem components changes across environments ( He et al., 2020 ), and the nature of pairwise interactions varies with environmental changes ( Gao et al., 2021a; Pan et al., 2021; Wei et al., 2020 ). The differences in the structure of the ecosystem functional trait networks at the three sites with different aridity levels indicate that the relationships between the functional traits of leaves, roots, and biomass varied from the dry sub-humid to the arid site.

4.1 Low integration and connection of ecosystems at the semi-arid site

Network modularity is a metric indicating the integration of ecosystem functional traits ( Felipe-Lucia et al., 2020 ), and it affects ecosystem stability ( Yuan et al., 2021 ). Modularity at the Baiyin site was the highest among the three different aridity sites ( Fig. 3 b and S2). The two modules of the ecosystem functional trait network at the Baiyin site were linked only by leaf quantity and aboveground biomass. Increasing water availability stress in the semi-arid area could cause the decoupling of multiple ecosystem processes ( Delgado-Baquerizo et al., 2013 ). This result has also been observed in ecohydrological process networks, in which the interactions of ecohydrology are substantially decoupled during drought ( Ruddell and Kumar, 2009a ). The strong relationships were most frequent among intra-components and were more stable for intra-components than inter-components along the aridity gradient ( Fig. 4 j). This result indicates that the relationships of intra-components were less sensitive than those of inter-components along the aridity gradient. The more frequent dissociation of inter-component relationships caused the relationships within ecosystems to be partitioned into two modules in the semi-arid ecosystems (Figs. 3 and S1).
The weakening of inter-component relationships decreased the connectivity and integration of the ecosystem at the semi-arid site compared to the dry sub-humid and arid sites ( Fig. 4 j). This result has also been identified in a microbial network, in which microbial interactions decreased with reduced water availability ( Wang et al., 2018 ). However, an ecosystem can regulate the relationships between different components to adapt to increases in water availability stress ( Yuan et al., 2021 ). The modularity and centrality of the functional network decreased from the Baiyin to the Xilingol site ( Fig. 3 ). The correlation between the aboveground and belowground components of the ecosystems increased from the Baiyin to the Xilingol site ( Figs. 3 and 4 ). The arid ecosystem allocated more resources to the underground components, and the ratio of belowground to aboveground components increased from the semi-arid to the arid site ( Fig. 3 c). The high degree centrality of the roots also supports this pattern ( Fig. 5 ), which indicates the effect of roots in controlling the ecosystem functional network. The relationship between roots and leaves transitioned from synergy to trade-off with increasing water availability stress ( Fig. 4 a,d,g). The decrease of photosynthesis under higher water stress caused competition for the allocation of carbon resources between aboveground and underground biomass ( Gao et al., 2021a; Hasibeder et al., 2015 ). The increase of correlation between the aboveground and belowground components resulted in a decrease of the modularity and centrality of the functional network from the semi-arid to the arid site ( Fig. 3 ). The correspondence between modularity and ecosystem drought resilience at the study sites is supported by the higher modularity at the semi-arid site than at the dry sub-humid and arid sites ( Huang et al., 2021 ).
The decrease of correlation among functional traits (high modularity) lowers the collaborative response of different functional traits to environmental change ( Gao et al., 2021b; Lawson and Vialet-Chabrand, 2019 ) and might be the reason for the lower drought resilience of ecosystems at semi-arid versus dry sub-humid and arid sites ( Mina et al., 2021 ). 4.2 Strong control of essential factors on network connection in the semi-arid ecosystem The degree centrality of the functional trait networks could indicate the effect of traits in controlling the connections of the functional trait networks ( Gao et al., 2011 ). The higher network degree centrality at the Baiyin site versus the Erguna and Xilingol sites ( Fig. 3 b) indicates that the connections among ecosystem functional traits were controlled by a few functional traits in the semi-arid ecosystem. The highest and average values of the centrality of functional traits were also higher at the Baiyin site than at the Erguna and Xilingol sites ( Fig. 5 ). This result is consistent with the higher network modularity at the semi-arid site. The functional trait with the highest centrality was leaf weight at the Erguna site, leaf quantity at the Baiyin site, and root diameter at the Xilingol site ( Fig. 5 ). These results indicate that the relationships between functional traits were controlled mainly by leaf weight at the dry sub-humid site, leaf quantity at the semi-arid site, and root diameter at the arid site. This pattern was caused by the decrease of leaf area from the dry sub-humid site to the semi-arid site ( Figs. 3 and 5 ), which could decrease the transpiration of plants ( Ahmed et al., 2017 ). The fact that biomass was determined mainly by leaf quantity explains the effect of leaf quantity on ecosystem connection at the semi-arid site ( Funk et al., 2021 ). Biomass was correlated mainly with aboveground biomass at the semi-arid site ( Fig. 3 a and 4 a,d,f).
The response of leaf quantity and aboveground biomass to environmental change would cause variation in the carbon cycle. Thus, an ecosystem with higher network degree centrality might be more sensitive to water stress associated with environmental change. This greater sensitivity might be the reason for the major contribution of semi-arid ecosystems to the interannual variability of the global carbon cycle ( Ahlstrom et al., 2015; Poulter et al., 2014 ). Because of increasing water availability stress, there has been competition for carbon resources between aboveground and underground plant biomass ( Hasibeder et al., 2015 ). The arid ecosystem allocated more resources to the underground biomass, and the ratio of belowground to aboveground biomass increased from the semi-arid to the arid site ( Fig. 3 c). The increase of the trade-off between roots and leaves from the dry sub-humid to the arid site was consistent with this competition ( Fig. 4 d,e,f). The controlling effect of roots thus increased from the semi-arid to the arid site and was greater than that of other functional traits in the arid ecosystem. 4.3 The implications of network analysis concerning the multiple factors of ecosystems Ecosystem functional trait networks revealed the complex relationships between different functional traits. Ecosystem functions (such as plant biomass) were affected by both direct and indirect relationships between functional traits ( Fig. 4 a). The responses of ecosystems to environmental changes are regulated by multiple factors that are related to different functional traits ( Reich et al., 2020; Wang et al., 2010 ), and neglect of those relationships would cause prediction errors ( Rigden et al., 2020 ). The characteristics of the relationships between ecosystem functional traits differed among the three aridity sites.
The high modularity and degree centrality of the network at the semi-arid site, as well as the weaker inter-component relationships, suggest that there was less connection and integration of the ecosystem at the semi-arid site. The high degree centrality of functional traits at the semi-arid site is consistent with this assessment. This result suggests that the sensitivity and vulnerability of the ecosystem were greatest at the semi-arid site. The characteristics of the interactions among ecosystem functional traits were indicated by the network metrics, the distribution of the relationships among intra-components and inter-components, and the essential factors of the networks. However, the network of ecosystem functional traits might differ at a hyper-arid site. This study was conducted in grasslands; forest ecosystems are more complex ( Hamrick, 2004; Sanaei et al., 2021; Yi et al., 2021 ), and farmland ecosystems are affected by anthropogenic effects ( Allen et al., 2005; Tong et al., 2009; van der Velde et al., 2010 ). Climate change effects might therefore differ in forest or farmland ecosystems from those we report here for grasslands. The network analysis used in this study may be applied to other vegetation/climate types to better understand the interactions of ecosystem functional traits. 5 Conclusions The interactions of functional traits form an ecosystem functional trait network. The network revealed that the complex relationships between functional traits differed among the three aridity sites. The modularity of the network was highest at the semi-arid site. The smaller number of inter-component relationships at the semi-arid site versus the dry sub-humid and arid sites indicates that inter-component relationships were weak at the semi-arid site.
This result indicates that the connection and integration of the functional network were lower at the semi-arid site versus the dry sub-humid and arid sites and corresponds to the low resistance of ecosystems at the semi-arid site. The degree centrality of the network was higher at the semi-arid site than at the dry sub-humid and arid sites. The fact that the degree centrality of functional traits was also highest at the semi-arid site means that the relationships between functional traits were strongly controlled by a few traits at the semi-arid site. The ecosystem was thus sensitive and vulnerable at the semi-arid site. The degree centrality of roots was highest at the arid site, and the fact that the trade-offs between leaves and roots increased from the dry sub-humid to the arid site indicates that the ecosystem tended to regulate the interaction of functional traits to increase water absorption. This study used functional trait networks to reveal the characteristics of the complex relationships between ecosystem functional traits at sites characterized by different degrees of aridity. The relationships between ecosystem components and the essential traits were revealed by the functional trait networks. The study helped to reveal why semi-arid ecosystems are vulnerable to environmental changes. CRediT authorship contribution statement Dexin Gao: Conceptualization, Methodology, Data curation, Writing – original draft. Shuai Wang: Conceptualization, Methodology. Fangli Wei: Visualization, Investigation. Xutong Wu: Supervision. Sha Zhou: Supervision. Lixin Wang: Supervision. Zidong Li: Software, Validation. Peng Chen: Software, Validation. Bojie Fu: Writing – review & editing. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements This work was supported by the National Natural Science Foundation of China ( 41991230 ), the Fundamental Research Funds for the Central Universities, the Bayannur Ecological Governance and Green Development Academician Expert Workstation (YSZ2018-1), and the Science and Technology Project of Inner Mongolia Autonomous Region (NMKJXM202109). The data supporting the findings of this study were produced by field campaigns, and we thank everyone who contributed to this project. We thank Prof. Nianpeng He of the Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, for suggesting the method of functional trait networks. We also thank the editors and reviewers for their suggestions to improve this manuscript. Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.ecolind.2022.108894 . The following are the Supplementary data to this article: Supplementary data 1 Supplementary data 2 Supplementary data 3 | [
"AGUIRREGUTIERREZ",
"AHLSTROM",
"AHMED",
"ALLEN",
"ANDEREGG",
"BERDUGO",
"CADOTTE",
"CHEN",
"CHOAT",
"DAI",
"DELGADOBAQUERIZO",
"FAN",
"FELIPELUCIA",
"FORZIERI",
"FUNK",
"GAO",
"GAO",
"GRIME",
"HAMRICK",
"HASIBEDER",
"HE",
"HOOVER",
"HU",
"HUANG",
"JIAO",
"LAWSON",
... |
1e2bfeb204e2498cb5739bf65207b668_Stand-alone ALIF versus TLIF in patients with low back pain A propensity-matched cohort study with _10.1016_j.bas.2023.102713.xml | Stand-alone ALIF versus TLIF in patients with low back pain – A propensity-matched cohort study with two-year follow-up | [
"Toma, Ali A.",
"Hallager, Dennis W.",
"Bech, Rune D.",
"Carreon, Leah Y.",
"Andersen, Mikkel Ø.",
"Udby, Peter M."
] | Introduction
Instrumented lumbar fusion by either the anterior or the transforaminal approach has distinct advantages and disadvantages. Few studies have compared Patient-Reported Outcome Measures (PROMs) between stand-alone anterior lumbar interbody fusion (SA-ALIF) and transforaminal lumbar interbody fusion (TLIF).
Research question
This is a register-based dual-center study of patients with severe disc degeneration (DD) and low back pain (LBP) undergoing single-level SA-ALIF or TLIF, comparing PROMs, including disability, quality of life, back and leg pain, and patient satisfaction, two years after SA-ALIF or TLIF.
Material and methods
Data were collected preoperatively and at one and two-year follow-up. The primary outcome was Oswestry Disability Index (ODI). The secondary outcomes were patient satisfaction, walking ability, visual analog scale (VAS) scores for back and leg pain, and quality of life (QoL) measured by the European Quality of Life-5 Dimensions (EQ-5D) index score. To reduce baseline differences between groups, propensity-score matching was employed in a 1:1 fashion.
Results
92 patients were matched, 46 SA-ALIF and 46 TLIF. They were comparable preoperatively, with no significant difference in demographic data or PROMs (P > 0.10). Both groups obtained statistically significant improvement in the ODI, QoL and VAS scores (P < 0.01), but no significant between-group difference was observed (P = 0.14). There were no statistically significant differences in EQ-5D index scores (P = 0.25), VAS scores for leg pain (P = 0.88), or back pain (P = 0.37) at two-year follow-up.
Conclusion
Significant improvements in ODI, VAS scores for back and leg pain, and EQ-5D index score were registered at two-year follow-up after both SA-ALIF and TLIF, with no significant differences in improvement between the groups.
SA-ALIF is also possibly advantageous in terms of muscle and nerve root damage, less perioperative blood loss, reduced surgical time, and shorter length of stay ( Strube et al., 2012 ; Szadkowski et al., 2020 ). On the other hand, the risks of an anterior approach include serious visceral and vascular injuries ( Szadkowski et al., 2020 ; Mobbs et al., 2013 ), damage to the sympathetic plexus, and retrograde ejaculation ( Christensen et al., 1997 ; Wood et al., 2010 ; Kain et al., 1993 ; Phan et al., 2017 ). Another potential complication is post-operative subsidence and subsequent loss of disc height. However, subsidence rates following SA-ALIF, are generally very low and turn out not to impact clinical outcomes or fusion significantly ( Rao et al., 2017 ). Only a few studies offer insight in terms of differences in long-term patient-reported outcome measures (PROMs), which we consider critical for patient counseling ( Bassani et al., 2020 ; Kuang et al., 2017 ; Adogwa et al., 2016 ). Specifically, for the L5/S1 level, there is only one single surgeon experience utilizing a mini-open ALIF approach ( Bassani et al., 2020 ). The aim of our study is to compare two-year follow-up with PROMs in a cohort of patients who underwent single-level lumbar fusion at L5-S1 with either SA-ALIF or TLIF. 2 Materials and methods Data from the Danish national surgical spine database DaneSpine ( Danespine ) were extracted for this dual-center register-based study. Pre- and postoperative questionnaires, surgical data, and baseline demographics were retrieved. Adult patients (age>18) who had undergone one-level SA-ALIF or TLIF at L5-S1 between January 1st , 2010 and December 31st , 2018 at the Spine Center of Southern Denmark or Zealand University Hospital were included. Exclusion criteria were incomplete ODI-scores pre-operatively or at two-year follow-up. 
Baseline data included patient age, sex, body mass index (BMI), smoking status, use of analgesics, duration of symptoms, and previous spine surgery. The primary outcome was the Oswestry Disability Index (ODI), which ranges from 0 (no disability) to 100 (maximal disability) ( Comins et al., 2020 ; Fairbank et al., 2000 ); an improvement of at least 12.8 points was considered the minimal clinically important difference (MCID) ( Copay et al., 2008 ). The secondary outcomes were patient satisfaction, walking ability, visual analog scale (VAS) scores for back and leg pain ( Briggs et al., 1999 ), and the EuroQol-5D (EQ-5D), which ranges from −0.596 to 1, with higher scores indicating better quality of life ( Brooks, 1996 ; Dolan, 1997 ). 2.1 Statistical analysis Data analysis was performed in R version 4.2.1. TLIF cases were matched to SA-ALIF cases using nearest-neighbor propensity-score matching on age, sex, smoking status, body mass index (BMI), baseline ODI, VAS, and EQ-5D scores. Categorical data are presented as frequencies (%) and compared using the Pearson chi-square test. Continuous data are reported as mean ± standard deviation (SD). Pre- and postoperative differences in continuous data are compared using the paired t -test with Welch correction, whereas differences between groups are compared using unpaired tests. The significance level was set at 0.05. 3 Results 317 patients with single-level SA-ALIF (132) or TLIF (185) at L5/S1 were identified. Baseline data and two-year follow-up were available for 143 patients. Finally, 92 patients (46 SA-ALIF and 46 TLIF) were propensity matched. Baseline data were comparable; only previous spine surgery was more frequent in the TLIF group ( Table 1 ). Both groups showed statistically significant improvement in ODI at two years: −15 (95%CI -20; −10) for the SA-ALIF and −10 (95%CI -16; −5) for the TLIF group, respectively.
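The 1:1 nearest-neighbor matching step can be sketched as follows. This is an illustrative greedy implementation on made-up propensity scores, not the R routine used in the study; in practice the scores would come from a logistic model of treatment group on the listed covariates:

```python
def match_nearest(treated, controls):
    """Greedy 1:1 nearest-neighbor matching on precomputed propensity scores.

    treated / controls: dicts of participant id -> propensity score.
    Returns a list of (treated_id, control_id) pairs; each control is
    used at most once (matching without replacement).
    """
    pairs = []
    available = dict(controls)
    # Process treated cases in score order so close scores match first.
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        # Pick the still-available control with the closest score.
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        pairs.append((t_id, c_id))
        del available[c_id]
    return pairs

# Hypothetical scores for three treated and four control participants.
treated = {"A1": 0.31, "A2": 0.62, "A3": 0.48}
controls = {"B1": 0.30, "B2": 0.50, "B3": 0.90, "B4": 0.60}
pairs = match_nearest(treated, controls)
```

Matching without replacement, as here, is one common design choice; it keeps each matched pair independent at the cost of discarding unmatched controls (here B3).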
We found lower ODI scores for SA-ALIF, but no significant difference between the groups after two years −6 (95%CI -15; 2, p = 0.14) ( Fig. 1 ). In addition, we observed no significant group differences in EQ-5D scores or VAS scores for leg or back pain. PROMs are presented in Table 2 . Although previous spine surgery status was associated with a worse preoperative ODI score of 52 vs. 41, p < 0.01, we found no significant association with ODI change or two-year follow-up scores (p = 0.13 and p = 0.23, respectively). Patient satisfaction at two-year follow-up was also not significantly different, with 26 satisfied patients (58%) in the SA-ALIF group and 22 patients (49%) in the TLIF group. 8 patients (18%) and 12 patients (27%), respectively, were dissatisfied and 11 patients (24%) in each group were undecided. The rate of satisfied patients corresponded to the proportion of patients exceeding MCID for ODI, with 59% in the SA-ALIF and 48% in the TLIF group. For back pain, 72% of SA-ALIF and 67% of TLIF patients reported at least some improvement at two years, whereas 53% and 64% reported at least some improvement for leg pain, respectively. Functional outcomes other than PROMs are presented in Table 3 . 4 Discussion From our propensity-matched analyses, we found statistically significant improvement in ODI, EQ-5D, and VAS back and leg pain at two years. Although they favored SA-ALIF, the differences were relatively small and statistically non-significant. Two-thirds of the patients in each group reported at least some improvement in back pain and about half reached an improvement in ODI exceeding MCID of 12.8 points. These results indicate no superiority of either technique concerning functional outcomes after two years. This is in line with the few previous studies reporting ODI outcomes for TLIF vs. SA-ALIF ( Bassani et al., 2020 ; Kuang et al., 2017 ; Adogwa et al., 2016 ; Rathbone et al., 2023 ). 
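The proportion of patients exceeding the MCID, reported above, is a simple count over paired pre/post scores. The ODI values below are synthetic, for illustration only:

```python
# Minimal clinically important difference for the ODI used in the study.
MCID_ODI = 12.8

def pct_exceeding_mcid(pre, post):
    """Percentage of patients whose ODI improvement (pre - post, since
    lower ODI is better) meets or exceeds the MCID."""
    improved = sum(1 for b, a in zip(pre, post) if (b - a) >= MCID_ODI)
    return 100 * improved / len(pre)

# Synthetic preoperative and two-year ODI scores for five patients.
pre = [52, 41, 60, 45, 38]
post = [30, 35, 44, 40, 20]
rate = pct_exceeding_mcid(pre, post)  # percent reaching MCID
```

Here three of five synthetic patients improve by at least 12.8 points, giving 60%, in the same range as the 59% (SA-ALIF) and 48% (TLIF) reported in the study.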
However, two of these report considerably larger ODI improvements in both groups: from 50 to 25 for SA-ALIF and 52 to 24 for TLIF ( Kuang et al., 2017 ), and from 65 to 15 and 78 to 21, respectively ( Bassani et al., 2020 ). Their study designs differ considerably from the present one, since both are single-surgeon retrospective reports and their cohorts are more selective, excluding patients with BMI >28 or 30, previous surgery ( Kuang et al., 2017 ), and diabetes ( Bassani et al., 2020 ). Their results are thus less generalizable. The multi-institutional register study by Adogwa et al. (2016) is more comparable, although previous spine surgery was an exclusion criterion and surgery was not exclusively at the L5/S1 level. They found a decrease in ODI scores comparable to our results, from 47 to 32 in both groups. Despite these comparable scores, they report that 70% and 79% of patients, respectively, improved to a level that met the patients' expectations at one year. Our study found 58% and 49%, respectively, highlighting the importance of managing patients' expectations and showing that differences in the wording of follow-up questions can impact the results. The strength of the current study is the registry-based propensity-matched cohort that allows us to present substantial PROM data with two-year follow-up from patients undergoing fusion specifically at the L5/S1 level. However, as a register-based study, there is an inherent risk of selection bias regarding which patients are registered and who are willing or able to respond to questionnaires. Here, complete data were available for 45% of eligible patients. However, this was not related to surgical procedure, as completion was 42% in the SA-ALIF vs. 48% in the TLIF group, p = 0.30. For statistically non-significant results, type-2 errors must be considered.
In our case, we compared groups of 46 patients, which was insufficient to show statistical significance of a mean difference in ODI at two-year follow-up of 6 points. However, since 6 points is less than half the specified MCID for the ODI, we consider the difference not clinically important. Furthermore, observational studies have an increased risk of residual confounding compared to randomized trials. Propensity score matching, utilized in this study, is an attempt to reduce confounding by mimicking randomization, but it is not a perfect tool. A key disadvantage is that it is only possible to match on the potential confounders that are measured and accounted for. We matched on important potential confounders previously shown to be associated with outcomes after spine surgery, i.e. age, sex, smoking status, BMI, as well as baseline ODI, VAS, and EQ-5D scores ( Fairbank et al., 2000 ; Briggs et al., 1999 ; Brooks, 1996 ). Despite this matching, 18% more patients in the TLIF group had undergone previous spine surgery compared to the ALIF group, but we found no indication of confounding from previous surgery status with regard to ODI change or follow-up scores (p > 0.10). It is important to note the clinical symptoms before surgery. LBP with radiculopathy and MRI-verified spinal stenosis were undoubtedly significant factors influencing the decision to proceed with surgery. Our results indicate that patients who are considered candidates for either technique by the treating surgeons could have similar long-term functional outcomes independent of the chosen approach. However, since we were unable to account for the reasoning behind each surgeon's choice of approach for each specific patient, this could have led to an unrecognized difference in prognosis between the groups, called confounding by indication. Thus, we encourage confirmation from randomized controlled trials.
5 Conclusion In this dual-center propensity score-matched registry-based study on prospectively collected data, we found significant improvement in ODI, EQ5D and VAS for back and leg pain at two-year follow-up for SA-ALIF and TLIF at L5/S1 with no significant differences between the groups. It is, however, important to inform patients of possible suboptimal outcomes that may be associated with each of the two types of surgery. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | [
"ADOGWA",
"BASSANI",
"BRIGGS",
"BROOKS",
"CHRISTENSEN",
"COMINS",
"COPAY",
"DANESPINE",
"DOLAN",
"FAIRBANK",
"GOYAL",
"HACKENBERG",
"HOY",
"HSIEH",
"KAIN",
"KUANG",
"LIGHTSEY",
"MOBBS",
"MOBBS",
"PHAN",
"RAO",
"RATHBONE",
"REISENER",
"SPIKER",
"STRUBE",
"SZADKOWSKI"... |
1a23080d1f804048b2ed18f0baa52996_Inflammation subtypes in psychosis and their relationships with genetic risk for psychiatric and car_10.1016_j.bbih.2022.100459.xml | Inflammation subtypes in psychosis and their relationships with genetic risk for psychiatric and cardiometabolic disorders | [
"Zhang, Lusi",
"Lizano, Paulo",
"Guo, Bin",
"Xu, Yanxun",
"Rubin, Leah H.",
"Hill, S. Kristian",
"Alliey-Rodriguez, Ney",
"Lee, Adam M.",
"Wu, Baolin",
"Keedy, Sarah K.",
"Tamminga, Carol A.",
"Pearlson, Godfrey D.",
"Clementz, Brett A.",
"Keshavan, Matcheri S.",
"Gershon, Elliot S.",
Cardiometabolic disorders have known inflammatory implications, and peripheral measures of inflammation and cardiometabolic disorders are common in persons with psychotic disorders. Inflammatory signatures are also related to neurobiological and behavioral changes in psychosis. Relationships between systemic inflammation and cardiometabolic genetic risk in persons with psychosis have not been examined. Thirteen peripheral inflammatory markers and genome-wide genotyping were assessed in 122 participants (n = 86 psychosis, n = 36 healthy controls) of European ancestry. Cluster analyses of inflammatory markers classified higher and lower inflammation subgroups. Single-trait genetic risk scores (GRS) were constructed for each participant using previously reported GWAS summary statistics for the following traits: schizophrenia, bipolar disorder, major depressive disorder, coronary artery disease, type-2 diabetes, low-density lipoprotein, high-density lipoprotein, triglycerides, and waist-to-hip ratio. Genetic correlations across traits were quantified. Principal component (PC) analysis of the cardiometabolic GRSs generated six PC loadings used in regression models to examine associations with inflammation markers. Functional module discovery explored biological mechanisms of the inflammation association of cardiometabolic GRS genes. A subgroup comprising 38% of persons with psychotic disorders was characterized by higher inflammation status. These higher-inflammation individuals had lower BACS scores (p = 0.038) compared to those with lower inflammation. The first PC of the cardiometabolic GRS matrix was related to higher inflammation status in persons with psychotic disorders (OR = 2.037, p = 0.001). Two of eight modules within the functional interaction network of cardiometabolic GRS genes were enriched for immune processes.
Cardiometabolic genetic risk may predispose some individuals with psychosis to elevated inflammation, which adversely impacts cognition associated with illness. | 1 Introduction Psychotic disorders represent a spectrum of severe mental illnesses with clinical and etiologic heterogeneity ( Garver, 1997 ). Immune and inflammatory dysregulation has been implicated in patients with psychotic disorders ( Pathmanandavel et al., 2013 ) and linked to symptoms, brain structural alterations, and cognitive impairment ( Bishop et al., 2022 ). Alterations of CRP, multiple proinflammatory cytokines, and vascular markers (e.g., IL1, IL1RA, IL2R, IL4, IL6, IL8, IL10, IL12, TNFα, TGFβ, IFNγ, and VEGFA) have all been identified in case-control studies ( Goldsmith et al., 2016 ; Lizano et al., 2016 , 2021 ; Miller et al., 2011 ). Some (e.g., IL1RA, sIL2R, IL6, VEGF, and CRP) also appear to decrease after antipsychotic treatment ( Bishop et al., 2022 ). Emerging studies have explored inflammation subgrouping approaches based on the aggregation of multiple peripheral markers and multivariate patterns of inflammation dysregulation ( Fillman et al., 2016 ; Hoang et al., 2022 ; Lizano et al., 2021 ). Genetic studies represent a promising approach for enhancing our mechanistic understanding of the potential etiologies of these immune and inflammatory alterations in individuals with psychosis ( McGrath et al., 2013 ; Bishop et al., 2022 ). Large-scale genome-wide association studies (GWAS) of disease risk have revealed an enrichment for immune system genes amongst loci associated with schizophrenia (SCZ) risk ( Schizophrenia Working Group of the Psychiatric Genomics Consortium, 2014 ). Subsequent studies have advanced our understanding of the immunogenetic architectures of SCZ ( Lin et al., 2016 ) and the possible shared genetic etiology with autoimmune diseases ( Pouget et al., 2019 ).
Multiple gene candidates within and outside the major histocompatibility complex (MHC) regions have significant associations with both genetic liability to psychosis and immune and inflammatory processes ( Pouget et al., 2016 ; Sekar et al., 2016 ). These findings have collectively led to the hypothesis that genetic liability for psychotic disorders is related to immune dysregulation. However, the links between genetic risk for psychosis and other psychiatric conditions and measures of peripheral inflammation have not been extensively explored. Only two prior studies examined correlations between genetic risk scores (GRSs) of mental illnesses and peripheral inflammatory markers ( Maj et al., 2020 ; Morgan et al., 2017 ). These findings provided some biological insights into the relationships of genetic liability to SCZ, bipolar disorder (BD), and Alzheimer's disease with peripheral alterations of individual inflammatory markers, including CRP, clusterin, C1 inhibitor, and ghrelin. Cardiometabolic diseases, including coronary heart disease, obesity, diabetes, and dyslipidemia, are highly prevalent in persons with psychotic disorders and have been associated with poor cognitive and functional outcomes and reduced life expectancy in this population ( De Hert et al., 2009 ; Saha et al., 2007 ; Perry et al., 2019 ; Hagi et al., 2021 ). Excess cardiometabolic risks are commonly attributed to side effects of antipsychotic drugs used to treat these illnesses, unhealthy lifestyle, poor access to or engagement with healthcare, or other socioeconomic factors ( Correll et al., 2014; Smith et al., 2020 ). However, these risks have also been reported in antipsychotic-naïve patients with first-episode psychosis ( Correll et al., 2014 ; Perry et al., 2016 ; Garcia-Rizo et al., 2017 ) and their first-degree relatives ( Fernandez-Egea et al., 2008 ), suggesting a genetic etiology independent of treatment effects.
Extensive evidence has established a role of immune and inflammation alterations in the pathogenesis of cardiometabolic diseases, involving abnormal lipid and glucose metabolism and increased adiposity ( DeMarco et al., 2010 ; Donath et al., 2019 ). GWASs of risk for cardiovascular and metabolic conditions have revealed multiple disease risk loci linked to inflammatory processes ( Kraja et al., 2014 ; Mauersberger et al., 2021 ). Recent studies leveraging large-scale genetic findings have explored the shared genetic etiology and pleiotropy between SCZ and cardiometabolic conditions. Liu et al. (2020) identified 21 pleiotropic genes shared between SCZ and cardiometabolic diseases. So et al. (2019) investigated the genetic associations of SCZ and BD with 28 cardiometabolic traits and reported relationships between elevated triglycerides (TG) and SCZ risk. Polygenic associations with SCZ also indicated abnormal adipokine profile and glucose metabolism, visceral adiposity, and increased waist-to-hip ratio. Among numerous genetic variants and biological pathways found to be shared between SCZ and cardiometabolic traits, some were related to immune function and inflammation. Overall, prior studies collectively suggest that genetic risk factors for psychiatric illnesses and cardiometabolic disorders may be related to elevated inflammation in psychotic disorders. Genetic studies, to date, however, have not directly tested this hypothesis, nor have they examined the relationship between genetic risk for psychiatric or cardiometabolic diseases and inflammatory dysregulation. Furthermore, it remains to be determined whether this is a general association amongst all patients or limited to a subgroup. Thus, we performed, to our knowledge, the first study exploring multivariate signatures of peripheral inflammation in persons with psychotic illnesses (Psychosis) and healthy controls (HC) and their relation to summarizing genetic risk for cardiometabolic and psychiatric illness. 
We hypothesized that elevated inflammation would be associated with higher psychiatric and cardiometabolic GRSs, worse psychosis symptoms and lower cognitive performance. 2 Methods and materials 2.1 Study participants This study included 122 participants (n = 86 Psychosis, n = 36 HC) enrolled through the Chicago site of the Bipolar-Schizophrenia Network on Intermediate Phenotypes (B–SNIP) consortium ( Tamminga et al., 2013 ). These participants were a subgroup of self-identified white/European ancestry from the multiracial cohort previously characterized for peripheral inflammation measures and their relationships to neurobiological phenotypes and clinical and medication variables ( Lizano et al., 2021 ). The rationale for examining persons of white/European ancestry is to ensure the appropriateness of genetic risk scoring, which was based on the results of large-scale GWAS primarily conducted in European subjects, not representative of other ancestry groups ( Lewis and Vassos, 2020 ). All participants provided written informed consent and blood samples (see Supplemental Methods for inclusion criteria). Persons with psychotic disorders had consensus diagnoses of SCZ, schizoaffective disorder (SAD), or BD with psychotic features based on the Structured Clinical Interview for DSM-IV. Further details on inclusion/exclusion criteria and participant assessments are available at Tamminga et al. (2013) . Positive and Negative Syndrome Scale (PANSS), Young Mania Rating Scale (YMRS), and Montgomery Åsberg Depression Rating Scale (MADRS) were administered to assess symptom severity. Cognitive performance was assessed using the Brief Assessment of Cognition in Schizophrenia (BACS) ( Keefe et al., 2004 ). Details on medication history collection are available in Supplemental Methods. 
2.2 Inflammation subtyping based on peripheral inflammatory markers Serum inflammatory marker assays and analyses were performed as previously reported ( Lizano et al., 2021 ) (see Supplemental Methods and Table S1 for details). Briefly, serum concentrations of 13 inflammatory and microvascular markers (selected based on meta-analytic evidence of their implication in psychosis) were measured using customized V-Plex sandwich immunoassays and the Sector 6000 Microplate ELISA System from Meso Scale Diagnostics (MSD, Rockville, MD) [CRP, Flt1, IFNγ, IL1β, IL6, IL8, IL10, IL12/IL23p40, TNFα, TNFβ, VEGF, VEGFD] and a solid-phase sandwich ELISA (Becton, Dickinson and Company, BD Biosciences, San Jose, CA) [C4a], and passed quality control. An unsupervised exploratory factor analysis was performed using the inflammatory markers to uncover their underlying factor structure and to provide factor loadings for each participant. Specifically, a multivariate linear regression of the markers was first fitted on covariates (hemolysis score, storage days, sample set, sex, age, and ancestry) to adjust for these potential confounders; principal component analysis (PCA) was then performed on the residuals from the regression model. This resulted in a five-factor model representing ∼70% cumulative variance of the inflammatory markers. Hierarchical clustering was performed using these inflammation factors and identified an optimal clustering solution with the first inflammation factor, representing seven elevated markers (CRP, IFNγ, IL1β, IL8, IL10, TNFα, and VEGF). The clustering solution revealed higher and lower inflammation subtypes based on a Silhouette coefficient of 0.59, a maximized gap statistic, and a minimized connectivity index (see Lizano et al. (2021) for details). 
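The residualize-then-PCA-then-cluster pipeline described above can be sketched as follows. This is a toy illustration on simulated data only: the array contents, random seed, Ward linkage, and two-cluster cut are assumptions for demonstration, not the authors' code (their analyses were run in R).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
n = 122                               # participants (toy)
markers = rng.normal(size=(n, 13))    # 13 inflammatory/microvascular markers (simulated)
covars = rng.normal(size=(n, 6))      # hemolysis, storage days, sample set, sex, age, ancestry (simulated)

# 1) Regress each marker on the covariates and keep the residuals
resid = markers - LinearRegression().fit(covars, markers).predict(covars)

# 2) PCA on the residuals; retain factors covering ~70% of cumulative variance
pca = PCA().fit(resid)
k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.70)) + 1
factors = pca.transform(resid)[:, :k]

# 3) Hierarchical clustering on inflammation factor 1; two-cluster solution
Z = linkage(factors[:, [0]], method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")      # 1 = one subtype, 2 = the other
sil = silhouette_score(factors[:, [0]], labels)      # cluster-quality metric
```

With real marker data, the cluster whose centroid sits higher on factor 1 would be labeled the "higher inflammation" subtype; gap statistic and connectivity checks would be added on top of the silhouette.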
The stability and consistency of additional clustering performance metrics were confirmed and were nearly identical in this subsample of the previously reported group (e.g., confirmed two-cluster solution, Silhouette values of 0.58 versus 0.59, maximized gap statistic values of 0.32 versus 0.34). The dichotomous higher and lower inflammation status and the continuous inflammation loading score were defined as the primary and secondary inflammatory outcomes for the subsequent association analyses described herein. 2.3 Genotyping, single-trait genetic risk scoring, and genetic correlation analysis Genotyping was performed with the Illumina Infinium PsychChip array (Illumina Inc., San Diego, CA, USA) on blood-based DNA, followed by quality control (QC) with PLINK 1.9 ( Purcell et al., 2007 ). Details on QC and imputation procedures are summarized in Supplemental Methods. Post-imputation QC removed poorly imputed SNPs (information score <0.5), SNPs with missingness >0.1, and SNPs with MAF <0.05, resulting in 4,322,238 high-quality common SNPs. A multidimensional scaling (MDS) analysis was performed among a list of relatively independent SNPs after clumping based on the 1000 Genomes population data ( Clarke et al., 2017 ). The first five MDS principal components (PCs) were applied as population substructure covariates for subsequent analyses. Three psychiatric traits, including SCZ (n = 306,011) ( Ripke et al., 2020 ), BD (n = 413,466) ( Mullins et al., 2021 ), and major depressive disorder (MDD, n = 807,553) ( Howard et al., 2019 ), and six cardiometabolic traits, including coronary artery disease (CAD, n = 184,305) ( Nikpay et al., 2015 ), type-2 diabetes (T2D, n = 898,130) ( Mahajan et al., 2018 ), low-density lipoprotein (LDL, n = 196,475), high-density lipoprotein (HDL, n = 196,475), TG (n = 196,475) ( Willer et al., 2013 ), and waist-to-hip ratio adjusted for BMI (WHR, n = 142,762) ( Shungin et al., 2015 ), were selected for genetic risk scoring. 
The corresponding GWAS summary statistics files were downloaded from the consortium data repositories as training sets (see Supplemental Table S2 for study information). The genetic risk score (GRS) of each trait for each participant was defined as the sum of the effect allele dosage across independent GWAS-significant SNPs (p < 5e−8) weighted by the effect size, as a quantification of the genetic risk conferred for cardiometabolic or psychiatric illnesses. The most significant SNP in each linkage disequilibrium (LD) block (r 2 ≥ 0.1 within a 500 kb window) was retained by the clumping procedure using PRSice-2 software ( Choi et al., 2020 ). Exploratory post-hoc analyses examined GRSs calculated under other p-value thresholds (i.e., P T = 1e−7 to P T = 0.5). Genetic correlations among GWAS summary statistics for cardiometabolic and psychiatric traits were estimated with linkage disequilibrium score regression (LDSC) ( Bulik-Sullivan et al., 2015 ) (see Supplemental Methods). 2.4 Statistical analyses Demographic and clinical characteristics of participant groups were compared using Fisher's exact test for categorical variables and the two-sample t-test for continuous variables. To examine the association between higher/lower inflammation level (dependent variable) and single-trait GRS among all participants, a total of nine logistic regression models were fitted with each GRS as an independent variable while accounting for population substructure and psychosis vs control status. Multiple testing correction was performed with the false discovery rate (FDR) approach by calculating q-values. Analyses were performed using R Statistical Software v4.0.2. 
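The GRS definition (effect-size-weighted sum of effect-allele dosages over clumped genome-wide-significant SNPs) and the FDR step can be sketched in a few lines. This is a simulated illustration, not the authors' pipeline (which used PRSice-2 and R); the dosage matrix, effect sizes, and p-values below are made up, and the q-value function is a plain Benjamini-Hochberg implementation standing in for whatever FDR routine was used.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_snp = 122, 50
dosage = rng.integers(0, 3, size=(n_subj, n_snp)).astype(float)  # effect-allele dosage 0/1/2 (toy)
beta = rng.normal(scale=0.05, size=n_snp)                        # per-SNP effect sizes from GWAS summary stats (toy)

# GRS: weighted sum of effect-allele dosages across clumped genome-wide-significant SNPs
grs = dosage @ beta
grs_z = (grs - grs.mean()) / grs.std(ddof=1)  # standardized score, as used later for the PCA input

def bh_qvalues(p):
    """Benjamini-Hochberg q-values for a vector of p-values."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    order = np.argsort(p)
    raw = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity by taking the running minimum from the largest p downwards
    q = np.empty(m)
    q[order] = np.minimum.accumulate(raw[::-1])[::-1]
    return np.clip(q, 0, 1)
```

For nine single-trait models, `bh_qvalues` would be applied to the nine logistic-regression p-values to obtain the reported q-values.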
2.5 Multivariate cardiometabolic polygenic scoring and inflammation associations Based on the inflammation associations with single-trait GRSs and the significant genetic correlations across traits identified with LDSC, PCA was performed on the combined matrix of GRS values (standardized z-scores) for the six cardiometabolic traits. This resulted in six PCs. The first PC was defined as the cardiometabolic GRS and was examined for associations with inflammation outcomes. This was achieved by fitting logistic regression models for the dichotomous inflammation level (primary outcome) and linear regression models for the continuous inflammation factor 1 (secondary outcome) while accounting for psychosis vs control status and population substructure. Empirical p values for the primary inflammation outcome were calculated with 10,000-permutation procedures to account for potential overfitting. Sensitivity analyses were performed to determine the influence of DSM diagnoses (SCZ vs SAD vs BD) and of cardiometabolic diagnoses and medication status as dichotomous covariates (Yes/No) on the associations of inflammation with cardiometabolic GRS. Exploratory post-hoc analyses examined inflammation relationships with the other five PCs identified from the cardiometabolic GRS matrix. Separate linear regression models of cardiometabolic GRS in relation to the seven inflammatory markers that significantly loaded on inflammation factor 1 (CRP, IFNγ, IL1β, IL8, IL10, TNFα, and VEGF) were fitted to investigate the impact of each inflammatory marker on the association between cardiometabolic genetics and inflammation phenotypes. To ascertain the clinical implications of cardiometabolic genetics and peripheral inflammation, associations with diagnoses, psychosis symptoms (PANSS total score), cognitive performance (BACS score), depression symptoms (MADRS), and mania symptoms (YMRS) were examined with regression analyses. 
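The multivariate step (first PC of the six-trait GRS matrix as the "cardiometabolic GRS", logistic regression on inflammation subtype, and a permutation-based empirical p-value) can be sketched as below. All data are simulated; the covariate set, the sklearn logistic model (which is L2-penalized by default, unlike the unpenalized GLMs a real analysis would use), and the reduced permutation count are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 122
grs_z = rng.normal(size=(n, 6))            # z-scored GRSs for CAD, T2D, LDL, HDL, TG, WHR (toy)
high_inflam = rng.integers(0, 2, size=n)   # higher(1)/lower(0) inflammation subtype (toy)

# First PC of the six-trait GRS matrix = multivariate "cardiometabolic GRS"
pc1 = PCA(n_components=1).fit_transform(grs_z).ravel()

covars = rng.normal(size=(n, 6))           # psychosis status + 5 ancestry MDS PCs (toy)
X = np.column_stack([pc1, covars])

def assoc_coef(X, y):
    # coefficient on the cardiometabolic GRS (first column)
    return LogisticRegression(max_iter=1000).fit(X, y).coef_[0, 0]

obs = assoc_coef(X, high_inflam)

# Empirical p-value: permute the outcome, refit, count |coef| >= |observed|
n_perm = 200   # the paper used 10,000; reduced here for speed
perm = np.array([assoc_coef(X, rng.permutation(high_inflam)) for _ in range(n_perm)])
p_emp = (1 + np.sum(np.abs(perm) >= np.abs(obs))) / (n_perm + 1)
```

The `(1 + count) / (n_perm + 1)` form keeps the empirical p-value strictly positive, a standard convention for permutation tests.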
2.6 Exploratory functional module detection Functional enrichment analysis and module discovery were performed among cardiometabolic GRS genes (six traits combined) by using the online HumanBase toolkit ( https://hb.flatironinstitute.org/ ) to explore the biological implications of the cardiometabolic genetic risk and inflammation relationship identified in primary analyses (see Supplemental Methods). 3 Results 3.1 Participant characteristics Clinical and demographic characteristics are presented in Table 1 . Age and sex had similar distributions between Psychosis and HC groups. Persons with psychotic disorders were more likely to report a hypertension diagnosis ( p = 0.010) and more concurrent cardiovascular medications ( p = 0.021). There was a trend of more cardiometabolic conditions and medications reported by the Psychosis group than the HC group. There was a trend for a greater proportion of participants categorized as having higher inflammation levels in the Psychosis (n = 33, 38%) as compared to the HC group (n = 8, 22.2%) ( p = 0.097). Similarly, the inflammation factor 1 scores trended higher in the Psychosis than the HC group (p = 0.098). 3.2 Genetic correlations between psychiatric and cardiometabolic traits and single-trait GRS associations with inflammation Significant genetic correlations were identified across six cardiometabolic traits. CAD, T2D, LDL, TG, and WHR had positive correlations with each other while HDL was negatively correlated with the other five traits ( Fig. 1 A; Supplemental Table S3 ). No significant genetic correlations were identified between SCZ and any cardiometabolic trait. In contrast, MDD was positively correlated with CAD, T2D, LDL, TG, WHR and negatively correlated with HDL. A significant positive correlation was observed between BD and WHR. Fig. 1 B presents the logistic regression results of higher inflammation level and single-trait GRS comprised of GWAS-significant SNPs among all participants. 
Across cardiometabolic traits, higher inflammation level was associated with CAD (R 2 = 0.110, p = 0.002), T2D (R 2 = 0.058, p = 0.025), and TG (R 2 = 0.081, p = 0.032). After adjustment for multiple testing, the inflammation associations remained significant with CAD ( q = 0.016) and TG ( q = 0.032). No significant associations were identified between higher inflammation level and GRSs for any of the psychiatric illnesses examined herein (see Supplemental Table S4 for results among all participants and stratified by Psychosis and HC groups). 3.3 Multivariate cardiometabolic GRS and inflammation relationships Principal component analysis using GRSs for CAD, T2D, LDL, TG, HDL, and WHR resulted in six PCs (see Supplemental Tables S5 and S6 for eigenvalues and eigenvectors). The scree plot ( Fig. 2 A) shows the percentage of variance explained by each PC. PC1 accounted for 28.4% of the variance in the six-trait cardiometabolic GRS matrix. The heatmap ( Fig. 2 B) depicts the contribution of each cardiometabolic trait to a given GRS PC. PC1 had significant contributions from the GRSs for CAD, LDL, TG, and HDL (PCA loading >0.5 for each GRS) and lesser contributions from T2D and WHR (PCA loading <0.25). Fig. 2 C summarizes the logistic regression results of higher and lower inflammation status predicted by each GRS PC. Higher inflammation level was significantly associated with the PC1 GRS (OR = 1.760, p = 0.001). Post-hoc analyses examining the other five GRS PCs did not reveal significant associations with inflammation ( Fig. 2 C) or result in better model performance when compared to the model including PC1 (see Supplemental Table S7 ). Thus, PC1 was defined as the multivariate cardiometabolic GRS for subsequent association analyses with inflammation. Fig. 3 illustrates the results of regression analyses of cardiometabolic GRS and inflammation outcomes among all participants and further stratified by Psychosis and HC groups. 
The significant association of higher inflammation level with high cardiometabolic GRS was driven primarily by the Psychosis group (OR = 2.037, 95% CI [1.295, 3.206], empirical p = 0.001). No significant associations between inflammation status and cardiometabolic GRS were identified in HCs (OR = 0.875, 95% CI [0.335, 2.287], empirical p = 0.802) ( Fig. 3 A & C). A significant positive association between inflammation factor 1 and cardiometabolic GRS was also observed in the Psychosis group but not in the HC group or among all participants ( Fig. 3 B & C). Examining the influence of DSM diagnoses as well as cardiometabolic diagnoses and medication use did not alter or confound the inflammation associations with cardiometabolic GRS (see Supplemental Results). See Supplemental Results and Table S9 for individual markers and cardiometabolic GRS at other P T values. 3.4 Functional module discovery of cardiometabolic GRS genes with immune-inflammatory implications Of the 683 SNPs included in the cardiometabolic GRS, 396 gene IDs were mapped using the Ensembl database. These were included in functional module discovery analyses. A total of 250 genes were assigned to one of eight cohesive functional modules based upon a comembership score ≥0.9. The top three gene ontology (GO) pathways enriched within the gene list of the cardiometabolic GRS are listed corresponding to the module assignment in Fig. 4 (see Supplement I for the mapped gene list and full results of functional enrichment). Two of the eight functional modules (M5, M8), containing 40 and 14 genes with 66 and three overrepresented GO terms, respectively, were enriched for immune- or inflammation-associated processes and pathways. 3.5 Cardiometabolic genetic risk and inflammation associations with clinical outcomes Across diagnostic groups (SCZ, SAD, BD, and HC), there were no differences in enrichment for higher inflammation level ( p = 0.277) or cardiometabolic GRS ( p = 0.942). 
Among persons with psychotic disorders, higher inflammation status was associated with lower BACS scores (beta = −0.586, p = 0.038) with a trend toward higher PANSS total scores (beta = 7.315, p = 0.076). Higher inflammation factor 1 was also associated with lower BACS scores (beta = −0.174, p = 0.033) but not with PANSS total scores (beta = 0.770, p = 0.527). Neither MADRS nor YMRS were associated with inflammation measures. There were no significant associations of cardiometabolic GRS with BACS or PANSS total scores (see Supplemental Table S10 for detailed results). 4 Discussion To our knowledge, this is the first study examining relationships between genetic risk scores for psychiatric and cardiometabolic conditions and peripheral inflammation in persons with psychotic disorders. The findings, considered preliminary given the sample size, suggest that elevated peripheral inflammation was associated with higher cardiometabolic GRS in participants of European descent with psychotic disorders, but not with SCZ, BD or MDD GRSs. In the psychosis group, high inflammation status was identified in 38% of participants and was also significantly associated with worse cognitive performance. These findings represent an important advancement in our understanding of possible genetic etiologies for elevated inflammation in psychosis and their relationships to elevated cardiovascular risk in this population. To date, there have not been comprehensive investigations characterizing the univariate and multivariate patterns of elevated genetic risk for multiple cardiometabolic disorders in persons with psychotic disorders. Only a few studies have examined the clustering pattern of established cardiometabolic risk factors (e.g., lipid, adiposity, blood pressure, sedentary lifestyle, etc.) and the joint influence of unfavorable cardiometabolic profile in non-psychiatric populations ( Stoner et al., 2017 ; Tsai et al., 2020 ; Klisic et al., 2021 ). 
No previous work has examined the aggregation of genetic risks for multiple cardiometabolic traits in psychiatric populations. Our multivariate examination of cardiometabolic GRSs represents a novel advancement, characterizing the pattern of genetic liability across six cardiometabolic traits that have significant genetic correlations and pathophysiological convergence. The present findings suggest that accumulating genetic risk for cardiometabolic diseases may play a predisposing role for elevated inflammation in patients with psychotic disorders. Our findings demonstrate that a one standard deviation increase in cardiometabolic GRS among persons with psychotic disorders was associated with a two-fold increase in the odds of being in the higher inflammation subtype. Relationships with elevated inflammation status suggest that higher cardiometabolic genetic risk may play an intrinsic role in immune and inflammatory overactivation in some patients with psychotic disorders. One interesting finding in the post-hoc analyses of individual inflammatory markers is that there was no robust correlation between cardiometabolic PRS and any single inflammatory marker, except for IL8 (R 2 = 0.059; p = 0.029), although there were trend-level findings for CRP, IFNγ, TNFα, and VEGF in the psychosis group. The inflammation subtype resulting from multivariate analyses across intercorrelated inflammatory markers may therefore quantify a pattern of overactivation due to the dysregulation of immune/inflammation processes. In exploratory analyses, the network of functional interactions among cardiovascular risk genes included in the GRS calculation clustered into eight functional modules comprising a total of 585 GO terms. Most of them represented biological pathways of metabolism and trafficking of lipids, carbohydrates, and proteins related to cardiometabolic function, with two modules also enriched with genes related to immune/inflammatory processes. 
The biological processes for module M5 and module M8 were highly related to lipid and carbohydrate homeostasis and activation of the immune response. These findings are consistent with previous reports supporting the involvement of lipids and carbohydrates in the immune system and their complex interplay with the pathogenesis of cardiometabolic risk ( Cobb and Kasper, 2005 ; Bernardi et al., 2018 ). In addition, 31 of the 54 genes in the two immune modules were determined to have “druggable” potential based on the Drug Gene Interaction Database ( Freshour et al., 2021 ). Nine genes therein ( DNMT3A, EHMT2, GALNT2, PLTP, SCARB1, ARID1A, FTO, RAF1, and SIK3 ) were determined to have direct interactions with FDA-approved drugs (see Supplement II for the list of druggable genes and interacting drugs). These drugs include some cardiometabolic agents (e.g., statins, beta-blockers) and immune-modulating biologics (e.g., trastuzumab) that have replicated findings of efficacy in psychoses and other psychiatric illnesses. These preliminary explorations of immune pathways and their interactions with pharmacological agents inform hypothesis generation for further study. Consistent with our previous findings from the larger multiracial cohort ( Lizano et al., 2021 ) and other studies ( Ribeiro-Santos et al., 2014 ; Fillman et al., 2016 ), the present findings demonstrate that higher inflammation status was associated with lower cognitive performance in European-ancestry participants with psychotic disorders. There have been clinical trials examining adjunctive anti-inflammatory medications for psychosis symptoms, but treatment efficacy has been mixed ( Bishop et al., 2022 ). This might be due to the choice of anti-inflammatory drug or the fact that only a subgroup of psychotic disorder patients with elevated inflammation might respond to the treatments. 
Findings linking inflammation and cognition suggest that cognition may be a target to examine in future trials of anti-inflammatory treatments of patients with inflammatory overactivation. We did not observe a significant association between cardiometabolic GRS and cognition despite the association with inflammation. This may be due to our limited sample size, or because additional factors beyond genetic features contribute to peripheral inflammation which is the endpoint directly related to adverse cognitive performance. We did not identify associations of higher inflammation status with severity of psychotic, depressive, or manic symptoms, nor cardiometabolic GRS. This is in contrast with previous findings linking inflammation with symptoms of psychiatric illnesses ( Miller, 2020 ). One possible explanation is that our participants were all clinically stable, albeit with mild-moderate symptoms of psychosis, depression, or mania. The restricted range of symptoms in this investigation along with the smaller sample size may have limited the ability to ascertain these relationships. Contrary to our hypotheses, we did not observe relationships between genetic risk for SCZ, BD, or MDD with peripheral inflammation measures. This also may be related to our observation that inflammation features were altered only in a subset of affected individuals, which would limit power to detect such effects when examined at the level of the full sample. Previous evidence suggests a shared genetic etiology between some autoimmune conditions and psychiatric illnesses based on genome-wide estimates of genetic correlations ( Pouget et al., 2019 ; Tylee et al., 2018 ). Future studies in larger sample sizes should examine GRS at larger P T values, or calculated with other algorithms ( Pain et al., 2021 ), to further explore the polygenic architecture of psychosis and other mental illnesses and the immune/inflammatory implications. 
Limitations are important to consider when interpreting these findings. First, due to the relatively small sample size of HCs, we were underpowered to detect associations with clinical phenotypes in that group (see Supplemental Methods for power analysis and Supplemental Discussion for elaboration). This attenuated the power to detect statistical significance for relationships with smaller effect sizes. Second, while we adjusted for multiple comparisons and potential overfitting, we acknowledge the need for confirmation of these findings in a larger study sample. Third, we only conducted analyses in persons of white/European ancestry to ensure appropriate use of GRS calculations. This, however, limits the generalizability of our findings to other populations. Further study of these relationships in other populations remains important. Fourth, the cross-sectional study design precludes longitudinal investigations of the change of systemic inflammation and the impact of other confounders, including disease progression, treatment duration, medication exposure over time, adherence, and quality of medical care. Fifth, gene expression of inflammation markers was not measured and controlled for, which will be valuable to investigate in future studies. Lastly, lifestyle and environmental factors, such as body mass index, smoking status, physical activity, dietary habits, and maternal or early-life infectious exposure, were not measured in this study. These factors may impact both inflammation and mental health ( Kolb and Mandrup-Poulsen, 2010 ; Johannsen et al., 2014 ; Aas et al., 2017 ). 5 Conclusion In summary, in persons with psychotic disorders, elevated inflammation was associated with lower cognitive performance and a trend toward more psychotic symptoms. Elevated inflammation was associated with greater combined genetic risk for cardiometabolic diseases, but not with genetic risk for SCZ, BD, or MDD, in persons with SCZ spectrum disorders or psychotic BD. 
These findings suggest that cardiometabolic genetic factors may contribute to inflammation overactivation in some individuals with psychosis, which may then adversely impact brain anatomy and function, symptom severity, and cognition. Declaration of competing interest CAT declares an ad hoc consulting relationship with Sunovion, Astellas and Merck and membership on a Merck DSMB; CAT is on the Clinical Advisory Board at Kynexis and Karuna Therapeutics and holds stock in Karuna. MSK has received support from Sunovion and GlaxoSmithKline; MSK is a consultant to Forum Pharmaceuticals. JAS has received support from VeraSci. JRB has served as a consultant to OptumRx. The other authors report no related disclosures. Acknowledgements We are thankful to the patients and their families who participated in this study, and to Gunvant K. Thaker, MD, for his scientific contributions to the B–SNIP consortium. We thank the CARDIoGRAMplusC4D and UK Biobank CardioMetabolic Consortium CHD working group, who used the UK Biobank Resource (application number 9922), for making the GWA meta-analysis data on coronary artery disease available (data were downloaded from www.CARDIOGRAMPLUSC4D.org ). We thank the Psychiatric Genomics Consortium Schizophrenia and Bipolar Disorder working groups for the availability of GWA meta-analyses on schizophrenia, major depressive disorder, and bipolar disorder (data were downloaded from https://www.med.unc.edu/pgc/download-results/ ). We thank the DIAGRAM, GIANT, and Global Lipids Genetics consortia for the availability of GWA meta-analyses on type-2 diabetes ( https://diagram-consortium.org/downloads.html ), waist-to-hip ratio ( http://portals.broadinstitute.org/collaboration/giant/images/e/eb/GIANT_2015_WHRadjBMI_COMBINED_EUR.txt.gz ), and lipids ( http://csg.sph.umich.edu/willer/public/lipids2013/ ). 
This work was supported in part by the National Institute of Mental Health (NIMH) grants MH-077851 (to CAT), MH-077945 (to GDP), MH-078113 (to MSK), MH-077862 (to JAS), MH-103366 (to BAC), MH-103368 (to ESG and SKK), MH-083888 (to JRB), and MH-077852 (to GKT); the Dupont Warren and Livingston Award from Harvard Medical School (to PL); the Commonwealth Research Center (grant SCDMH82101008006 to MSK); and the NIH's National Center for Advancing Translational Sciences (pilot grant UL1TR000114 to JRB). Portions of this work were presented at the 2021 Society of Biological Psychiatry (SOBP) Virtual Annual Meeting and as part of a symposium at the 2021 American College of Neuropsychopharmacology (ACNP) annual meeting. The funding agencies had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the manuscript. Appendix A Supplementary data The following are the Supplementary data to this article: Multimedia component 1 Multimedia component 2 Multimedia component 3 Multimedia component 4 Supplementary data to this article can be found online at https://doi.org/10.1016/j.bbih.2022.100459 . 
"Aas",
"Bernardi",
"Bishop",
"Bulik-Sullivan",
"Choi",
"Clarke",
"Cobb",
"Correll",
"De Hert",
"DeMarco",
"Donath",
"Fernandez-Egea",
"Fillman",
"Freshour",
"Garcia-Rizo",
"Garver",
"Goldsmith",
"Hagi",
"Hoang",
"Howard",
"Johannsen",
"Keefe",
"Klisic",
"Kolb",
"Kraja",
... |
a6b9c395299d40509a96587f6d3813a1_Rearing zombie flies Laboratory culturing of the behaviourally manipulating fungal pathogen Entomoph_10.1016_j.mex.2023.102523.xml | Rearing zombie flies: Laboratory culturing of the behaviourally manipulating fungal pathogen Entomophthora muscae
| [
"Edwards, Sam",
"De Fine Licht, Henrik H."
] | Insect pathogenic fungi (IPF) and insects have ubiquitous interactions in nature. The extent of these interkingdom host-pathogen interactions are both complex and diverse. Some IPF, notably of the order Entomophthorales, manipulate their species-specific host before death. The fungus-induced altered insect behaviours are sequential and can accurately be repeatedly characterised temporally, making them a valuable model for understanding the molecular and chemical underpinnings of behaviour and host-pathogen co-evolutionary biology. Here, we present methods for the isolation and laboratory culturing of the emerging behaviourally manipulating model IPF Entomophthora muscae for experimentation.
•
E. muscae isolation and culturing in vitro.
•
Establishing and maintaining an E. muscae culture in vivo in houseflies (Musca domestica).
•
Controlled E. muscae infections for virulence experiments and quantification of conidia discharge per cadaver. | Specifications table Subject area: Agricultural and Biological Sciences More specific subject area: Entomopathogenic fungi Name of your method: Isolation and culturing of Entomophthora muscae Name and reference of original method: N/A Resource availability: Resources are included in the text. Method details Background Houseflies are one of the most widespread species of insect in the world [ 1 , 2 ]. In part due to their global distribution, this resilient species is easily reared en masse for animal food and feed, and for waste management [3–5] . Additionally, being vectors of over 130 human and animal food-borne diseases, there has been interest in using natural pathogens as control agents [6] . The obligate entomopathogenic fungus Entomophthora muscae [7] , has been explored as a biological control agent due to its high-host specificity [8–10] . The fungus E. muscae has been reported to infect up to a 100% of housefly populations in the wild, being particularly prolific in semi-closed environments of high fly density, such as in byres [11] . Following exposure of houseflies to E. muscae spores (conidia), the spores will germinate and penetrate the cuticle of the fly to gain access to the hemocoel [12] . In houseflies, the within-host life cycle of E. muscae occurs over six to seven days, during which time the fungus proliferates logistically as wall-less cells (protoplasts) considered to help the fungus evade the host's immune system [13] . During this time, E. muscae -infected houseflies exhibit reduced activity, altered flight patterns, and reduced reproductive capabilities [ 14 , 15 ]. Towards the end of the within-host stage of infection, infected flies are forced by the fungus to climb to an elevated position, extend and affix their proboscides to the substrate surface, and raise their wings as the host dies ( Fig. 1 A). 
One of the peculiarities of this insect pathogenic fungus is that the apparent obligate behavioural manipulation of the moribund host prior to death occurs within four hours before sunset [ 16 , 17 ]. This predictable timing of the behavioural manipulation onset makes this an excellent system for unravelling the mechanism behind the behavioural manipulation phenomenon [16] . A couple of hours after host death, the fungus emerges from between the intersegmental membranes of the host abdomen ( Fig. 1 B–D) [ 16 , 17 ]. The active discharge of infectious spores into the environment (sporulation) occurs over ca. 24 h, during which the infectious cadavers start to desiccate and lose infectivity ( Fig. 1 B–D). The conidia are forcibly ejected from conidiophores at high speed to a considerable distance to be horizontally transmitted to other healthy conspecifics [ 18 , 19 ]. Fungal isolates within the species complex of E. muscae have been found to be host-specific, with specific isolates only infecting a single host species naturally [ 16 , 20 , 21 ], although under laboratory conditions it is possible to cross-infect other host species, including Drosophila melanogaster [22] (for greater detail about the biology of E. muscae , please refer to references [ 16 , 20 and 23 ]). The genetic underpinnings of many of these unique traits are beginning to be unravelled thanks to genome sequencing efforts enabling ‘omics studies of the molecular basis for interactions between Entomophthora muscae and its dipteran hosts [24] . While E. muscae is commonly found in many areas [ 16 , 23 ], the difficulty with which it can be isolated and the usually slow in vitro growth of fungal cultures have hampered widespread research progress [25] . Here we present protocols for how to isolate and maintain E. muscae , both in vivo and in vitro . 
We also provide specific protocols on how to perform infections for experimental procedures, which provide up to 100% infection and mortality in our study system. E. muscae isolation and culturing in vitro This protocol is designed to achieve in vitro isolation of the fungus from dead sporulating cadavers (from the laboratory or collected in the field) for applications such as genomic DNA or RNA extraction and culturing of the fungal pathogen in vitro ( Fig. 1 ) [25] . To obtain a liquid culture, E. muscae can be cultured in GLEN or Grace's Insect media ( Tables 1 , 2 ). Growth of the liquid cultures is slow and usually takes 2–6 weeks. Materials • Dead fly cadavers sporulating with E. muscae (ideally 6–18 h old) ( Fig. 1 B–D). • Petri dishes, sterile. • Liquid culture medium (e.g. GLEN, Grace's Insect Medium; Tables 1 , 2 ). • Sterile 10 or 25 mL pipette tips and pipette. • Parafilm®. • 50 mL cell culture flask (e.g. Greiner Bio-One CELLSTAR®, Germany). 1. Place a sporulating cadaver ( Fig. 1 B–D) (removal of the wings will decrease obstruction of conidial distribution, but is not essential) in the lid of a 'downside-up' petri dish [25] , keeping the dish bottom (hereafter called the 'upper part' because it is upside down) untouched and sterile ( Fig. 1 E). Ensure the sporulating cadaver is positioned so that the actively discharged conidia can reach the upper part of the petri dish; this often means placing the fly cadaver with the dorsal side downwards. 2. Leave for a minimum of 30 min to allow the cadaver's conidia to eject and stick to the surface of the upper part. The exact duration will vary with the stage of conidiophore maturation and the amount of conidia being discharged; the best option is to check the conidial quantity before proceeding to step 3. Anything between 30 min and 8 h has worked in our experience.
However, we have also experienced that the longer the cadaver is left to sporulate, the higher the chance of the culture becoming contaminated (possibly from contaminants on the fly itself). Conidia should be visible on the underside of the upper part of the petri dish before proceeding to step 3 ( Fig. 1 F). 3. Remove the upper part with conidia and place it on a new sterile lid. Turn so the petri dish is oriented normally with the lid on top. Add enough liquid medium to cover the entire surface of the petri dish bottom in a layer ca. half the height of the petri dish (the amount of liquid depends on the size of petri dish used) ( Fig. 1 G). 4. Seal the petri dish with Parafilm® and leave at room temperature (ca. 21 °C) or at 18 °C, depending on the habitat where the E. muscae isolate naturally occurs, away from light until growth is visible. Check once every week for proliferation from the conidia of E. muscae cells growing in the medium using an inverted microscope (see examples of E. muscae cell morphology in vitro in [ 27 , 29 ]). It may take 2–6 weeks before growth is visible. 5. When growth is visible and plentiful, transfer the growing culture to a cell culture flask and supplement with 5 mL of fresh liquid medium ( Fig. 1 H). 6. Repeat step 5 every 4–6 weeks, adding 10 mL of medium to 1 mL of liquid culture in new cell culture flasks. See the example in Fig. 7A&B of Elya and De Fine Licht [16] for assessing the health of the growing culture. Different isolates of E. muscae may grow differently in vitro , and more than one type of fungal cell morphology [ 27 , 29 ] may be present at the same time in a growing culture. E. muscae culture maintenance in vivo in houseflies (Musca domestica) This protocol is designed for in vivo maintenance of E. muscae in live laboratory-maintained houseflies. The number of cadavers used to infect a number of healthy conspecifics will create temporal variation for future cadaver collection, but this will still usually fall within 5 to 8 days after initial exposure.
The maintenance of this system strongly relies on the maintenance of a housefly colony, as these flies will be the future cadavers used for experiments or for continuation of the system in vivo ( Fig. 2 ). Infections are thus easily accomplished by using fresh sporulating cadavers (0–18 h following death of the host), which can be refrigerated for a few days at 5 °C and subsequently used for infections at a later date, although this may lower virulence. Host death and sporulation can also be delayed by leaving infected flies (three to five days post infection) at 5 °C for a few days; this delays fungal progression and artificially extends the within-host life cycle of the fungus. Materials • Dead sporulating cadavers (use the same fungal isolate) – one to six cadavers are sufficient. • 30 mL medicine cup containing 2–5 mL of 1.5% water agar. • Entomology forceps. • Humid chamber – made up of a plastic box containing water-soaked paper towels. • 18–23 °C incubator or room on a light:dark cycle between 16:8 and 12:12. • Clear tape. • Elastic bands. • Netted mesh (20 × 20 cm) with a hole (1 × 1 cm) in the centre. • 365 mL (8.5 × 8 cm) plastic honey cups with a circular hole the diameter of a 15 mL falcon tube in the side; the hole needs to be made. • 15 mL falcon tube filled with distilled water and plugged with cotton. • Food – 1:1 ratio of skimmed milk powder and caster sugar. • CO2 (carbon dioxide). 1. Houseflies are housed in a plastic honey cup containing ad libitum food and water. Water is available from a falcon tube filled with demineralised water and plugged with cotton, inserted into the hole in the pot side. The cup is covered by the netted mesh and held in place by elastic bands. A medicine cup lid is placed over the hole in the net and maintained in place with clear tape to prevent flies from escaping. To ensure continued access to water, we place containers at a slightly elevated angle (e.g.
resting the falcon tube on a cardboard support) so that the water in the falcon tube covers the cotton plug ( Fig. 2 D). 2. Using cleaned forceps, gently grab a cadaver by the head and thorax, and bury the head and thorax into the agar in a medicine cup, keeping the abdomen exposed ( Fig. 2 A, B). Poking a hole in the agar beforehand with the forceps will reduce the risk of decapitating the cadaver. 3. After placing one to six (or more) cadavers in this manner, place the medicine cup upside down and over the hole cut in the netted lid covering the housefly container ( Fig. 2 C). The live flies can optionally be anaesthetised using CO2 (carbon dioxide) to simplify this. 4. Fix the medicine cup in place using clear tape, using the medicine cup lid as a label for identification. 5. Place the container in a humid chamber for 24 h to allow for optimal sporulation conditions. 6. After 24 h, remove the containers and keep under fixed temperature and light:dark conditions. Establishing an in vivo E. muscae culture in houseflies by injection This protocol is designed to transfer a liquid culture of E. muscae back into a live host for continued in vivo host-to-host maintenance of the E. muscae culture, and thus not for carrying out injection-based infection assays ( Fig. 3 ). This procedure may alter the usual temporal restrictions of the fungal infection, i.e. the flies may not die exactly 6–7 days post injection and may not display the characteristic behavioural manipulation seen in E. muscae infections. Why these changes occur is unknown; however, the stereotypical infection is usually resumed in the next round of infections using the sporulating cadavers resulting from infection by injection. Materials • Live healthy adult houseflies. • CO 2 (carbon dioxide). • 10 µL micro-syringe (e.g. Kloehn CO., INC, Whittier, California, U.S.A.) (see Fig. 3 ). • 1–3 µL of liquid fungal culture per fly. • 1.5 mL Eppendorf tube. • Centrifuge. • 1000 µL range sterile pipette.
• Sterile cut-off 1 mL pipette tip (use a pair of scissors to cut off the tip to widen the entrance hole and sterilise in an autoclave). • Housefly housing container (as mentioned in the in vivo maintenance section above). 1. Use a sterile 1 mL cut-off pipette tip to place 500–800 µL of actively growing E. muscae culture in a 1.5 mL Eppendorf tube. 2. Optional, but recommended : Gently spin down the fungal culture in a centrifuge at low speed (<200 rcf, 5–10 min) so as not to kill the fungal cells. 3. Optional, but recommended : Carefully remove some of the supernatant medium to concentrate the fungal cells. 4. Anaesthetise houseflies using CO 2 (carbon dioxide) and hold a fly firmly in one hand. 5. Using a micro-syringe, gently pierce the side of the thorax of the restrained fly. 6. Inject 1–3 µL of concentrated liquid fungal culture and place the housefly in the usual housefly cage setup ( Fig. 3 ). 7. Allow 3 to 14 days for the infection to kill the flies and for the fungus to sporulate from the abdomen as normal. With this technique, at least 10% of the flies develop infection – usually more, depending on the virulence and state of the fungal culture used for infection. 8. Use these flies to continue the in vivo infection as per the previous protocol. Controlled E. muscae infections for virulence experiments This protocol is used for experiments that need a guaranteed exposure to E. muscae conidia and death from infection 6 or 7 days post exposure to infected cadavers ( Fig. 4 ). The high exposure rate to conidial showers provides near 100% death by day 6 in our system, with deaths prior to day 6 not being caused by the fungus, as evidenced by the absence of behavioural manipulation and of fungal sporulation from those cadavers. Materials • Three dead sporulating cadavers (from the same fungal isolate) – with a 2:1 or 1:2 male:female cadaver sex ratio to account for sporulation differences between cadaver sexes. • Medicine cup containing 2–5 mL of 1.5% water agar. • Entomology forceps.
• CO 2 (carbon dioxide) or cold room at ∼5 °C. • Humid chamber (as mentioned in the in vivo maintenance section above). • 18–23 °C incubator on a light:dark cycle between 16:8 and 12:12 (keep constant during experiments). • Housefly housing container (as mentioned in the in vivo maintenance section above). 1. Using cleaned forceps, gently grab a cadaver by the head and thorax and bury the head into the agar, keeping the abdomen exposed. Making a hole in the agar beforehand will reduce the risk of decapitating the cadaver. For mock infections for an uninfected control treatment, replace the sporulating cadavers with freeze-killed flies (freeze-kill with 5–10 min exposure to −5 or −20 °C) ( Fig. 4 A). 2. Perforate the lid and cup sides to allow for aeration during infection. 3. After placing three cadavers in this manner, add up to 10 anaesthetised (CO 2 or cold exposure, depending on the experiment) flies and place the medicine cup upside down in the humid chamber for 24 h (as little as six hours of exposure has also worked in our laboratory with near 100% infection) ( Fig. 4 B). 4. After 24 h, remove the live flies and place them in a normal housing container. Discard the medicine cup and cadavers. 5. Keep the flies at a constant light:dark cycle and temperature for accurate planning of manipulation behaviours and death. Quantification of E. muscae conidia discharge per cadaver This protocol is used to calculate the exposure dosage of E. muscae from individual sporulating cadavers ( Fig. 5 ). Variation is found between different host species and E. muscae isolates, making this a simple protocol for checking the discharge dosage. The E. muscae conidia are collected in an acid solution to prevent germination of the discharged conidia [25] . These can be counted in a haemocytometer under a microscope or using image analysis [30] . See Figs. 3&4 in [21] and Figs. 7&8 in [7] for examples of microscope images of conidia.
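The haemocytometer counts can be converted into an estimated conidial discharge per cadaver with a small amount of arithmetic. The following is a minimal Python sketch: the 0.2 mm chamber depth and the 1 mL collection volume follow the protocol, while the 1 mm² large-square area is the standard Fuchs-Rosenthal ruling and an assumption here, as are the function names and the example counts.

```python
# Hedged sketch: estimate conidia per mL and per cadaver from
# Fuchs-Rosenthal haemocytometer counts. Chamber depth (0.2 mm) and
# collection volume (1 mL) follow the protocol; the 1 mm^2 large-square
# area is assumed standard Fuchs-Rosenthal geometry.

def conidia_per_ml(counts_per_square, square_area_mm2=1.0, depth_mm=0.2):
    """Mean count per large square divided by the volume under one square."""
    mean_count = sum(counts_per_square) / len(counts_per_square)
    volume_ml = square_area_mm2 * depth_mm / 1000.0  # mm^3 -> mL
    return mean_count / volume_ml

def conidia_per_cadaver(counts_per_square, collection_volume_ml=1.0):
    """Total discharge, assuming all conidia end up in the collection solution."""
    return conidia_per_ml(counts_per_square) * collection_volume_ml

counts = [112, 98, 105, 121]  # example counts from four large squares
print(round(conidia_per_ml(counts)))       # -> 545000 conidia per mL
print(round(conidia_per_cadaver(counts)))  # -> 545000 conidia per cadaver
```

Counting several large squares and averaging, as sketched above, smooths out the high square-to-square variation noted for individual cadavers.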
Materials • Dead sporulating cadavers – variation between individual cadavers is high, and it should therefore be considered to repeat this with both males and females to account for sporulation differences between cadaver sexes. • 2 mL Eppendorf tube. • 1% Triton-X. • 0.2% maleic acid. • Vaseline® (Conopco, Inc., USA). • 0.2 mm haemocytometer (e.g. Fuchs-Rosenthal Chamber, 3720). • Entomology forceps. 1. Add Vaseline inside the lid. 2. Prepare a solution containing 1% Triton-X and 0.2% maleic acid. 3. Place 1 mL of the above solution in an Eppendorf tube. 4. Using cleaned forceps, gently grab a cadaver by the head and thorax and bury the head into the Vaseline, keeping the abdomen exposed. 5. Leave to sporulate for the duration of interest – for example, for 24 h if quantifying the conidial discharge under the conditions in the above protocol "Controlled E. muscae infections for virulence experiments". 6. After the selected duration, add the solution to a haemocytometer and place under a microscope to determine the spore concentration. Ethics statements We have used fungi and houseflies for our experiments, all of which complied with our institution's safety and laboratory guidelines. CRediT authorship contribution statement Sam Edwards: Project administration, Conceptualization, Methodology, Investigation, Validation, Writing – original draft, Writing – review & editing. Henrik H. De Fine Licht: Supervision, Funding acquisition, Project administration, Resources, Conceptualization, Methodology, Writing – original draft, Writing – review & editing. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments We would like to thank Jørgen Eilenberg, Annette Bruun Jensen and Andreas Naundrup for passing on their knowledge and skills of working with E. muscae to us.
This work was supported by the European Union's Horizon 2020 research and innovation programme Insect Doctors under the Marie Skłodowska-Curie grant (859850), and by a Sapere Aude grant (8049–00086A) from the Independent Research Fund Denmark, and a Semper Ardens: Accelerate grant (CF20–0609) from the Carlsberg Foundation, Denmark. | [
"GREENBERG",
"MARQUEZ",
"CICKOVA",
"MAKKAR",
"VANHUIS",
"KHAMESIPOUR",
"KELLER",
"EILENBERG",
"TOBIN",
"VEGA",
"KALSBEEK",
"EILENBERG",
"HANSEN",
"WATSON",
"ELYA",
"KRASNOFF",
"LOVETT",
"DERUITER",
"STEINKRAUS",
"GRYGANSKYI",
"ELYA",
"DEFINELICHT",
"DEFINELICHT",
"HAJEK... |
04b084496e034337b650ea723ec73019_A microfluidics-integrated impedancesurface acoustic resonance tandem sensor_10.1016_j.sbsr.2019.100291.xml | A microfluidics-integrated impedance/surface acoustic resonance tandem sensor | [
"Kustanovich, Kiryl",
"Yantchev, Ventsislav",
"Doosti, Baharan Ali",
"Gözen, Irep",
"Jesorka, Aldo"
] | We demonstrate a dual sensor concept for lab-on-a-chip in-liquid sensing through integration of surface acoustic wave resonance (SAR) sensing with electrochemical impedance spectroscopy (EIS) in a single device. In this concept, the EIS is integrated within the building blocks of the SAR sensor, but features a separate electrical port. The two-port sensor was designed, fabricated, and embedded in a soft polymer microfluidic delivery system, and subsequently characterized. The SAR-EIS tandem sensor features low cross-talk between SAR and EIS ports, thus promoting non-interfering gravimetric and impedimetric measurements. The EIS was characterized by means of the modified Randles cell lumped element model. Four sensitive parameters could be established from the tandem sensor readout, and subsequently employed in a proof-of-principle study of liposome layers and their interaction with Ca2+ ions, leading to transformation into molecular film structures. The associated shift of the sensing quantities is analysed and discussed. The combination of impedimetric and gravimetric sensing quantities provides a unique and detailed description of physicochemical surface phenomena as compared to a single-mode sensing routine. | 1 Introduction The integration of in-liquid sensing units in lab-on-a-chip devices is among the key challenges on the road to micro total analysis systems (μTAS) [ 1 ]. The combination of various independent sensor principles on the same chip is particularly attractive, but also especially difficult to implement [ 2 ]. Lowered fabrication costs, orthogonal and multi-analyte sensing capability, and reduced time of analysis are competing with increased design requirements, interfacing challenges, and additional materials compatibility considerations.
Therefore, the combination of two or more complementary sensing principles in a single physical sub-unit is very attractive, as it facilitates interfacing with the fluidic framework and reduces the required design effort. A variety of individual sensors capable of gas detection, or of chemical and biochemical analysis in a liquid environment, have been reported in the past, driven by demands from industrial technology manufacturers for small, inexpensive, high-performance sensors [ 3 ]. Commonly, these devices are based upon the measurement of resonance frequency shifts in piezoelectric materials, which are related to mass changes at the sensing interface (gravimetric analysis using acoustic wave technology [ 4–6 ]), or upon monitoring changes in the electrical impedance response of a system (electrochemical transduction techniques [ 7 ]). Being intrinsically label-free, gravimetric and impedimetric sensors in many cases require few or no sample preparation steps, compared to sensors that involve light absorption or emission, where the attachment of an optical label is often necessary. The introduction of acoustic wave technology into lab-on-a-chip platforms was an important milestone on the road to micro total analysis systems, providing not only mass sensing capability and direct electronic readout [ 8 ], but also additional sample preparation functions, such as fast fluidic actuation and contact-free particle manipulation and sorting [ 9 ]. Moreover, the electrode interfaces of acoustic wave sensors provide opportunities for the simultaneous evaluation of analyte-related electrochemical processes in conjunction with their gravimetric determination. Specifically, electrochemical impedance spectroscopy has been implemented along with acoustic wave sensing in combination devices, e.g., integrated with a miniaturized polymeric analyte delivery system for assessing water toxicity [ 10 ], studies of molecular lipid films [ 11 ], and some other applications [ 12–16 ].
In such studies, integrated sensors that detect mass changes and viscoelastic properties, and provide a response related to the dielectric properties of the analyte at the sensor interface, can provide complementary information about both the mechanical and electrical characteristics of the chemical or biochemical system under investigation. Simultaneous, complementary measurements are a reliable way to establish the reproducibility of individual sensor data, but can also reveal additional information about system features and behaviour [ 10 , 11 , 13 ]. For instance, Briand et al. performed EIS experiments to study the interaction of a supported lipid bilayer (SLB) with the pore-forming toxin gramicidin D (grD), where no significant change in the QCM-D response was observed upon addition of low grD concentrations, due to the small molecular weight of the peptide. However, the change in the electrical properties of the SLB upon insertion of active ion channels, owing to the presence of grD molecules inside the bilayer, was confirmed by the EIS response obtained. For higher concentrations, the QCM-D data showed dramatic changes in the viscoelastic properties of the membrane while the EIS spectra did not change [ 11 ]. Moreover, the penetration depth of the fringing electric fields above the planar interdigital electrodes reaches far beyond the acoustic penetration depth, as it is proportional to the spacing between the center lines of the electrode fingers [ 17 ]. This opens new possibilities for bioanalytical studies on cells and pathogenic microorganisms, where the biomembrane at the cell-surface interface and the integrity of the bulk structure can be simultaneously monitored. Examples of either technique applied individually have been reported earlier [ 18 , 19 ]. A combination of pathogen detection and drug response determination could, for example, be an application of the tandem sensor.
Impedance spectroscopy (IS) integration with acoustic wave sensing has previously been successfully demonstrated on the quartz crystal microbalance (QCM) platform [ 10 , 11 ]. These implementations feature an additional reference electrode, which occupies extra space as it is not a functional part of the QCM itself. Furthermore, the comparatively large physical size of the conventional QCM sensor limits miniaturization and sensitive measurements on small surface areas, as well as parallelisation and integration with microfluidic sample platforms. High-frequency alternatives to QCM have also been developed, which allow for sensor area reduction and parallelisation. Significant efforts in the field of microfluidics-integrated sensing technology have focused on thin-film electroacoustic technology (often referred to as piezo-MEMS). Microsystems based on this approach have a particularly strong potential for commercialization, due to fabrication technologies established by the radio frequency (RF) filter industry. Thickness-excited quasi-shear film bulk acoustic resonators (shear-FBARs) [ 20 , 21 ] have shown the strongest potential so far, since they have reached the stage of commercial sensing array prototypes [ 22 , 23 ]. Thin-film S0 Lamb waves and their equivalent extensional plate modes are also promising [ 24 , 25 ]. However, these approaches are limited by practical considerations concerning fabrication process uniformity, device fragility and strong local pressure sensitivity, the latter two being inherent to all thin-film membrane devices. Alternatively, shear surface transverse acoustic waves (SH-SAW) [ 26 ] have been explored as a high-frequency alternative to QCM. The high frequency of operation is associated with smaller sensor dimensions, because the acoustic wavelength scales inversely with the operation frequency.
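The inverse scaling of wavelength with frequency can be made concrete with a short back-of-the-envelope sketch. The SH-SAW numbers are back-calculated from the device figures quoted later in the text (λ = 2p0 = 20 μm at about 185 MHz); the AT-cut quartz shear velocity of roughly 3340 m/s used for the QCM comparison is a textbook value, not from this work, and the function name is illustrative.

```python
# lambda = v / f: the acoustic wavelength shrinks in inverse proportion to
# the operation frequency, which is what permits small sensor dimensions.

def wavelength_um(velocity_m_per_s, frequency_hz):
    """Acoustic wavelength in micrometres for a given phase velocity and frequency."""
    return velocity_m_per_s / frequency_hz * 1e6

# SH-SAW phase velocity back-calculated from lambda = 20 um at ~185 MHz:
saw_velocity = 20e-6 * 185e6                      # = 3700 m/s (assumption-level)
sar_lambda = wavelength_um(saw_velocity, 185e6)   # 20 um by construction
qcm_lambda = wavelength_um(3340, 5e6)             # ~668 um for a 5 MHz QCM (assumed velocity)
print(sar_lambda, qcm_lambda)
```

Under these assumptions the 185 MHz device works with an acoustic wavelength more than thirty times smaller than that of a 5 MHz QCM, which is the geometric basis of the sensor area reduction discussed above.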
Typically, SAW sensor implementations rely on delay-line configurations in which the SAW propagates over a comparatively long distance between input and output transducers (two-port configuration) to accumulate sufficient time delay and phase shift. Thus, the frequency scaling effects are to a large extent compensated. As a result, SAW delay-line sensors are relatively large and characterized by strong transmission loss when operated in liquid, so that additional signal processing is often required to remove interferences. These limitations can be overcome by utilizing a two-port SAW resonator with a smaller distance between the transducers and a set of reflectors on each end [ 27–29 ]. However, the SAW transducers in this configuration remain susceptible to liquid conductivity and can become short-circuited. To protect the transducers from the liquid environment, and to confine the analyte in the sensing area, SH-SAW sensors have been integrated with a PDMS microchannel system at the expense of increased damping losses [ 30 ]. Alternatively, sensing of biomolecules has been reported in a dry state after removal of the liquid from the sensor, employing Rayleigh SAW two-port resonators. This eliminates the interaction of the interdigital transducer (IDT) with conductive liquids [ 31 ]. Moreover, the two-port configuration is somewhat problematic when parallelization needs to be considered. For that reason, one-port SAW sensor configurations have been developed as a high-frequency, small form-factor alternative to the QCM [ 32 ]. We have recently introduced a microfluidics-integrated high-frequency sensor concept, combining the advantages of the one-port measurement setup with wafer-scale, commercially viable fabrication processes and materials. This concept of separate driving and sensing units was implemented for the first time in SAW sensor technology [ 33 , 34 ].
Our device utilizes the reflective gratings of a one-port SAW resonator as mass-loading sensing elements, with the SAW IDT being protected from the measurement environment and acting only as a read-out element. The optimum sensor performance was derived from a trade-off between the ability of the IDT to probe the sensing blocks and their sensitivity, which is determined by the amount of energy confinement [ 35 ]. This configuration achieves low susceptibility to damping with good control over device impedance, reduces the complexity of the readout electronics, and facilitates integration in sensor arrays. The new surface acoustic resonance (SAR) sensor can detect mass and viscous loadings in liquid at a level comparable to or better than state-of-the-art high-frequency gravimetric sensors, while also demonstrating lower damping from viscous loading. Our concept provides significant practical advantages for typical sensor integration and deployment, such as the ability to sense in conductive and highly viscous media. In the current work we demonstrate the integration of both the SAW gravimetric and the electrochemical impedance spectroscopy (EIS) sensing concepts in a single device with microfluidic support. Unlike other functional combinations, here the EIS is integrated within one of the SAW sensor building blocks with a separate EIS interface port. We gain the ability to simultaneously record impedimetric and gravimetric data for chemical and biochemical analytes, while enabling a high degree of sensor parallelisation. The envisaged utility of this sensing combination can be considered in two aspects. On the one hand, impedimetric data allow for precise calibration of the experiments; on the other hand, they can be used complementarily with the gravimetric data to improve sensing specificity and gain additional information about the dielectric properties of the analyte deposits on the surface.
The tandem of impedimetric and gravimetric patterns can represent a unique signature for a given analytical system. The demonstrated integration relies on the one-port SAR sensor design, specifically employing the SAW reflector gratings as sensing elements, spatially separated from the IDT read-out. Thus, the SAW reflective gratings, which usually are electrically short-circuited, can be reconfigured into an interdigitated electrode (IDE) capacitive configuration, galvanically isolated from the SAW transducer. In addition to the galvanic isolation, SAW and impedimetric measurements are performed within non-interfering frequency bands, leading to significantly suppressed measurement crosstalk. Impedance spectroscopy is performed at frequencies below 1 MHz, while the SAW resonance is monitored at about 185 MHz. It is also noted that in impedance spectroscopy integration with QCM, the sensing system employs relatively close (i.e. potentially interfering) frequency bands, as the QCM usually operates in the 5 MHz–10 MHz frequency range. The integrated SAR-impedance spectroscopy dual-sensing device was tested in an aqueous environment in the presence of lipid reservoir deposits on the sensor surface, which were further subjected to interaction with Ca 2+ ions. When lipid vesicles attach and spread on the sensor surface, the acoustic wave senses changes in mass and viscoelasticity through shifts in the resonant frequency and the magnitude of the conductance peak of the SAR sensor. The integrated EIS provides information about the electrical properties of the lipid membrane, such as its dielectric permittivity and dielectric loss (resistivity). Lipid membranes were specifically chosen as a versatile model system for mimicking the plasma membranes of biological cells. They are utilized for membrane protein investigations, studies of cell adhesion, membrane fusion and interaction with various molecules [ 36 , 37 ].
We consider these nanometre-thick membranes particularly suitable for practically evaluating sensor performance, as they provide small mass loadings and electrical perturbations, ease of deposition on a solid support, and a rich set of electrical and mechanical properties that strongly depend on the composition and the chemical environment. 2 Sensor design and measurement configuration 2.1 Design The principles of operation and the fabrication technology of the one-port SH-SAW resonant sensor have been thoroughly discussed in previous work [ 34 , 35 ]. Here we describe the sensor structure in brief, and emphasize its integration with impedance spectroscopy (IS). Fig. 1 shows a PDMS mould containing the microfluidic circuits to be interfaced with the reflector gratings, and an air cavity to be positioned above the IDT. It is separately fabricated and bonded onto the die containing the planar one-port sensor structure. In the original design, the electrically short-circuited reflective gratings are used as sensing elements, with the microfluidic structures placed on top, while the wideband IDT is protected from the measurement environment by an air cavity. The resonant frequency shifts are determined primarily by the mass load deposited over the reflector grating, which perturbs the propagation properties (phase velocity and reflectivity) of the acoustic wave travelling under it. We use Y -cut LiNbO 3 as the piezoelectric substrate. This material is known for its high electromechanical coupling coefficient of about 25% for the Leaky SAW (LSAW) that propagates with bulk radiation losses along the crystal X axis [ 38 ]. The "leaky" nature of the wave is suppressed by means of an Au periodic strip grating that slows down the wave, converting it to a surface-guided SH-SAW while efficiently reflecting it back into the resonator [ 39 ].
Alternatively, the recently introduced hetero-acoustic layer substrates employing a LiTaO 3 layer on quartz can be used, since they offer reasonably high electromechanical coupling and temperature stability [ 40 ]. The integration of the one-port SAW sensor with impedance spectroscopy is implemented through changes in the electrical connections of the strips in one or both of the SAW reflector gratings. In this new configuration, one of the grating busbars remains a common ground with the IDT, while the other becomes an EIS signal pad. The grating strips are now connected as interdigitated electrodes. Forming a regular IDE with a polarity periodicity resembling that of the IDT (2 strips per λ, where λ is the SAW wavelength at resonance) should be avoided, since such a structure will receive the propagating SH-SAWs electrically and cause re-excitation of SAWs because of the electrical impedance mismatch at the IDE ports. That would ultimately perturb the SAW sensitivity through externally induced wave interference effects. Instead, the IDE should be interdigitated in a split-electrode configuration [ 41 ], with the frequency of SAW excitation being different from the SAW resonance frequency. Thus, effects of SAW re-excitation and SAW-related charge accumulation are readily avoided at the cost of a reduced static capacitance of the structure. Here we propose a trade-off IDE design with retained capacitance, employing a split configuration of 3 electrodes per λ in the first 10λ next to the IDT, while the rest is in regular IDE configuration with 2 strips per λ. In such a configuration the SAW energy is strongly confined near the IDT, while attenuating along the reflectors. This attenuation is further boosted by the damping effects in the presence of liquid in the microfluidic containers. Accordingly, SAW regeneration is not likely to appear in the regular IDE part.
It is noted that SAW regeneration is associated with the formation of charges on the strips that are in synchronicity with the SAW at resonance. Thus, the split-electrode section of the IDE can ensure cancellation of the SAW-induced charge, and provide sufficient isolation between the EIS and IDT ports. Furthermore, the IDE structure was limited to only one of the reflector gratings, while the other remained electrically grounded. The latter was needed to ensure easy contacting of the SAW and EIS ports. In Fig. 2 , a sketch of the implemented design is shown along with an image of the SAW-EIS sensing chip as fabricated. The contact pads are fabricated long enough to be contacted outside the PDMS microfluidic device. We are currently developing a much more robust test fixture enabling chip size reduction and automated measurements, which employs spring-loaded contacts applied directly through the PDMS mould to the SAW and EIS ports, thus avoiding the use of long contact pads, which introduce some parasitic resistance and capacitive crosstalk between the ports. The integrated SAR-EIS sensor occupies a very small area, in the range of 2 mm 2 ( Fig. 2 b), while the microfluidic inlets and assembly procedures require chip dimensions in the range of 1 cm 2 . Accordingly, a minimal integration of about 5 SAR-EIS sensors within the same microfluidic chip with additional inlets is feasible, enabling the design of sensor array systems within an area comparable to the space occupied by a single QCM sensor. Parallelisation of sensors makes possible the simultaneous measurement of several analytes in the same sample, or parallel measurements of the same analyte for improved reliability. Details of the fabricated SAW-EIS sensor are shown in Fig. 3 . 2.1.1 Device fabrication A detailed description of the fabrication process has been provided in earlier publications [ 34 ].
Briefly, the metallization on top of a Y-cut LiNbO3 substrate consists of a 25 nm Ti adhesion layer, a 260 nm Au functional layer, and a 15 nm Ti cap layer. A 100 nm thick SiO2 passivation layer is deposited on top of the device to protect the electrodes from corrosion and to increase the bonding strength with the PDMS microfluidic delivery system. In view of the EIS performance, the additional coating with a dielectric film such as SiO2 is also known to further suppress the Faradaic response of the system [42]. Both the IDT and the reflectors are formed by a periodic grating array with 10 μm pitch (p0) and a strip width of about p0/2. The entire stack is 300 nm thick, and the device aperture is W = 40p0. The IDT consists of 7 pairs of electrodes for wideband operation, with a wavelength at synchronism λ = 2p0 = 20 μm. The number of strips in each reflector is 69, of which 49 overlap with a microfluidic container. A variation of grating reflectivity is introduced by means of narrowing the local mark-to-pitch ratio of each reflector near the IDT in order to confine a larger amount of SAW energy in the microfluidic containers, and thus to enhance the device gravimetric sensitivity. We referred to this design enhancement as lateral energy confinement [35]. The 1 cm thick polydimethylsiloxane (PDMS) microfluidic analyte and solution delivery system (channel width × height: 50 μm × 50 μm) was fabricated using standard soft lithography [43,44], and O2-plasma-bonded onto the surface of the one-port SAW resonator.

2.1.2 Measurements

The crosstalk between the sensing ports was initially tested by measuring the transmission insertion loss between the two ports. The S21 measurement was performed by means of a vector network analyser (VNA) in the frequency range 300 kHz–200 MHz, limited by the lowest frequency of the VNA (Planar 304/1, Copper Mountain Technologies, USA).
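The quoted geometry allows a quick cross-check of the implied operating point. The velocity below is a derived estimate (resonance frequency times wavelength), not a number stated in the text.

```python
# Derived quantities from the stated geometry: pitch p0 = 10 um and a
# resonance near 185 MHz. The implied SH-SAW phase velocity is an estimate.
p0 = 10e-6                # grating pitch (m)
lam = 2 * p0              # wavelength at synchronism (m)
f_res = 185e6             # observed resonance frequency (Hz)
v_saw = f_res * lam       # implied phase velocity (m/s)
aperture = 40 * p0        # device aperture W = 40*p0 (m)
reflector_len = 69 * p0   # 69 strips at p0 pitch (m)
```

With these numbers the implied phase velocity comes out at about 3.7 km/s, a plausible order of magnitude for shear-horizontal SAWs on LiNbO3.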
The transmission loss between the IDT and the EIS port measured near 185 MHz (the SAW resonance frequency) was found to be as low as −60 dB, while in the range from 300 kHz to 1 MHz it varied from −120 dB to −90 dB. This crosstalk is purely capacitive and is introduced by the contact pads; an improvement through optimized interfacing can be anticipated. It is further noted that the EIS operates at a frequency where acoustic waves are not excited; thus both functionalities, i.e. acoustic sensing and EIS sensing, can be operated independently. Our initial tests of the sensor ports in a liquid environment showed practically no mutual interference or influence on the measurement data. The noise of the resonance frequency measurement was below 0.5 ppm, which establishes the limit of acoustic frequency detection. This noise figure is typical for VNA-supported measurements of the SAW resonance frequency [35]. Fig. 4 shows the measurement setup. The sensor is mounted on a temperature control stage with 0.1 K temperature stability [35]. The setup allows for simultaneous measurement of the SAW and the EIS response. For that purpose, the IDT is probed by a VNA with a set of measurement probes (Picoprobe, Model 40A, GGB Industries Inc., USA). The resonance frequency and the magnitude of the conductance peak are determined from the device conductance obtained from the measured S11 parameter. The EIS port is connected to a potentiostat (SP-300 with Ultra Low Current adapter by Bio-Logic Science Instruments, France). The impedance spectra of the sensor were obtained by applying a 7.07 mV RMS AC voltage in the frequency range from 1 Hz to 1 MHz. Each data point was averaged 10 times.

3 Impedance spectroscopy fundamentals and equivalent circuit model

Impedimetric measurement data need to be quantified through a lumped-element model to obtain characteristic quantities. Suitable equivalent models with a good fit to real measurements have previously been proposed [45–49]. Fig. 5 shows a schematic representation of the IDE structure in contact with the solution and the deposited (lipid) film, together with its simplified electrical equivalent circuit model (ECM) giving the best fit. R_Sol represents the intrinsic buffer solution resistance, superimposed upon the parasitic resistance of the IDE structure itself (R_Geo); the latter is the only contribution when measuring in air. R_Leak is determined by the leakage current passing through the electrode interface with the analyte and the biochemical deposits, and C_Geo is a geometric (stray) capacitance between the two electrodes through the medium (including the LiNbO3 substrate, the liquid dielectric permittivity and the thin SiO2 coating, as well as any parasitic capacitances in parallel emanating from the connection of the IDEs to the potentiostat). The capacitance of the lipid layer C_Lipids is in series with the double-layer capacitance C_Dl formed between the IDEs and the analyte. In reality, the double-layer capacitor often deviates from an ideal capacitor due to roughness, porosity and inhomogeneities on the IDE surfaces. For that reason, an empirically modelled constant-phase impedance element (CPE) is used instead of an ideal capacitor to represent C_Dl. In our system, the presence of a sputtered (porous) SiO2 thin film over the Au IDEs justifies the use of a CPE in the model. The electrical double-layer capacitance mainly depends on the ion type and concentration, while changes in the CPE are also attributed to the formation of an additional deposited layer on the electrode surface, which in our case is a lipid layer. In air this equivalent circuit is simplified, featuring only the upper branch of capacitance and parasitic resistance in parallel. When lipids are not present on the sensing surface, the CPE-complementary C_Lipids is also omitted from the in-liquid model. In practical EIS sensing experiments the sensitivity is considered with regard to the change of the lumped elements.
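A minimal sketch of the equivalent circuit just described can make the element roles concrete. The topology below is our reading of the description (C_Geo in parallel with the parasitic resistance as the upper branch; R_Sol in series with R_Leak in parallel with the CPE, plus C_Lipids when a lipid layer is present, as the lower branch), not a published netlist; Q_dl is treated as a lumped CPE coefficient, whereas the per-area values quoted in the text would need scaling by the electrode area.

```python
import numpy as np

# Assumed ECM topology (a sketch, not the authors' netlist):
#   upper branch:  R_geo || C_geo   (parasitic/stray path)
#   lower branch:  R_sol + ( R_leak || [ CPE_Dl (+ C_lipids in series) ] )
def ecm_impedance(f, R_sol, R_leak, R_geo, C_geo, Q_dl, n_dl, C_lipids=None):
    jw = 2j * np.pi * np.asarray(f, dtype=float)
    z_cpe = 1.0 / (Q_dl * jw ** n_dl)                   # constant phase element
    z_if = z_cpe if C_lipids is None else z_cpe + 1.0 / (jw * C_lipids)
    z_low = R_sol + 1.0 / (1.0 / R_leak + 1.0 / z_if)   # interface branch
    return 1.0 / (1.0 / R_geo + jw * C_geo + 1.0 / z_low)
```

Evaluating this with DIW-like values from the text (R_Sol = 0.9 GΩ, R_Leak = 47 MΩ, C_Geo = 65.4 pF, a ~1.7 GΩ parasitic resistance, and an illustrative n ≈ 0.9) reproduces the qualitative picture reported below: a CPE-dominated impedance at low frequency and a C_Geo-dominated roll-off towards 1 MHz.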
The most sensitive elements are identified and selected as sensing quantities. Although the lumped elements have physical meaning, their behaviour typically cannot be determined independently, because of the inherent complexity of the sample composition. Often the CPE_Dl and R_Leak components are good identifiers of interface phenomena resulting from biochemical reactions, while C_Geo and R_Sol provide information on the bulk environment.

3.1 EIS-SAR sensor characterisation

3.1.1 In air and DI water

The baseline performance of the EIS-SAR sensor was initially assessed through measurements in air and deionized water (DIW). The high-frequency response is characterized by a well-defined, high quality-factor resonance at about 185 MHz, even when the device is immersed in DIW (Fig. 6). Upon exposure to DIW, a strong response of the SAW resonance frequency of about −2000 ppm and of the magnitude of the conductance peak of about −32% is observed, which is in agreement with our previous measurements [34]. Impedimetric measurements in air and DIW reveal the distinct performance of the IDE in the two environments. As seen from the Nyquist plot in Fig. 7a, the IDE performance in air is well represented by the semicircle determined by the stray capacitance C_Geo = 19.3 pF and the parasitic resistance of about 1.7 GΩ. At frequencies below 10 Hz measurement instability dominates, which is a known issue in EIS at very high impedances, since very small currents are being measured. Upon contact with DIW, the constant phase element becomes evident in the Nyquist plot (Fig. 7b) through the constant slope of the curve, especially in the frequency range below 1 kHz. In the frequency range from 1 kHz to 1 MHz a semicircle can be seen, the diameter of which depends on the conductance of the medium and the stray capacitance C_Geo in parallel. At frequencies below 1 kHz the impedance is dominated by CPE_Dl, which is caused by the electrical double layer at the IDE/DIW interface. Considering the noise-related signal instability at frequencies below 10 Hz, the impedance curve recorded at 10–100 Hz appears to be appropriate for monitoring the change of CPE_Dl. At higher frequencies up to 1 MHz, the dielectric behaviour of the solution (i.e. C_Geo) dominates the signal. The IDE capacitance shifted to C_Geo = 65.4 pF, which accounts well for the twice larger dielectric permittivity of DIW as compared to LiNbO3. R_Sol shifted down to 0.9 GΩ due to the finite resistivity of the DIW. The appearance of the double layer is manifested by CPE_Dl = 1.2 nF/(s^n cm²) and an interface leakage current determined by R_Leak = 47 MΩ. Although the EIS response is dominated by the latter, in the higher frequency range the stray capacitance becomes a current flow path concurrent to the DIW/IDE interface. The integrated EIS-SAR sensor clearly demonstrates physically meaningful responses, being able to provide complementary data on the acoustic and electric properties at the sensor surface. Furthermore, the dominating role of the CPE in the impedimetric spectra suggests enhanced EIS sensitivity towards physicochemical processes at the sensing surface.

4 Sensing of lipid deposition

Initially, DIW was replaced with the HEPES buffer needed for the subsequent lipid deposition. As seen from Table 1, this is accompanied by an additional SAW resonance frequency shift of about −118 ppm, along with a 2.1% decrease of the conductance peak magnitude. The impedimetric measurement shows an overall decrease of impedance, as seen in the Bode plot in Fig. 8. The decrease is seen both in the lower and the higher frequency regions, suggesting changes both near the IDE surface and in the liquid volume. This observation is further quantified in Table 1, where CPE_Dl increases by 30% from 1.2 nF/(s^n cm²) to 1.6 nF/(s^n cm²).
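Extracting a CPE exponent from the low-frequency window mentioned above (10–100 Hz) reduces, for an ideal CPE, to a straight-line fit in log–log coordinates. The snippet below demonstrates this on synthetic data with assumed values Q = 1.2e-9 and n = 0.9; it is an illustration of the principle, not a fit to the measured spectra.

```python
import numpy as np

# For an ideal CPE, |Z| = 1 / (Q * w**n), so log|Z| is linear in log(w)
# with slope -n and intercept -log(Q).
def fit_cpe(freq_hz, z_mag):
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    slope, intercept = np.polyfit(np.log(w), np.log(np.asarray(z_mag)), 1)
    return np.exp(-intercept), -slope      # (Q, n)

f = np.logspace(1, 2, 20)                  # the 10-100 Hz window
zmag = 1.0 / (1.2e-9 * (2 * np.pi * f) ** 0.9)   # synthetic, assumed Q and n
Q_fit, n_fit = fit_cpe(f, zmag)
```

In practice the full ECM is fitted instead of an isolated CPE, but the log–log slope remains a useful quick check that the low-frequency branch is indeed CPE-like.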
The lipid membrane was formed in the first step by exposing the SiO2-coated surface to a multilamellar lipid vesicle (MLV) suspension in HEPES buffer [50]. A detailed description of the MLV suspension preparation routines can be found in the materials and methods section below. The SAW resonance (see Table 1) underwent a strong downshift of −86 ppm due to the viscoelastic mass loading at the surface, while the magnitude of the peak conductance decreased by 2.3%, caused by acoustic wave damping in the lipid layer. The values were recorded after 10 min, when we observed saturation of the signal and flushed away the excess MLVs. Regarding the EIS response, the deposition of a lipid bilayer on the sensor surface adds a new charged layer acting as a leaky capacitor in series with the double layer, but it also causes improved conduction through the double layer, i.e., a change in the fundamental double-layer properties. These effects are clearly manifested below 1 kHz, where the impedance is lower compared to HEPES. For frequencies above 1 kHz the changes are insignificant, since the response there is dominated by the electrical properties of HEPES. These observations are further confirmed by extracting the elements of the ECM through a fitting process (see Table 1). Furthermore, we observed an increase of CPE_Dl by 45% and a reduction of R_Leak by 31%. A complementary capacitance C_Lipids = 3.2 nF was extracted from the fit. Together, the SAW and EIS quantities suggest that a layer with distinct viscoelastic properties and leaky-capacitance behaviour has been formed on the sensing surface. Upon addition of Ca2+ ions, a strong upshift of both the SAW resonance frequency and the magnitude of the peak conductance, by about +19 ppm and +2.1%, respectively, was determined. This shift brought the magnitude of the peak conductance very close to the HEPES level, while the resonance frequency remained closer to the lipid level.
The measurement was performed after the transformation was complete, which was confirmed by the stabilized sensor response. This response to Ca2+-induced lipid spreading is in good agreement with previously published data [35]: a relatively strong upshift of the magnitude of the peak conductance, in the range of 2–3%, and an upshift in frequency of around +20 ppm were observed. In our previous work [35] we explained this behaviour in relation to the improved rigidity upon Ca2+-induced formation of surface-supported molecular lipid films followed by rupturing, and the occurrence of nano-tubular protrusions floating in the aqueous medium [51–53]. For reference, we have captured this behaviour by laser scanning confocal microscopy, both on the sensor surface (Fig. 9a) and on a SiO2-coated glass surface (Fig. 9b–e). In Fig. 9a, a mixed film of lipid layer (lower-intensity dark green) and sparse MLVs (the lighter, larger objects with sizes of about 10 μm and above) is seen attached to the SiO2-coated sensor surface. In a separate experiment, an MLV (Fig. 9b) on a SiO2-coated transparent glass substrate, allowing for close-proximity observations, was subjected to Ca2+ interaction. Calcium ions at concentrations >0.8 mM initiate tension-driven surface wetting (Marangoni flow) by the lipid deposit, where the lipid layer spreads in a circular fashion as a rolling double bilayer from the MLV, gradually consuming the MLV (Fig. 9c) [54]. Excess MLVs are flushed away in a subsequent washing step (Fig. 9d), leaving behind a continuous lipid film on the surface. The membranes originating from different spreading patches merge in this process (Fig. 9e). Ca2+ ions continuously pin the lipid layer to the surface, forming a rigidified solid-supported layer.
When the lipid reservoirs are depleted, the continuous adhesion to the surface promoted by Ca2+ pinning causes a significant increase in lipid membrane tension, eventually resulting in rupturing. The lipid material released by the rupture process is transferred onto the proximal bilayer [50,52]. Detailed information on the Ca2+-induced changes has been extracted through simultaneous EIS characterisation. The Bode plot in Fig. 8 shows that the magnitude of the impedance has recovered to slightly above the HEPES level, which is associated with a significantly decreased conductivity of the interface layer, suggesting an interface structure with reduced CPE_Dl and/or increased R_Leak. This behaviour appears to result from the formation of an extensively ruptured rigid layer, which on one side reduces the capacitance of the layer, while on the other it reduces the surface leak conductivity through the neutralisation of charge carriers by the Ca2+ reaction. As shown in Table 1, CPE_Dl decreased by 38%, while R_Leak increased by 14%. Moreover, C_Lipids practically vanished, further supporting our notion of the electrical characteristics of the ruptured surface. Both the acoustic and the EIS responses underwent extensive signal recovery towards the base HEPES levels. The SAW resonance frequency and R_Leak showed a moderate recovery, which is due to the partial, but not complete, removal of weakly adhered reservoir vesicles; the remaining MLVs are transformed into the double bilayer during the spreading process. The magnitude of the peak conductance and CPE_Dl recovered almost to the HEPES levels, which we attribute to the loss of a homogeneous lipid double bilayer in the rupturing process.

5 Materials and methods for the lipid assay

5.1 Chemicals

Soybean Polar Lipid Extract (SPE) was purchased from Avanti Polar Lipids, Inc. (USA).
Chloroform, PBS tablets, and 4-(2-hydroxyethyl)piperazine-1-ethanesulfonic acid (HEPES) solution (1 M) were obtained from Sigma-Aldrich (Missouri, USA). Calcium chloride (CaCl2) was purchased from KEBO Lab (Sweden). Two different fluorophore-conjugated lipids were used for labelling: 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine conjugated with the fluorophore ATTO 488 (ATTO488-DOPE) (Fig. 9a), purchased from ATTO-TEC (Germany), and 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-(lissamine rhodamine B sulfonyl) (ammonium salt) (16:0 Liss Rhod PE), purchased from Avanti Polar Lipids (Fig. 9b–e).

6 Vesicle preparation

The multilamellar lipid vesicles were composed of SPE, consisting of PE (phosphatidylethanolamine) 22.1%, PI 18.4%, PC 45.7%, PA 6.9%, and other lipids 6.9%. The lipid mixtures in the fluorescence micrographs contained the fluorescently labelled lipid ATTO488-DOPE (1 wt%) or Rhodamine-PE (1 wt%). The giant vesicles were prepared as described in [55] and stored in 0.5 mL Eppendorf tubes at −20 °C until use. A glass coverslip (Menzel-Gläser, 24 × 60) with a 100 nm surface coating of SiO2 was prepared by reactive sputtering. A PDMS ring was placed on top of the surface, and 500 μL of 10 mM HEPES buffer (pH 7.4) was placed inside the PDMS ring. 60 μL of the giant lipid vesicle suspension was then added to the HEPES buffer and left for 10 min. Thereafter, the HEPES buffer was exchanged for a 5 mM CaCl2 solution. After 10 min, the solution was exchanged again for HEPES buffer and the surface was imaged.

Stereo-fluorescence microscopy: A Leica M205 FA upright fluorescence stereo microscope, equipped with a 1× 'Planachromat' main objective, an external halogen light source, and GFP and DSR filter sets, was used for obtaining Fig. 9a.
Confocal microscopy: A confocal laser scanning microscopy system (Leica SP8, Germany) with an HCX PL APO CS 40× oil objective (NA 1.25) was used for the acquisition of the confocal fluorescence images (Fig. 9b–e). Excitation for the imaging of Rhodamine-PE was provided by a white-light laser source (Leica) at 560 nm, and the emission was collected at 583 nm using a hybrid fluorescence detector.

7 Conclusions

Operation of a combined surface acoustic resonance (SAR) and impedimetric sensor within the confines of a single sensor die was demonstrated. The sensor was designed in a tandem arrangement, in which both sensing circuits physically share surface-printed components, in this case interdigital electrode fingers, but are operated in different frequency ranges through two electrical interface ports. The proposed SAR-EIS integration scheme demonstrates crosstalk below −60 dB and enhanced sensitivity towards particle and molecular film deposits at the sensing surface, as well as their calcium ion-induced transformation into one another. The obtained multiparametric sensor response was discussed with respect to the expected and observed phenomena, providing a first impression of the rich information that can be acquired by a tandem sensor. The SAW and EIS responses are not only complementary in view of detecting the nature of the molecular layer structures and their properties, but also provide insights into possibilities for further improvement of the sensor figures of merit.

Declaration of competing interest

The authors declare no conflict of interest.

Acknowledgements

We thank Gizem Karabiyik from the University of Oslo for providing the Ca2+-induced lipid spreading and rupturing experiments. This work was made possible through financial support obtained from the Swedish Research Council (Vetenskapsrådet) Project Grant 2015-04561, the H2020 ITN "Chemical Reaction Networks – CReaNET" – Ref.
812868, the Research Council of Norway (Forskningsrådet) Project Grant 274433, the Swedish Foundation for Strategic Research (SSF) (GMT14-0077), the UiO:Life Sciences Convergence Environment, as well as start-up funding provided by the Centre for Molecular Medicine Norway and the Faculty of Mathematics and Natural Sciences at the University of Oslo.
"LUKA",
"ZHANG",
"CHEN",
"LANGE",
"FU",
"FU",
"GRIESHABER",
"GO",
"DING",
"LIU",
"BRIAND",
"SABOT",
"XIE",
"HE",
"PINTO",
"JANSHOFF",
"MAMISHEV",
"XU",
"KATARDJIEV",
"DEMIGUELRAMOS",
"NIRSCHL",
"HOFFMANN",
"MIREA",
"MIREA",
"CALIENDO",
"HOHMANN",
"NOMURA",
"MUJA... |
River network-based index to clarify transmission of hydrological drought in reservoir-regulated basins
"Zhou, Mi",
"Xiong, Lihua",
"Jiang, Cong",
"Chen, Gang",
"Liu, Chengkai",
"Zha, Xini"
] | Study region
Hanjiang River basin (HRB), China.
Study focus
Understanding the transmission of hydrological drought along river networks is of great importance for the monitoring, forewarning, and mitigation of drought. However, it is extremely difficult to do so in complex river networks, particularly in the presence of spatially distributed reservoirs. Taking advantage of hydrological reasoning information within a hierarchical river network structure, this study proposed a nonstationary standardized streamflow index (NSSIrn) to clarify the transmission of hydrological drought in reservoir-regulated basins. A generalized regression model based on the normal distribution was then used to clarify the transmission of hydrological drought along reservoir-regulated river networks.
New hydrological insights for the region
The NSSIrn aided with river network information enables reservoir-induced alteration in drought transmission to be better clarified under nonstationary conditions. The NSSIrn-based drought characteristics determined throughout the HRB showed that droughts that occurred in the downstream region were more severe and longer lasting than those that occurred in the upstream region during 1957–2019. Moreover, regulation of the Danjiangkou reservoir weakens the correlation between upstream and downstream hydrological droughts, i.e., roughly one quarter of upstream drought events did not develop into drought events downstream of the reservoir. This study offers a valuable reference for evaluating the effect of other interventions on drought transmission in human-modified basins. | 1 Introduction As a recurring natural hazard characterized by sustained and extended occurrence of below-normal water deficit, drought poses an ever-increasing threat to water-related natural and societal systems through the water cycle; consequently, it has received increasing attention over the past decades ( Mishra and Singh, 2010; Das et al., 2020; Yin et al., 2023 ). Drought can be classified into several different types according to the form of water deficit, e.g., precipitation, streamflow, or soil moisture ( Heim, 2002; Peters et al., 2003 ); however, hydrological drought associated with deficit in (sub)surface water is recognized as the most important aspect of drought owing to its extensive influence on a wide variety of water use sectors ( Shukla and Wood, 2008; Wang et al., 2020 ). Generally, streamflow is a prime variable for characterizing hydrological drought because it determines water availability for various water supply activities ( Heim, 2002; Yasmin and Sivakumar, 2018 ). Owing to the hydraulic connection of river networks, spatial aggregation exists between streamflow processes at a specific location and areas upstream ( Mudelsee, 2007 ). 
The hydrological drought that occurs at a location of interest incorporates the streamflow deficit signals from areas upstream through the hierarchical structure of the river network. Hence, to distinguish it from the transition of one drought type to another (termed drought propagation), the transmission of hydrological drought along a river network is called drought transmission (Mishra and Singh, 2010; Xu et al., 2019), which is a remarkable feature of hydrological drought that is substantially different from other types of drought such as meteorological drought. Therefore, an improved understanding of the transmission of hydrological drought along a river network is of great importance for the monitoring, forewarning, and mitigation of drought. Numerous reservoirs have been constructed on rivers globally to regulate streamflow for various purposes such as flood control, irrigation, water supply, and hydropower generation (Mulligan et al., 2020; Dong et al., 2022). The hydraulic continuity between the upstream and downstream regions of a river is severed by the construction and operation of such reservoirs, which can eventually result in changes in the spatial behavior of hydrological drought within the river network (Wang et al., 2019; Jehanzaib et al., 2020; Brunner, 2021). Targeted studies have been extensively implemented to examine the effect of reservoir regulation on drought transmission in regulated river basins (He et al., 2017; Li et al., 2013; Zhang et al., 2015; Yu et al., 2019; Zhang et al., 2022a). Reservoir-induced alteration in drought transmission has been found to be closely related to the position, storage capacity, purpose of flow regulation, and corresponding operation rules of the reservoirs (Johnson and Kohne, 1993; Lorenzo-Lacruz et al., 2013; Wu et al., 2018; Ma et al., 2019).
However, previous studies focused mostly on the impact of single or aggregated reservoirs without consideration of the inherent linkage between the streamflow processes at different locations in the river network (López-Moreno et al., 2009; Guo et al., 2021; Wang et al., 2022). Actually, for a multireservoir network, such alteration at a given location represents the superposition of all upstream contributions from regulated or unregulated subbasins, as determined by the spatial distribution of the reservoirs within the river network (Dong et al., 2020; Cipollini et al., 2022). Clearly, the use of an aggregated reservoir without the aid of river network information confounds the discrepant impacts of spatially distributed reservoirs with different features (e.g., storage capacity, regulation purpose, and operation rules), and thus leaves the mechanism of drought transmission along the multireservoir network unclear. Hence, the expansion of river network information is essential for clarifying drought transmission in reservoir-regulated basins. In recent decades, the method predominantly used to characterize hydrological drought related to streamflow anomalies has been to deploy standardized drought indexes evolved from the well-known Standardized Precipitation Index (Bachmair et al., 2016; Tijdeman et al., 2020). Commonly, these indexes include the Standardized Streamflow Index (SSI), which reflects anomalies in observed streamflow (Shukla and Wood, 2008), and the Standardized Runoff Index (SRI), which relates to anomalies in modeled runoff per unit area (Vicente-Serrano et al., 2012; Tijdeman et al., 2020). These two indexes are broadly similar, and the former was selected for use in the present study.
Similar to the Standardized Precipitation Index, the traditional SSI expresses streamflow as a non-exceedance probability that allows spatiotemporal comparability of hydrological drought with the advantages of simplicity and effectiveness ( Diaz et al., 2020; Mishra and Singh, 2011 ). Thus, reliable estimation of the streamflow frequency distribution (SFD) is of great importance in calculation of the SSI. In practice, the SFD is commonly fitted based on in situ streamflow samples, which are assumed to follow a theoretical probability distribution with time-invariant parameters ( Shukla and Wood, 2008; Vicente-Serrano et al., 2012 ). In the case of a regulated river basin, the fundamental assumption of stationarity used in deriving the traditional SSI is not valid largely owing to reservoir regulation ( Li et al., 2013; Jiang et al., 2015 ). More recently, several studies have attempted to construct a nonstationary SSI (NSSI) through covariate-assisted estimation of a nonstationary SFD using the Generalized Additive Models for Location, Scale, and Shape (GAMLSS) ( Li et al., 2013; Wang et al., 2020; Zhang et al., 2021; Wang et al., 2022 ). Unfortunately, such studies focused mostly on the cumulative impacts of a multireservoir network as an entity without consideration of the individual effects of spatially distributed reservoirs within the river network. The objective of this study was to develop a river network-based NSSI (hereafter, NSSIrn) to clarify the transmission of hydrological drought in reservoir-regulated basins. 
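As a point of reference for the index construction discussed above, the stationary SSI can be sketched in a few lines: fit a time-invariant distribution to the streamflow sample of a given calendar month and map each value through the standard normal quantile function. The gamma choice and the values below are illustrative assumptions; the NSSIrn proposed here replaces this time-invariant SFD with a covariate-dependent, river network-based one.

```python
import numpy as np
from scipy import stats

def stationary_ssi(monthly_flow):
    """Stationary SSI: fit a gamma SFD, then map to the standard normal."""
    q = np.asarray(monthly_flow, dtype=float)
    a, loc, scale = stats.gamma.fit(q, floc=0)        # time-invariant SFD
    p = stats.gamma.cdf(q, a, loc=loc, scale=scale)   # non-exceedance prob.
    p = np.clip(p, 1e-6, 1 - 1e-6)                    # guard the tails
    return stats.norm.ppf(p)                          # negative = below normal

rng = np.random.default_rng(0)
flows = rng.gamma(shape=2.0, scale=300.0, size=63)    # e.g. 63 Januaries
index = stationary_ssi(flows)
```

Because the transform is monotone, the driest sample always receives the lowest index value; what the nonstationary variants change is the distribution against which "dry" is judged, not this mapping step.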
Therefore, this paper focuses on the following: (1) proposal of a river network-based hierarchical model for SFD estimation that accounts for both the streamflow dependence along the network and the regulation effect of spatially distributed reservoirs; (2) estimation of the monthly NSSIrn from the normalized SFD in the regulated Hanjiang River basin (HRB), the basin of one of the largest tributaries of the Yangtze River in China; and (3) clarification of the transmission of hydrological drought using the generalized regression model. The remainder of this paper is organized as follows. Section 2 introduces the study area and data sources used in this study. Section 3 derives the NSSIrn aided with river network information, and proposes a method to clarify drought transmission in reservoir-regulated basins. Section 4 presents the results of the river network hierarchical partition, SFD estimation, NSSIrn calculation, and drought transmission in the studied basin. A discussion comparing this work with previous studies is presented in Section 5, which also includes the mechanistic reason why the NSSIrn is superior to the traditional SSI, and an outline of limitations and related future research. Finally, the main conclusions are presented in Section 6.

2 Study area and datasets

2.1 Study area

The Hanjiang River, which is the longest and largest tributary of the Yangtze River in China, originates from the Qinling Mountains, flows across Shaanxi, Sichuan, Henan, and Hubei provinces, and finally discharges into the Yangtze River at Wuhan. The HRB (30°10′–34°20′N, 106°15′–114°20′E) is one of the most important economic zones of the Yangtze River Economic Belt, covering an area of 1.59 × 10⁵ km². The complex topography of the HRB, i.e., numerous tributaries and a substantial elevation difference (range: 32–2200 m), contributes to the spatiotemporal variability of the streamflow distribution (Jiang et al., 2015).
Moreover, more than 2900 reservoirs have been built to regulate streamflow for flood control, power generation, water supply, irrigation, and drought resistance purposes in the HRB, including the Shiquan, Ankang, Danjiangkou, Huanglongtan, and Yahekou mega reservoirs. As shown in Fig. 1, this study focused on the catchment above Huangzhuang (HZ) station on the mainstream of the HRB, which contains the five major water storages listed above and covers an area of 1.42 × 10⁵ km² (i.e., 89% of the area of the HRB). Danjiangkou reservoir, with a drainage area of 9.52 × 10⁴ km², was built in 1958, entered initial operation for power generation in 1967, and started full operation in 1974. The first-stage 162-m-high dam, with a total storage capacity of 17.45 km³, was the first mega water-regulating structure located on the mainstream of the HRB; it was heightened to 176.6 m in 2013 to provide an increased capacity of 31.95 km³ (Peng et al., 2020; Wang et al., 2022). Other dams located on the mainstream of the HRB include the Shiquan dam, forming a reservoir with a drainage area of 2.34 × 10⁴ km² and a total storage capacity of 0.37 km³, and the Ankang dam, holding 3.2 km³ of total storage capacity. Since the completion and operation of the Shiquan reservoir in 1974 and the Ankang reservoir in 1992, these two reservoirs have played key roles in flood control, irrigation, and power generation. The Huanglongtan reservoir, with a total storage capacity of 0.95 km³, was built in 1978 on a tributary of the HRB called the Du River, and it controls a drainage area of 1.11 × 10⁴ km² (i.e., 95% of the area of the Du River subbasin). The Yahekou reservoir, with a total storage capacity of 1.34 km³, was built in 1960 in the midstream region of the Tang River subbasin.
2.2 Data collection Monthly measurements of streamflow recorded at Ankang (AK), Xiangjiapin (XJP), Baihe (BH), Huanglongtan (HLT), Huangjiagang (HJG), Xindianpu (XDP), and HZ hydrological gauging stations from 1957 to 2019 were collected from Bureau of Hydrology, Changjiang Water Resources Commission. As shown in Fig. 1 , four of the stations (AK, BH, HJG, and HZ) are located on the mainstream, while the remaining three stations (XJP, HLT, and XDP) are located near the outlets of the tributaries of the Xun River, Du River, and Tang River, respectively. The construction and operation of the reservoirs in the HRB have inevitably disrupted the natural streamflow routing processes, resulting in significant nonstationarity in the streamflow time series ( Chen et al., 2007; Wang et al., 2022 ) and thus, also the spatiotemporal characteristics of hydrological drought within this region ( Jiang et al., 2021b; Wang et al., 2022; Zhang et al., 2022b ). 3 Methodology Fig. 2 illustrates the flowchart of the proposed NSSIrn intended to clarify the transmission of hydrological drought in reservoir-regulated basins. First, the hierarchical structure of the multireservoir network is generalized to enable consideration of both the streamflow dependence and the regulation effect of spatially distributed reservoirs within the river network. Then, the monthly NSSIrn for each station is calculated from the normalized cumulative distribution function (CDF), which is estimated from a river network-based hierarchical model. Finally, on the basis of NSSIrn-characterized hydrological drought events, the generalized regression model based on the normal distribution is used to clarify the transmission of hydrological drought along reservoir-regulated basins river networks. 3.1 Derivation of NSSIrn based on hierarchical model 3.1.1 Generalization of the hierarchical structure of a multireservoir river network The generalized tree-like river network considered in this paper, as shown in Fig. 
3, consists of one mainstream and N tributaries, along which numerous reservoirs have been built to regulate streamflow. Because of the inherent linkage of streamflow processes along the river network, the streamflow variable at a mainstream station of interest should be dependent on its upstream streamflow variables. Hence, the streamflow dependence among these stations can provide more complete information for understanding the changes in streamflow characteristics caused by reservoir regulation. Moreover, expanding the information base with hydrological reasoning beyond the at-site flood samples has been recognized as important for improving the accuracy of flood frequency distributions (Merz and Blöschl, 2008; Jiang et al., 2021a). A similar information expansion can be expected to improve the accuracy of the streamflow frequency distribution (SFD). As described earlier, the clarification of drought transmission in such a multireservoir network should therefore be carried out with the aid of river network information. Given the above, the river network can be divided into N nested river network hierarchies delimited by selected mainstream stations. The subdivided river network is labeled sequentially from the upstream to the downstream by the number k (k = 1, 2, …, N), which serves as the subscript of the variables of each hierarchy. Moreover, superscripts 1 and 2 are used to distinguish the variables associated with the mainstream and a tributary, respectively. In the k-th hierarchy, the tributary gauged by hydrological station H_k^2 flows into the mainstream between two tandem gauging stations, i.e., H_k^1 and H_{k+1}^1, and an equivalent reservoir lies downstream of the confluence, where the reservoir index RI_{k+1}^1 is used to quantify the effect of reservoir regulation on streamflow downstream of the reservoir.
To quantify the reservoir-induced alteration to the downstream flow regime in the basin, the dimensionless reservoir index (RI) developed by López and Francés (2013) was adopted in this study; its calculation formula is defined as follows:

(1) RI_k^l = Σ_{i=1}^{N_k^l} (A_i / A_k^l) · (V_i / C_k^l)

where RI_k^l is the reservoir index for station H_k^l (l = 1, 2); A_i and A_k^l represent the catchment area above reservoir i and above station H_k^l, respectively; V_i is the total capacity of reservoir i; C_k^l represents the mean annual streamflow at station H_k^l; and N_k^l is the number of reservoirs associated with station H_k^l for a given year. Specifically, reservoirs associated with a tributary station H_k^2 are those upstream of the station, whereas reservoirs associated with a mainstream station H_k^1 (l = 1) are those between stations H_{k−1}^1 and H_k^1.

3.1.2 Derivation of NSSIrn

As noted above, the traditional SSI proposed by Nalbantis and Tsakiris (2009) has been widely used to characterize hydrological drought under stationary conditions (Vicente-Serrano et al., 2012). However, follow-up studies recognized that the failure of the traditional SSI to capture drought events in regulated river basins stems from its disregard of the nonstationarity in streamflow time series. Subsequently, several GAMLSS-based SFD fitting methods have been developed to construct nonstationary SFDs and thereby improve the performance of the resulting NSSI in hydrological drought characterization (Wang et al., 2020; Zhang et al., 2021; Wang et al., 2022). Overlooked in the derivation of the NSSI, however, is the fact that streamflow, as a spatial aggregation process, exhibits spatial dependence within the river network, which is further complicated by the presence of spatially distributed reservoirs, as shown in Fig. 3.
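Eq. (1) maps directly to a small helper. This is a minimal sketch, not the authors' code: the argument names are illustrative, and units are the caller's responsibility (reservoir capacities and the station's mean annual flow must be expressed in commensurate volume terms for RI to be dimensionless).

```python
def reservoir_index(res_areas, res_capacities, station_area, mean_annual_flow):
    """Eq. (1): RI = sum_i (A_i / A) * (V_i / C), summed over the
    reservoirs associated with the station in a given year."""
    return sum((a_i / station_area) * (v_i / mean_annual_flow)
               for a_i, v_i in zip(res_areas, res_capacities))
```

Because each term is a product of two area and volume ratios, RI grows both with how much of the station's catchment lies behind dams and with how large the stored volume is relative to the annual flow.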
To address this problem, this study proposes the NSSIrn, the computation of which involves the following two steps: (1) fitting the probability distribution to monthly streamflow data using the river network-based hierarchical model, and (2) calculating the NSSIrn from the normalized CDF using the classic approximation of Abramowitz and Stegun (1965):

(2a) NSSIrn = κ (m − (a_0 + a_1 m + a_2 m^2) / (1 + b_1 m + b_2 m^2 + b_3 m^3))

(2b) m = √(−2 ln[F_{Q_k^l}(q_k^l)])

where q_k^l is the observed streamflow at station H_k^l; m is an intermediate variable; a_0, a_1, a_2, b_1, b_2, and b_3 are constants (a_0 = 2.515517, a_1 = 0.802853, a_2 = 0.010328, b_1 = 1.432788, b_2 = 0.189269, b_3 = 0.001308); when F_{Q_k^l}(q_k^l) ≤ 0.50, κ = −1, whereas when F_{Q_k^l}(q_k^l) > 0.50, the NSSIrn estimation entails replacing F_{Q_k^l}(q_k^l) by 1 − F_{Q_k^l}(q_k^l) in Eq. (2b) and letting κ = 1 in Eq. (2a); and F_{Q_k^l}(q_k^l) is the CDF of the streamflow variable Q_k^l evaluated at Q_k^l = q_k^l, which can be estimated from the following river network-based hierarchical model.

3.1.2.1 General framework of river network-based hierarchical model

For the k-th hierarchy shown in Fig.
3, the streamflow variable Q_{k+1}^1 at station H_{k+1}^1 is dependent on the upstream streamflow variables (Q_k^1, Q_k^2) and the reservoir index RI_{k+1}^1, which can be described by the following conditional distribution (Jiang et al., 2021a):

(3) Q_{k+1}^1 | (Q_k^1, Q_k^2, RI_{k+1}^1) ∼ f[q_{k+1}^1; θ_{Q_{k+1}^1}(q_k^1, q_k^2, RI_{k+1}^1; α_{Q_{k+1}^1})]

where f[q_{k+1}^1; θ_{Q_{k+1}^1}(q_k^1, q_k^2, RI_{k+1}^1; α_{Q_{k+1}^1})] denotes the conditional distribution of Q_{k+1}^1 on the explanatory variables (Q_k^1, Q_k^2, RI_{k+1}^1), given Q_k^1 = q_k^1 and Q_k^2 = q_k^2, rather than the SFD of Q_{k+1}^1; θ_{Q_{k+1}^1} is the distribution parameter vector, which generally contains the location parameter (μ_{Q_{k+1}^1}) and the scale parameter (σ_{Q_{k+1}^1}) of the distribution; and α_{Q_{k+1}^1} is the hyperparameter vector linking the distribution parameter vector θ_{Q_{k+1}^1} with the covariates (q_k^1, q_k^2, RI_{k+1}^1), which can be estimated using the GAMLSS. The GAMLSS is a semiparametric regression model for describing the time-varying characteristics of nonstationary time series (Jiang et al., 2015; Wang et al., 2022). Under the GAMLSS framework, the independent observations q_{k+1}^1 are assumed to follow a distribution whose parameter vector θ_{Q_{k+1}^1} can be related to the covariates (q_k^1, q_k^2, RI_{k+1}^1) using link functions (Rigby and Stasinopoulos, 2005). Taking μ_{Q_{k+1}^1} as an example, the functional relationship can be expressed as follows:

(4) g(μ_{Q_{k+1}^1}) = α_0 + α_1 q_k^1 + α_2 q_k^2 + α_3 RI_{k+1}^1

where g(·) is the link function, and α_0, α_1, α_2, and α_3 are the hyperparameters for μ_{Q_{k+1}^1}, i.e., α_{Q_{k+1}^1} = (α_0, α_1, α_2, α_3).
Deploying the total probability formula, the SFD of Q_{k+1}^1 can be derived as follows (Jiang et al., 2021a):

(5) f_{Q_{k+1}^1}(q_{k+1}^1) = ∬ f[q_{k+1}^1; θ_{Q_{k+1}^1}(q_k^1, q_k^2, RI_{k+1}^1; α_{Q_{k+1}^1})] h(q_k^1, q_k^2) dq_k^1 dq_k^2

where h(q_k^1, q_k^2) is the joint probability distribution of (Q_k^1, Q_k^2). As can be seen from Eq. (5), in addition to directly fitting a theoretical probability distribution to the streamflow series at H_{k+1}^1, the SFD of Q_{k+1}^1 can also be derived from the conditional distribution of Q_{k+1}^1 on the explanatory variables (Q_k^1, Q_k^2, RI_{k+1}^1) and the joint probability distribution of the upstream streamflow variables (Q_k^1, Q_k^2). The two-stage hierarchical model can be expressed as follows:

(6) Q_{k+1}^1 | (Q_k^1, Q_k^2, RI_{k+1}^1) ∼ f[q_{k+1}^1; θ_{Q_{k+1}^1}(q_k^1, q_k^2, RI_{k+1}^1; α_{Q_{k+1}^1})], with (Q_k^1, Q_k^2) ∼ h(q_k^1, q_k^2)

The Pearson correlation coefficient (r) is used to quantify the spatial correlation of streamflow variables (Wright, 1921).
When the absolute value of r is at least 0.20 (with p < 0.05), the streamflow variables Q_k^1 and Q_k^2 are considered correlated, and the joint distribution of (Q_k^1, Q_k^2) may be constructed using the copula technique (Nelsen, 2006; Jiang et al., 2019); otherwise, it is estimated as the product of the respective probability density functions (PDFs) of Q_k^1 and Q_k^2 (Jiang et al., 2021a), i.e.:

(7) h(q_k^1, q_k^2) = f_{Q_k^1}(q_k^1) · f_{Q_k^2}(q_k^2) · c(F_{Q_k^1}(q_k^1), F_{Q_k^2}(q_k^2); θ_c) when |r| ≥ 0.20 and p < 0.05; h(q_k^1, q_k^2) = f_{Q_k^1}(q_k^1) · f_{Q_k^2}(q_k^2) otherwise

where c(·) represents the PDF of the copula; f_{Q_k^1}(·) denotes the PDF of Q_k^1, which is the result of the two-stage hierarchical model in hierarchy (k − 1); f_{Q_k^2}(·) denotes the PDF of Q_k^2 at station H_k^2, which is assumed to obey certain types of theoretical probability distribution, the optimal distribution being obtained using the GAMLSS with RI_k^2 as the explanatory variable; F_{Q_k^1}(·) and F_{Q_k^2}(·) are the marginal distributions of Q_k^1 and Q_k^2, respectively; and θ_c is the copula parameter of F_{Q_k^1}(q_k^1) and F_{Q_k^2}(q_k^2). Specifically, the SFD of Q_1^1 in hierarchy 1 can be estimated by the GAMLSS in a manner similar to the SFD estimation of Q_k^2. Hence, the proposed hierarchical model presents a recursive relationship for SFD estimation from the upstream to the downstream along the river network.

3.1.2.2 Candidate distribution function

When applying the proposed hierarchical model to SFD estimation in the k-th hierarchy, suitable functions for both the SFD of Q_k^2 and the conditional distribution of Q_{k+1}^1 should be determined. In this study, candidate distributions were selected from four two-parameter probability distributions: the Gamma (GA), Gumbel (GU), Logistic (LO), and Normal (NO) distributions, as shown in Table S1. All of these distributions were considered in both stationary and nonstationary conditions.
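The two-branch construction of Eq. (7) above can be sketched as follows. This is an illustrative sketch only: the Gaussian copula is used as a stand-in family (the paper selects among candidate copulas), and the marginal PDF/CDF functions are placeholders supplied by the caller.

```python
import math
from statistics import NormalDist

_N = NormalDist()

def gaussian_copula_pdf(u, v, rho):
    """Density c(u, v; rho) of the bivariate Gaussian copula, u, v in (0, 1)."""
    x, y = _N.inv_cdf(u), _N.inv_cdf(v)
    s = 1.0 - rho * rho
    return math.exp(-(rho * rho * (x * x + y * y) - 2.0 * rho * x * y)
                    / (2.0 * s)) / math.sqrt(s)

def joint_pdf(q1, q2, f1, F1, f2, F2, r, p_value, rho):
    """Eq. (7): copula-based joint PDF when |r| >= 0.20 and p < 0.05,
    otherwise the independence product of the marginal PDFs."""
    base = f1(q1) * f2(q2)
    if abs(r) >= 0.2 and p_value < 0.05:
        return base * gaussian_copula_pdf(F1(q1), F2(q2), rho)
    return base
```

Note that with rho = 0 the copula density is identically 1, so the dependent branch collapses to the independence product, as it should.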
Thus, a total of 16 candidate configurations were constructed to find the most suitable, as shown in Table 1. In nonstationary conditions, at least one distribution parameter is conditioned on certain covariates by the GAMLSS, whereas in stationary conditions all distribution parameters are assumed to be fixed.

3.1.2.3 Parameter estimation and model optimization

The parameters of the candidate configurations were estimated using the maximum likelihood estimation method (Rigby and Stasinopoulos, 2005). The optimal configuration at each site for each month was determined in terms of the Schwarz Bayesian Criterion (SBC; Schwarz, 1978), where a smaller SBC value indicates a better configuration. The goodness-of-fit was assessed using the Kolmogorov–Smirnov (KS) test (Frank and Massey, 1951); a KS-test p-value of > 0.05 indicated that the model was reasonable.

3.1.2.4 Computation of CDFs derived by the hierarchical model

For the streamflow variable Q_k^2 at station H_k^2 in the upstream, the CDF of Q_k^2 given Q_k^2 = q_k^2 can be estimated as follows:

(8) F_{Q_k^2}(q_k^2) = ∫_0^{q_k^2} f_{Q_k^2}(ξ) dξ = ∫_0^{q_k^2} f[ξ; θ_{Q_k^2}(RI_k^2; α_{Q_k^2})] dξ

where θ_{Q_k^2} is the distribution parameter vector, and α_{Q_k^2} is the GAMLSS hyperparameter vector linking the parameter vector θ_{Q_k^2} with the covariate RI_k^2. According to the PDF of Q_{k+1}^1 in Eq. (5), the CDF of Q_{k+1}^1 at the downstream station H_{k+1}^1 can be derived as follows:

(9) F_{Q_{k+1}^1}(q_{k+1}^1) = ∫_0^{q_{k+1}^1} f_{Q_{k+1}^1}(ξ) dξ = ∫_0^{q_{k+1}^1} [∬ f[ξ; θ_{Q_{k+1}^1}(q_k^1, q_k^2, RI_{k+1}^1; α_{Q_{k+1}^1})] h(q_k^1, q_k^2) dq_k^1 dq_k^2] dξ

Notably, the relationship in Eq. (9) has no analytical solution because of its implicit definition of a time-varying probability distribution.
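Because Eq. (9) lacks an analytical solution, it can be evaluated by sampling, as formalized in Eqs. (10) and (11) that follow. The sketch below is illustrative, not the authors' implementation: the normal conditional distribution and the caller-supplied sampler and parameter functions stand in for the fitted GAMLSS model, and the sketch is combined with the Abramowitz–Stegun standardization of Eq. (2).

```python
import math
import random

def standardized_index(F):
    """Convert a CDF value F in (0, 1) to a standard normal deviate
    via the Abramowitz-Stegun rational approximation of Eq. (2)."""
    a0, a1, a2 = 2.515517, 0.802853, 0.010328
    b1, b2, b3 = 1.432788, 0.189269, 0.001308
    p = F if F <= 0.5 else 1.0 - F          # replace F by 1 - F when F > 0.50
    kappa = -1.0 if F <= 0.5 else 1.0
    m = math.sqrt(-2.0 * math.log(p))       # Eq. (2b)
    return kappa * (m - (a0 + a1 * m + a2 * m * m)
                    / (1.0 + b1 * m + b2 * m * m + b3 * m ** 3))

def downstream_cdf(sample_upstream, cond_params, M=5000, seed=1):
    """Monte Carlo scheme of Eqs. (10)-(11): draw upstream pairs from
    h(q1, q2), propagate each through the conditional model, and return
    the empirical CDF of the downstream streamflow variable."""
    rng = random.Random(seed)
    draws = []
    for _ in range(M):
        q1, q2 = sample_upstream(rng)       # sample from h(q1, q2)
        mu, sigma = cond_params(q1, q2)     # illustrative conditional parameters
        draws.append(rng.gauss(mu, sigma))  # sample from the conditional model
    draws.sort()
    return lambda q: sum(d <= q for d in draws) / (M + 1.0)   # Eq. (11)
```

Feeding a value of the returned CDF into standardized_index yields the NSSIrn value for that month, which is how the two halves of the method connect.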
Operationally, numerical integration based on the Monte Carlo sampling method (Niederreiter, 1978) is used as an alternative for estimating the PDF of Q_{k+1}^1: first, M sets of random samples (q_{k,m}^{*1}, q_{k,m}^{*2}), m ∈ Ω_0 = {1, 2, …, M}, are generated based on the joint probability distribution h(q_k^1, q_k^2), and then the PDF of Q_{k+1}^1 is estimated using the following equation:

(10) f_{Q_{k+1}^1}(q_{k+1}^1) = (1/M) Σ_{m=1}^{M} f[q_{k+1}^1; θ_{Q_{k+1},m}^1(q_{k,m}^{*1}, q_{k,m}^{*2}, RI_{k+1}^1; α_{Q_{k+1}^1})]

where θ_{Q_{k+1},m}^1 is the distribution parameter vector estimated from the random sample (q_{k,m}^{*1}, q_{k,m}^{*2}) and RI_{k+1}^1, m ∈ Ω_0. Similarly, M sets of random samples q_{k+1,m}^{*1} (m ∈ Ω_0) can be generated based on f[q_{k+1}^1; θ_{Q_{k+1},m}^1(q_{k,m}^{*1}, q_{k,m}^{*2}, RI_{k+1}^1; α_{Q_{k+1}^1})]. Consequently, the CDF of Q_{k+1}^1 is calculated from the empirical distribution function:

(11) F_{Q_{k+1}^1}(q_{k+1}^1) = (1/(M + 1)) Σ_{m=1}^{M} 1(q_{k+1,m}^{*1} ≤ q_{k+1}^1)

3.2 Clarification of spatial transmission of hydrological drought

3.2.1 Characterization of hydrological drought events

On the basis of the monthly NSSIrn time series, run theory was applied to characterize hydrological drought events with two selected characteristic variables: duration and severity (Guo et al., 2020). Drought duration (D) is the persistent period between the initiation time, when the NSSIrn falls below −0.50, and the termination time, when the NSSIrn rises above −0.50. Drought severity (S) is the cumulative NSSIrn deficiency during the corresponding drought duration.

3.2.2 Matching of upstream and downstream hydrological drought events

In this study, downstream hydrological droughts were matched to upstream hydrological droughts if their drought durations overlapped in time.
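The run-theory extraction of Section 3.2.1 and the overlap test used for matching in Section 3.2.2 can be sketched as follows. The −0.50 threshold follows the definition above; the per-event dictionary representation is an illustrative choice, not the authors' data structure.

```python
def extract_droughts(index_series, threshold=-0.5):
    """Run theory: a drought starts when the index drops below the
    threshold and ends when it recovers; severity is the cumulative
    index deficiency over the run (a negative number)."""
    events, start = [], None
    for t, v in enumerate(index_series):
        if v < threshold and start is None:
            start = t
        elif v >= threshold and start is not None:
            run = index_series[start:t]
            events.append({"start": start, "end": t - 1,
                           "duration": len(run), "severity": sum(run)})
            start = None
    if start is not None:                       # drought still ongoing at series end
        run = index_series[start:]
        events.append({"start": start, "end": len(index_series) - 1,
                       "duration": len(run), "severity": sum(run)})
    return events

def overlaps(ev_up, ev_down):
    """Upstream and downstream events are matched if their durations overlap."""
    return ev_up["start"] <= ev_down["end"] and ev_down["start"] <= ev_up["end"]
```

Counting how many upstream events overlap each downstream event (and vice versa) then yields the matching types distinguished in the next subsection.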
According to the number of upstream and downstream drought events with overlapping periods (Wu et al., 2022), the following five matching types were distinguished: (1) one-to-one (1→1), in which a matched group consists of exactly one upstream and one downstream hydrological drought event; (2) one-to-many (1→n), in which a matched group consists of one upstream hydrological drought event and more than one downstream hydrological drought event; (3) many-to-one (n→1), the inverse of one-to-many; (4) many-to-many (n→n), in which more than one upstream and more than one downstream hydrological drought event are involved in a matched group; and (5) many/one-to-none (n/1→0), in which an upstream hydrological drought is not transmitted downstream.

3.2.3 Construction of upstream and downstream transmission relationship

A generalized regression model with linear and nonlinear basic functional forms is used to estimate the transmission relationship between upstream and downstream drought characteristics. For the k-th hierarchy, suppose that D_{k+1}^1 represents the drought duration variable at downstream station H_{k+1}^1, while D_k^1 and D_k^2 are the drought duration variables for the matched upstream droughts at upstream stations H_k^1 and H_k^2, respectively. The generalized regression model based on the normal distribution is used to describe the relationship between D_{k+1}^1 and (D_k^1, D_k^2), which can be expressed as follows (Jiang et al., 2023):

(12) D_{k+1}^1 = μ_{D_{k+1}^1 | D_k^1, D_k^2} + ε_{D_{k+1}^1 | D_k^1, D_k^2}

where μ_{D_{k+1}^1 | D_k^1, D_k^2} is the conditional expected value of D_{k+1}^1, and ε_{D_{k+1}^1 | D_k^1, D_k^2} is the model residual, which is assumed to follow the normal distribution:

(13) ε_{D_{k+1}^1 | D_k^1, D_k^2} ∼ N(0, σ²_{D_{k+1}^1 | D_k^1, D_k^2})

where σ²_{D_{k+1}^1 | D_k^1, D_k^2} is the variance of the model residual.
Hence, the conditional probability distribution of D_{k+1}^1 with respect to (D_k^1, D_k^2) can be expressed as follows:

(14) D_{k+1}^1 | (D_k^1, D_k^2) ∼ N(μ_{D_{k+1}^1 | D_k^1, D_k^2}, σ²_{D_{k+1}^1 | D_k^1, D_k^2})

where μ_{D_{k+1}^1 | D_k^1, D_k^2} and σ_{D_{k+1}^1 | D_k^1, D_k^2} are the location and scale parameters, respectively. Under the framework of the generalized regression model, to capture linear or nonlinear relationships between D_{k+1}^1 and (D_k^1, D_k^2), given D_k^1 = d_k^1 and D_k^2 = d_k^2, each parameter can be described as a function of (d_k^1, d_k^2) chosen from linear and exponential candidates, as follows:

(15) μ_{D_{k+1}^1 | D_k^1, D_k^2} = a_{D,0} + a_{D,1} d_k^1 + a_{D,2} d_k^2 or μ_{D_{k+1}^1 | D_k^1, D_k^2} = e^{a_{D,0} + a_{D,1} d_k^1 + a_{D,2} d_k^2}; σ_{D_{k+1}^1 | D_k^1, D_k^2} = b_{D,0} + b_{D,1} d_k^1 + b_{D,2} d_k^2 or σ_{D_{k+1}^1 | D_k^1, D_k^2} = e^{b_{D,0} + b_{D,1} d_k^1 + b_{D,2} d_k^2}

where a_{D,0}, a_{D,1}, a_{D,2}, b_{D,0}, b_{D,1}, and b_{D,2} are model parameters estimated using maximum likelihood estimation (Rigby and Stasinopoulos, 2005). The definite expressive functions for D_{k+1}^1 are then obtained for each hierarchy by minimizing the SBC. The location parameter μ_{D_{k+1}^1 | D_k^1, D_k^2} describes the relationship between the conditional expected value of D_{k+1}^1 and the upstream drought duration variables (D_k^1, D_k^2), while the scale parameter σ_{D_{k+1}^1 | D_k^1, D_k^2} describes the distribution of the random residual term ε_{D_{k+1}^1 | D_k^1, D_k^2}. That is, the dispersion of d_{k+1}^1 around its expected value reflects the influence of random factors, other than the duration of upstream hydrological drought, on the duration of downstream hydrological drought. As the value of the scale parameter increases, the correlation between upstream and downstream drought duration decreases.
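Under a normal likelihood with a constant scale parameter, the maximum-likelihood fit of the linear location option in Eq. (15) reduces to ordinary least squares. The sketch below solves that simplified case via the normal equations; it is a minimal illustration only, since the full model of Eqs. (12)-(15) also lets the scale vary with the covariates and compares linear against exponential forms by SBC.

```python
def _solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_linear_location(d1, d2, y):
    """Least-squares estimate of (a0, a1, a2) in mu = a0 + a1*d1 + a2*d2,
    i.e., the linear location option of Eq. (15) with constant scale."""
    X = [[1.0, a, b] for a, b in zip(d1, d2)]
    XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)] for r in range(3)]
    Xty = [sum(row[r] * yi for row, yi in zip(X, y)) for r in range(3)]
    return _solve3(XtX, Xty)
```

A fitted a1 (or a2) near zero would indicate that the mainstream (or tributary) upstream drought duration carries little information about the downstream duration, which is exactly the attenuation effect the scale parameter discussion above describes.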
Similarly, suppose that S_{k+1}^1 represents the drought severity variable at downstream station H_{k+1}^1, while S_k^1 and S_k^2 are the drought severity variables for the matched upstream hydrological droughts at upstream stations H_k^1 and H_k^2, respectively. The relationships between S_{k+1}^1 and (S_k^1, S_k^2), given S_k^1 = s_k^1 and S_k^2 = s_k^2, can then be expressed as follows:

(16) μ_{S_{k+1}^1 | S_k^1, S_k^2} = a_{S,0} + a_{S,1} s_k^1 + a_{S,2} s_k^2 or μ_{S_{k+1}^1 | S_k^1, S_k^2} = e^{a_{S,0} + a_{S,1} s_k^1 + a_{S,2} s_k^2}; σ_{S_{k+1}^1 | S_k^1, S_k^2} = b_{S,0} + b_{S,1} s_k^1 + b_{S,2} s_k^2 or σ_{S_{k+1}^1 | S_k^1, S_k^2} = e^{b_{S,0} + b_{S,1} s_k^1 + b_{S,2} s_k^2}

where a_{S,0}, a_{S,1}, a_{S,2}, b_{S,0}, b_{S,1}, and b_{S,2} are model parameters. The definite expressive functions for S_{k+1}^1 can be obtained using the same method as that used for D_{k+1}^1.

4 Results

4.1 Partition of river network hierarchical structure and spatial correlations

With consideration of the river network shown in Fig. 1, as well as the spatial distributions of hydrological stations and reservoirs in the HRB, the river network can be divided into three nested two-stage hierarchies (i.e., N = 3). Because BH station in hierarchy 1 is located downstream of two parallel stations, i.e., AK and XJP, the streamflow variable Q_2^1 at BH station should be associated both with the bivariate streamflow variables (Q_1^1, Q_1^2) of AK and XJP stations and with RI_1^1, which quantifies the regulation effect of the Shiquan and Ankang reservoirs. For hierarchy 2, the streamflow variable Q_3^1 of HJG station is associated both with the bivariate streamflow variables (Q_2^1, Q_2^2) of BH and HLT stations and with RI_2^2 and RI_3^1, which quantify the effect of reservoir regulation through the Huanglongtan dam and the Danjiangkou dam, respectively.
Similarly, the streamflow variable Q_4^1 of HZ station in hierarchy 3 is associated both with the bivariate variables (Q_3^1, Q_3^2) of HJG and XDP stations and with RI_3^2, which quantifies the regulation effect of the Yahekou reservoir. Fig. 4 depicts the spatial correlations of monthly streamflow variables at the seven selected stations using Pearson correlation coefficients. Across the three hierarchies, the streamflow correlation coefficients for all calendar months range from 0.35 to 0.93 (p < 0.005), 0.39 to 0.86 (p < 0.001), and 0.36 to 0.85 (p < 0.003) for the pairs of parallel stations AK–XJP, BH–HLT, and HJG–XDP, respectively. This result illustrates that these bivariate variables are correlated and that their joint probability distributions should be constructed using the copula technique. In addition, the streamflow correlations for all calendar months range from 0.93 to 0.99, 0.48 to 0.96, 0.36 to 0.92, 0.15 to 0.88, 0.92 to 0.99, and 0.45 to 0.89 for the paired stations BH–AK, BH–XJP, HJG–BH, HJG–HLT, HZ–HJG, and HZ–XDP, respectively. This result indicates that streamflow within the network has strong upstream–downstream dependence, which should be considered in the SFD estimation. It is worth noting that the streamflow correlation for paired mainstream stations without an intervening reservoir (e.g., BH–AK, as high as 0.99 in October) is distinctly larger than that for pairs with such a reservoir (e.g., HJG–BH), especially from January to June, suggesting that the effect of reservoir regulation should also be considered.

4.2 SFD estimation based on the proposed hierarchical model

The SFDs of the four upstream stations (i.e., AK, XJP, HLT, and XDP) for all three hierarchies of the HRB were obtained using the GAMLSS with in situ streamflow samples and the covariate of RI, while the SFDs of the downstream stations (i.e., BH, HJG, and HZ) were derived sequentially using the hierarchical model with the conditional distributions on the upstream streamflow variables and RI.
The optimal model schemes for each month at these stations were selected based on the SBC; the optimal statistical connections between the distribution parameters and the explanatory variables are listed in Table S2. The p-values of the KS test are 0.135–0.921, 0.480–0.977, 0.154–0.895, 0.488–0.991, 0.195–0.983, 0.098–0.954, 0.197–0.921, 0.295–0.785, 0.447–0.999, 0.059–0.984, 0.348–0.959, and 0.291–0.961 for January to December, respectively (more details in Table S2). These KS-test results indicate that all selected models exhibited satisfactory fitting ability in SFD estimation. Table 2 presents the optimal model configurations and corresponding explanatory variables for each month at the seven stations. For the upstream stations in all three hierarchies, Table 2 clearly shows that model configurations with RI as the covariate are the best-fitting probability distributions for some calendar months at the stations affected by reservoir regulation (i.e., AK, HLT, and XDP), whereas the stationary models with the GA distribution are optimal for the other months, as well as for all months at XJP station, which is unaffected by reservoir regulation. For the three downstream stations (i.e., BH, HJG, and HZ), the optimal model relies mostly on the upstream streamflow variable(s) (see Table S2), implying the superiority of model configurations incorporating river network-based streamflow dependence over those using only RI as the covariate. Overall, covariate-assisted model configurations with different distributions are optimal for 51 month–station combinations, accounting for about 61% of the total of 84 (i.e., 12 months × 7 stations), while the stationary models (S1) are optimal for the remaining 33 combinations (i.e., 39% of the total).
That is, for the studied basin as a whole, no single probability distribution can effectively fit the observed streamflow series at every station and in every month, because the best-fitting probability distributions vary by month along the river network. Owing to space limitations, only the evolutions of the estimated SFDs against the observed streamflow data during 1957–2019 at the seven stations in January and July are shown in Fig. 5 as examples. Most of the in situ streamflow samples fall within the area between the 2.5% and 97.5% quantiles, suggesting that the proposed hierarchical model presents acceptable performance in the HRB. Fig. 5 also shows that the effect of reservoir regulation on the mean values of streamflow at downstream stations diminishes gradually with increasing distance between the station and the reservoir. For example, following completion of the Ankang reservoir in 1992, the mean values of streamflow at AK and BH stations in January increased from 138 and 180 m^3/s to 208 and 255 m^3/s, i.e., increases of 51% and 42%, respectively. Similarly, following completion of the Danjiangkou reservoir in 1967, the mean values of streamflow at HJG and HZ stations in July declined from 1896 and 3176 m^3/s to 1467 and 2631 m^3/s, i.e., reductions of 23% and 17%, respectively.

4.3 Estimation of the NSSIrn

On the basis of the optimal SFDs (including those for the 33 stationary month–station combinations), monthly NSSIrn values during 1957–2019 at the seven stations were calculated using Eq. (2). Time series of the calculated NSSIrn values for all seven stations are shown in Fig. 6; the range of variation in NSSIrn at most stations is from −4 to 4. Using the NSSIrn, drought characteristics at the seven stations in the HRB, including drought duration and severity, were extracted by run theory.
Table 3 clearly indicates that drought duration over the studied basin shows spatial variation, ranging from 3 months (AK) to 5 months (HJG) in mean drought duration, and from 11 months (AK) to 25 months (HJG) in maximum drought duration. Overall, drought duration in the HRB tends to increase from the upstream to the downstream (Table 3), and similar tendencies can be found in the spatial evolution of drought severity during the 63-year study period. Drought events at upstream stations (e.g., AK, XJP, and HLT) have lesser severity and shorter duration, while those at downstream stations (e.g., HJG and HZ) are more severe and longer lasting. The average drought duration increased from 3 months at AK station to 5 months at HZ station, and more droughts with greater severity (S < −6) occurred at the downstream HJG and HZ stations. As shown in Fig. S1, the NSSIrn correlations of BH, HJG, and HZ stations with their upstream mainstream stations, i.e., BH–AK (0.97), HJG–BH (0.64), and HZ–HJG (0.95), pass the 0.05 significance level, and those with the upstream tributary stations in the same hierarchy, i.e., BH–XJP, HJG–HLT, and HZ–XDP, are 0.83, 0.46, and 0.69, respectively, with p-values of < 0.01. This finding indicates that the NSSIrn also has strong upstream–downstream correlation in response to the streamflow correlation. Likewise, the NSSIrn correlation of a station pair with an intervening reservoir (e.g., HJG–BH) is distinctly lower than that of a pair without such a reservoir (e.g., BH–AK).

4.4 Drought transmission in the studied basin

To analyze drought transmission in the HRB, drought events at upstream stations were matched with corresponding drought events at downstream stations following the hierarchical structure of the river network. The classification results of this matching are shown in Fig. 7.
This figure shows that single-paired (1→1) groups occupy the largest proportion in all three hierarchies, although the proportion varies substantially among hierarchies. For BH station in hierarchy 1 and HZ station in hierarchy 3, 1→1 groups account for 82.46% and 63.64% of the total drought events, respectively; i.e., more than half of the drought events at BH and HZ stations developed from single upstream hydrological drought events. In comparison, for HJG station in hierarchy 2, single-paired groups account for only 49.12% of the total drought events. Additionally, the n/1→0 group accounts for 24.56% of the total drought events at HJG station, notably more than at BH (1.75%) and HZ (6.82%) stations; that is, nearly one quarter of upstream drought events did not develop into drought events at HJG station. Meanwhile, 22.81% of the total drought events at HJG station were caused by the merging of multiple upstream drought events, which is also much larger than the equivalent proportion at BH (5.26%) and HZ (13.64%) stations. These results indicate an obvious pooling phenomenon of hydrological drought at stations downstream of the Danjiangkou reservoir. Under the hierarchical structure of the river network, the generalized regression model was used to describe the transmission relationship of the drought characteristic variables, including duration (D) and severity (S). The conditional distribution parameters of both D and S at BH, HJG, and HZ stations on their upstream drought characteristic variables and the reservoir index are summarized in Table 4. Clearly, both D and S of downstream droughts are closely related to the equivalent values of upstream droughts.
The location parameter of each conditional distribution is a linear function of the characteristic variables of upstream drought, while the scale parameter is an exponential function of those variables, except at HJG station, where RI is also included as an explanatory variable.

5 Discussion

5.1 Comparisons with assumed theoretical probability distributions

Traditionally, assuming some theoretical probability distribution, the SFDs at downstream stations can also be fitted to in situ streamflow samples by the GAMLSS with the covariate of RI (Wang et al., 2022; Shao et al., 2022). To highlight the advantages of the hierarchical model in capturing the evolution of in situ streamflow samples, Fig. 8 compares the SFDs derived using the hierarchical model with those from the GAMLSS for the three downstream stations (i.e., BH, HJG, and HZ) for April and September as examples. Evidently, the results of the two methods for the selected months mostly exhibit apparent discrepancies at HJG and HZ stations. In the hierarchical model, the SFDs at HJG and HZ stations exhibit a decline in mean values following the construction of the Ankang reservoir in 1992, an evolution similar to that of the SFD at BH station. By comparison, the GAMLSS-derived distributions with RI as the only covariate show a decline in the mean values of the SFDs at HJG and HZ stations after 1967, which differs from the evolution of the SFD at BH station in both April and September. Fig. 4(d) shows that the streamflow variable in April at HJG station is related to that at BH station (r = 0.77), while the streamflow variable at HZ station is significantly correlated with that at HJG station (r = 0.96). Thus, the regulation effect of the Ankang reservoir after 1992 on the SFD at BH station can transfer to the SFDs at both HJG and HZ stations, resulting in similar SFD evolution at the three stations.
Obviously, the failure of the GAMLSS-derived distributions with the covariate of RI to capture the similar evolution of the SFDs at different stations within the river network results from their neglect of the inherent linkage between these streamflow processes (Wang et al., 2022). Moreover, the irregular fluctuations in the quantile curves of the hierarchical model better describe the nonlinear changes of observed streamflow at the three stations in the HRB. These findings indicate that the hierarchical model, which accounts for both the river network-based streamflow dependence and the regulation effect of reservoirs, performs better in estimating the SFD in reservoir-regulated basins than the GAMLSS-derived distribution with RI as the only covariate.

5.2 Comparison of the NSSIrn with the traditional SSI

As a standardized variable with zero mean and unit variance, the NSSIrn can be compared with the traditional SSI proposed by Nalbantis and Tsakiris (2009) across time and space. The grade standards of the SSI and the NSSIrn are classified into five categories: (1) normal (>−0.50), (2) mild (−1.00 to −0.50), (3) moderate (−1.50 to −1.00), (4) severe (−2.00 to −1.50), and (5) extreme (≤−2.00). As shown in Fig. 9 for the example of HJG station, comparison of the NSSIrn with the traditional SSI demonstrates that drought events identified by the NSSIrn are generally more severe than those determined using the SSI, especially for the four historical records of severe drought events marked by the shaded areas: (A) 1966, (B) 1978, (C) 1998, and (D) 2014 (Ren et al., 2013; Wang et al., 2022). This result indicates that the proposed NSSIrn is more sensitive than the SSI under extreme situations. Although similar results have been reported in numerous previous studies (Zou et al., 2018; Wang et al., 2022; Shao et al., 2022), the mechanistic reasons for such disparities have rarely been studied comprehensively.
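The five-grade classification listed above maps directly onto a small helper. A minimal sketch; the boundary convention (lower bound included in each category, per the ranges as listed) is an interpretive choice.

```python
def drought_grade(value):
    """Classify an SSI/NSSIrn value: normal (> -0.50), mild (-1.00, -0.50],
    moderate (-1.50, -1.00], severe (-2.00, -1.50], extreme (<= -2.00)."""
    if value > -0.5:
        return "normal"
    if value > -1.0:
        return "mild"
    if value > -1.5:
        return "moderate"
    if value > -2.0:
        return "severe"
    return "extreme"
```

Applying this to the worked values discussed in the next paragraphs (e.g., −0.10, −1.33, −1.83, −2.17) reproduces the normal, moderate, severe, and extreme categories assigned there.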
For the SFD at HJG station in January, the PDF under assumed stationary conditions, as shown in Fig. 10 (a), can be obtained by directly fitting the theoretical probability distributions to observed streamflow data with a mean value (i.e., location parameter) of 651 m 3 /s. Similar values of monthly streamflow observed in 1957 (275 m 3 /s) and in 1998 (242 m 3 /s) fall within a narrow interval of the probability density, and are therefore assigned the same moderate drought category by the SSI (= −1.33 and −1.45, respectively). These results are inconsistent with historical documents of drought disaster in the HRB. In contrast, the PDFs obtained from the hierarchical model considering the actual nonstationarity of streamflow provide different mean values of 286 m 3 /s in 1957 and 780 m 3 /s in 1998. Consequently, the former is classified by the NSSIrn as normal (NSSIrn = −0.10), while the latter is defined as a severe drought (NSSIrn = −1.83). The reason why similar values of observed streamflow would be classified into different drought categories is that the proposed NSSIrn, benefiting from consideration of time-varying covariates, is able to respond to changes in environmental factors. Similar disparities can be found in other months, e.g., July ( Fig. 10 (b)), in which the monthly streamflow values of 797 m 3 /s in 1966 and 812 m 3 /s in 2001 are both assigned to a mild drought by the SSI (= −0.92 and −0.90, respectively). However, the drought categories characterized by the NSSIrn are different; extreme drought is detected by the NSSIrn with a value of − 2.17 in 1966, while mild drought is detected for the drought event that occurred in 2001 (NSSIrn = −0.87). These results confirm that the proposed NSSIrn appears to be more reliable as an indicator for hydrological drought assessment under changing conditions. 
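The January example above can be reproduced schematically. Only the observation (275 m 3 /s) and the two mean values (651 and 286 m 3 /s) are taken from the text; the gamma family and its shape parameter are our illustrative assumptions, so the exact index values differ from the paper's:

```python
from scipy.stats import gamma, norm

def std_index(x, mean, shape=10.0):
    """Standardize x under a gamma distribution with the given mean.
    The gamma family and shape=10 are illustrative assumptions."""
    return float(norm.ppf(gamma.cdf(x, shape, scale=mean / shape)))

x_jan_1957 = 275.0  # observed January 1957 streamflow, m^3/s (from the text)

# Stationary fit (mean 651 m^3/s) vs nonstationary fit (mean 286 m^3/s in 1957):
idx_stationary = std_index(x_jan_1957, mean=651.0)
idx_nonstationary = std_index(x_jan_1957, mean=286.0)
```

Under the stationary fit the observation lands deep in the dry tail, while under the time-varying mean it sits near normal, mirroring the SSI = −1.33 versus NSSIrn = −0.10 contrast in the text.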
5.3 Effect of reservoir regulation on drought transmission Severing of hydraulic continuity between upstream and downstream regions by the construction of a reservoir has a direct causal effect on drought transmission in reservoir-regulated basins. By considering the downstream drought characteristics as the response variable and the upstream drought characteristics (i.e., D and S ) and RI as conditional variables, the generalized regression model enables us to capture the cause–effect relationship between reservoir regulation and drought transmission in the studied basin. Taking the example of the Danjiangkou reservoir, which has the largest water storage capacity in the HRB, the conditional PDFs of drought duration (D31) and severity (S31) at HJG station for four selected values of the upstream characteristic variables, with three typical RI31 values of 0.1, 0.5, and 0.9, are shown in Fig. 11 . Clearly, the conditional PDFs of both drought duration and severity flatten with increase in the value of RI31, indicating that as the water storage capacity of the reservoir increases, the correlation between upstream and downstream hydrological drought characteristics decreases. Moreover, the influence of reservoir regulation increases gradually with increase in both the duration and the severity of upstream drought. 5.4 Limitations and future works The NSSIrn has all the inherent limitations of being a probability-based drought index. Similar to the SSI ( Vicente-Serrano et al., 2012; Tijdeman et al., 2020 ), the limitations of the NSSIrn mainly include the following aspects: (1) As a data-driven approach, the NSSIrn is not directly applicable to ungauged or poorly-gauged basins, which limits its scope of application, since long data sets are available for only a limited number of basins around the world. (2) Spatial variability in the statistical properties of streamflow series makes it difficult to select the most suitable distribution to calculate the NSSIrn in a large basin. 
(3) The length of the streamflow record has a considerable impact on the values of a probability-based drought index ( Mishra and Singh, 2010 ), which requires special attention from the NSSIrn user. Hence, further studies may focus on examining the effect of the length of the streamflow record on the NSSIrn estimation. As the focus of this study is to propose a river network-based index to clarify drought transmission in reservoir-regulated basins, the proposed NSSIrn only considers nonstationarity attributable to reservoir regulation, by using RI as a covariate in the SFD estimation model. However, this does not mean that application of the NSSIrn to hydrological drought characterization is limited to the nonstationary conditions caused by reservoir regulation. Further efforts may be made toward applying it to other nonstationary conditions induced by other human interventions and climatic change. Besides, some research questions have been raised by this study. For example, the results of run theory analysis showed that there is a lagged response to upstream hydrological drought at the stations downstream of the Danjiangkou reservoir after its completion in 1967. Follow-up studies of how reservoir regulation affects the time lag in drought transmission along the river network are needed. 6 Conclusions In this study, a river network-based nonstationary standardized streamflow index (NSSIrn) was proposed to clarify the transmission of hydrological drought in reservoir-regulated basins. The main advantage of the proposed NSSIrn is that the index, aided by river network information, gains the ability to clarify drought transmission along a river network, including the complicated case represented by the presence of distributed reservoirs considered in this paper. A case study of the HRB was conducted to examine the efficiency of the proposed NSSIrn in characterizing hydrological drought in regulated basins. 
Correlation analysis results showed that streamflow within the river network in the studied basin has strong upstream–downstream dependence, highlighting the need to incorporate streamflow dependence, following the hierarchical structure of the river network, into the SFD estimation. Hence, a river network-based hierarchical model was proposed, which showed satisfactory fitting ability throughout the HRB. In comparison with the conventional nonstationary SFD estimation method that uses the GAMLSS with the covariate of RI to fit in situ streamflow samples, the hierarchical model generates a more reasonable SFD estimation. Benefiting from this, the proposed NSSIrn is a more reliable indicator for drought assessment under changing conditions in comparison to the traditional SSI. With the NSSIrn-based drought characteristics in the HRB, it was revealed that regulation of mega reservoirs (e.g., the Danjiangkou reservoir) weakens the correlation between upstream and downstream hydrological droughts, i.e., 24.56% of upstream drought events would not develop into drought events at stations downstream of the reservoir. The above findings provide valuable information for monitoring, forewarning, and mitigation of drought in the HRB. Although the reservoir index was selected as a significant covariate with which to consider the nonstationarity of streamflow due to reservoir regulation, the NSSIrn could be extended to other nonstationary conditions attributable to other human interventions and climatic change. Further analysis is suggested to evaluate the time lag in drought transmission in reservoir-regulated rivers. Relevant findings might be useful for drought and water resources management under changing conditions. CRediT authorship contribution statement Mi Zhou : Conceptualization, Data curation, Methodology, Software, Writing – original draft. 
Lihua Xiong : Funding acquisition, Supervision, Writing – review & editing. Cong Jiang : Supervision, Writing – review & editing. Gang Chen: Writing – review & editing. Chengkai Liu : Resources, Software. Xini Zha : Writing – review & editing. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments This research was supported financially by the National Natural Science Foundation of China (Grant Nos. U2240201 and 41890822 ), the Ministry of Education "Plan 111" Fund of China (Grant No. B18037 ), and the National Key R&D Program of China (Grant Nos. 2021YFC3000205 and 2021YFC3200304 ), all of which are greatly appreciated. We thank James Buxton MSc, from Liwen Bianji (Edanz) (www.liwenbianji.cn/), for editing the English text of a draft of this manuscript. The authors are grateful to the anonymous reviewers for providing numerous constructive suggestions. Appendix A Supporting information Supplementary data associated with this article can be found in the online version at doi:10.1016/j.ejrh.2023.101604 . | [
"ABRAMOWITZ",
"BACHMAIR",
"BRUNNER",
"CHEN",
"CIPOLLINI",
"DAS",
"DIAZ",
"DONG",
"DONG",
"FRANK",
"GUO",
"GUO",
"HE",
"HEIM",
"JEHANZAIB",
"JIANG",
"JIANG",
"JIANG",
"JIANG",
"JIANG",
"JOHNSON",
"LI",
"LOPEZ",
"LOPEZMORENO",
"LORENZOLACRUZ",
"MA",
"MERZ",
"MISHR... |
439e8a13f5af4ad8af578d93e06fdf75_Exploring the intercalation chemistry of layered yttrium hydroxides by 13C solid-state NMR spectrosc_10.1016_j.mrl.2022.03.001.xml | Exploring the intercalation chemistry of layered yttrium hydroxides by 13C solid-state NMR spectroscopy | [
"Liu, Yanxin",
"Jiang, Shijia",
"Xu, Jun"
] | Layered rare earth hydroxides (LREHs) are a novel class of two-dimensional materials with potential applications in various fields. The exchange reactions with organic anions are typically the first step for the functionalization of LREHs. Although the laminar structures seem to be clear for anion-exchanged compounds, the state of intercalated organic anions and their interactions with cationic rare earth hydroxide layers remain unclear. Herein, we demonstrate that the use of 13C solid-state nuclear magnetic resonance (ssNMR) spectroscopy enables extraction of key information on the state of intercalated organic anions, such as their local chemical environment, stacking, and dynamics, which was previously difficult or impossible to obtain. In combination with powder X-ray diffraction and ab initio density functional theory calculations, the intercalation chemistry of two representative layered yttrium hydroxides with selected monovalent organic anions was studied in detail. The products can undergo secondary exchange with a divalent organic anion, depending on the match between the basal spacings of the two phases, i.e., the replacement of benzenesulfonate (BS−), 2,4-dimethylbenzene sulfonate (DMBS−), and 4-ethylbenzene sulfonate (EBS−) with 2,6-naphthalene disulfonate (NDS2−) is allowed due to the insignificant change in basal spacing after exchange, while the replacement of the very long dodecyl benzene sulfonate (DBS−) and dodecyl sulfate (DS−) with NDS2− is forbidden. The results therefore provide valuable insights into the structure-property relationships of LREH-based functional materials. | 1 Introduction Layered metal hydroxides such as layered double hydroxides (LDHs) have shown significant scientific and industrial importance since their anion-exchange, exfoliation, and self-assembly capacities have been discovered [ 1–4 ]. 
The properties of layered metal hydroxides can be further modulated by incorporating heteroatoms into cationic metal hydroxide layers [ 5–7 ]. Towards this end, rare earth (RE) elements are promising heteroatom candidates due to their well-known optical, magnetic, and catalytic activities [ 4 ]. However, it is very challenging to substitute the metal ions of LDHs (e.g., Mg 2+ and Al 3+ ) with target RE ions due to the marked differences in ionic radii and coordination chemistry. In recent years, anion-exchangeable layered rare earth hydroxides with the molecular formula of RE 2 (OH) 5 X⋅1.5H 2 O (LREH-X, X − represents the intercalated anion) have been demonstrated as a novel class of layered metal hydroxides [ 8–10 ], and the incorporation of other RE ions in certain LREH-X such as LYH-X is straightforward. By varying the composition of metal hydroxide layers and intercalated anions (or nanostructures), new functional materials have been obtained, exhibiting great potential in applications including catalytic materials [ 11 ], adsorption materials [ 12 ], magnetic materials [ 13 ], optical materials [ 14 , 15 ], etc. [ 16–20 ]. Although routine characterizations such as single-crystal and powder X-ray diffraction (XRD), infrared (IR) spectroscopy, elemental analysis (EA), thermogravimetric analysis (TGA), scanning and transmission electron microscopy (SEM and TEM) [ 21–24 ], etc. have provided fruitful information on hydroxide layers and anions of LREH-X, complementary techniques such as solid-state nuclear magnetic resonance (ssNMR) spectroscopy are always desirable to obtain information that is not available from those routine characterizations. ssNMR spectroscopy is highly sensitive to the local structural information of studied nuclei, and thus should be suitable for directly probing the state of hydroxide layers and intercalated anions. 
For example, the difference between the free state and the intercalated state of anions may induce an observable change in chemical shift values and can thus be used to validate the presence of anions in the interlayer space. This non-destructive analytical technique has been extensively used to study layered materials, and it can retrieve key structural information including the evolution of their structures during electrochemical cycles [ 25 , 26 ], the determination of coordination polyhedra within layers [ 27–30 ], etc. In the case of layered metal hydroxides, ssNMR has been used to probe the Mg 2+ /Al 3+ cation ordering of LDHs [ 31–33 ], the crystal structure of LREH-X and similar compounds [ 34–36 ], etc. We have reported that 1 H→ 89 Y cross-polarization (CP) magic-angle spinning (MAS) NMR experiments enable determination of the crystal symmetry of three LYH-X compounds (LYH-Cl, LYH-Br, and LYH-NO 3 ) by measuring the number of 89 Y NMR peaks [ 34 ], which in principle equals the number of crystallographically inequivalent Y sites in the structure. 89 Y ssNMR data further reveal ordering of coordinated water molecules instead of the disorder suggested by XRD measurements. In addition, two-dimensional (2D) 89 Y– 89 Y correlation ssNMR experiments probed the connectivity between different Y sites within yttrium hydroxide layers. 1 H→ 89 Y CP MAS NMR measurements also uncovered that the intercalation chemistry of the three LYH-X compounds can be very different: it is straightforward to convert LYH-Cl to LYH-Br and LYH-NO 3 by anion-exchange, while the opposite conversions seem impossible or challenging. 
This study thus inspires us to explore the intercalation chemistry of LYH-X in more detail, since anion-exchange is typically the first step for the functionalization of LREH-X materials: the replacement of Cl − by long-chain organic anions prior to exfoliation results in monolayer nanosheets of LREH-Cl [ 14 ]; the surface modification of LREH-X by anionic surfactants varies the catalytic performances [ 11 ]; the intercalation of different organic anions within LREH-X yields luminescent materials with different transition energy [ 22 , 23 ], etc. However, the state of intercalated organic anions and their interactions with hydroxide layers remain unclear or even unexplored. In this paper, we studied the anion-exchange reactions of two prototypical LYH-X compounds, namely LYH-Cl and LYH-Br, with monovalent organic anions, including benzenesulfonates with different sidechains and dodecyl sulfate. The products were further exchanged with the divalent 2,6-naphthalene disulfonate (NDS 2− ) anion. We demonstrate that 1 H→ 13 C CP MAS ssNMR experiments can retrieve key structural information of intercalated organic anions such as their local chemical environment, stacking, and dynamics, in combination with ab initio density functional theory (DFT) calculations and powder XRD measurements. Our results therefore provide an important basis for exploring intercalation chemistry and understanding structure-property relationships of LREH-X. 2 Experimental section 2.1 Sample preparation All chemicals used in this work were of analytical grade and used without any pretreatment. All samples were stored under an atmosphere of 70% relative humidity (i.e., maintained by saturated NaCl solution). The preparation methods of LYH-X (X = Cl, Br) are based on those reported in the literature [ 37 ]. First, 0.011 mol YX 3 ⋅ n H 2 O was dissolved in 35 mL ultra-pure water, and then mixed with 0.021 mol NaOH and 0.0144 mol NaX under stirring. 
The resulting solution was transferred to a 100 mL Teflon-lined autoclave and hydrothermally treated at 150 °C for 12 h. After cooling to room temperature, the mixture was centrifuged at 10000 rpm for 5 min. The precipitate was washed twice with ultra-pure water and dried at 70 °C to obtain white powder. LYH-X samples intercalated with monovalent organic anion (OA − , the samples are hereafter referred to as LYH-OA) were prepared as follows. The selected organic anions are benzenesulfonate (BS − ), 2,4-dimethylbenzenesulfonate (DMBS − ), 4-ethylbenzenesulfonate (EBS − ), dodecylbenzenesulfonate (DBS − ), and dodecyl sulfate (DS − ), respectively. In a typical anion-exchange reaction, a mixture of 1 mmol LYH-X and 3 mmol sodium salt (hereafter referred to as SBS, SDMBS, SEBS, SDBS, and SDS, respectively) was added to 35 mL ultra-pure water, and the supernatant was discarded after shaking at 220 rpm for 8 h. The solid was then transferred to a 100 mL Teflon-lined autoclave and hydrothermally treated at 100 °C for 12 h. After cooling to room temperature, the mixture was centrifuged at 10000 rpm for 5 min. The precipitate was washed twice with ultra-pure water and dried at 70 °C to obtain white powder. LYH-OA samples were further exchanged with NDS 2− anion. A mixture of 1 mmol LYH-OA and 3 mmol sodium salt (SNDS) was added to 35 mL ultra-pure water and then shaken at 220 rpm for 8 h. The mixture was transferred to a 100 mL Teflon-lined autoclave and hydrothermally reacted at 100 °C for 12 h. After cooling to room temperature, the mixture was centrifuged at 10000 rpm for 5 min. The precipitate was washed twice with ultra-pure water and dried at 70 °C to obtain white powder product. 2.2 Sample characterization Powder XRD measurements were performed on a Rigaku Smart Lab diffractometer using the copper K α radiation ( λ = 0.15406 nm). The scanning speed, scanning range, and step value were 6°/min, 3° ≤ 2 θ ≤ 65°, and 0.01°, respectively. 
Simulated powder XRD patterns were generated using the Mercury software package (CCDC). 1 H→ 13 C CP MAS ssNMR experiments were carried out at a magnetic field strength of 9.4 T, corresponding to a 13 C Larmor frequency of 100.64 MHz. A Bruker AVANCE IIIHD spectrometer and a 4.0 mm Bruker 1 H/ 31 P– 15 N MAS probe (sample weight: ∼90 mg) were used, with a spinning frequency of 10 kHz. The Hartmann-Hahn match conditions for 1 H→ 13 C CP MAS experiments were optimized on solid 1,2- 13 C-glycine with a TPPM 1 H decoupling field strength of 62.5 kHz. The field strengths for the 1 H excitation pulse and the 13 C CP contact pulse were 62.5 kHz and 53.8 kHz, respectively. The recycle delay was 4 s. Different contact times (0.1 ms, 1 ms, 5 ms, and 10 ms) were tested in CP MAS ssNMR experiments ( Fig. S1 ), and the results obtained at 0.1 ms and 5 ms are selected and analyzed in this work. 13 C chemical shifts were referenced to the methylene signal of solid adamantane at 38.48 ppm. In all ssNMR experiments, 64 scans were accumulated at ambient temperature. All free-induction decays (FIDs) were zero-filled to 65536 points prior to Fourier transformation. The deconvolution of 13 C NMR spectra was performed using the DMFIT software package [ 38 ]. 2.3 Theoretical calculations Ab initio DFT calculations were conducted using the Gaussian software package [ 39 ]. The input structures of organic anions were created by using the GaussianView software [ 40 ]. Geometry optimization and NMR calculation were performed at the B3LYP/6-31G level. The calculated isotropic magnetic shielding σ iso is converted to the corresponding isotropic chemical shift δ iso according to a linear correlation fitted between the experimental chemical shift ( δ iso ) and calculated magnetic shielding ( σ iso ) values of 13 C for several representative organic compounds ( Fig. S2 ). 3 Results and discussion 3.1 Anion-exchange from LYH-X to LYH-OA As Fig. 
1 illustrates, the LYH-X (X = Cl, Br) compounds possess yttrium hydroxide layers separated by replaceable anions. However, the crystal symmetry is different: it is the orthorhombic P 2 1 2 1 2 space group for LYH-Cl, whereas the space group changes to monoclinic P 2 1 in the case of LYH-Br. This difference is believed to be responsible for the irreversible anion-exchange between LYH-Cl and LYH-Br, in which only the transformation from LYH-Cl ( P 2 1 2 1 2 space group, high symmetry) to LYH-Br ( P 2 1 space group, low symmetry) is possible by just eliminating a mirror plane at the Y1 site [ 34 ]. Anion-exchange reactions of LYH-X with the same OA − are thus studied and compared in this work to check whether similar phenomena are present. Benzenesulfonates with different sidechains were used to explore the influences of sidechain structure on anion-exchange: i.e., benzenesulfonate (BS − ), 2,4-dimethylbenzenesulfonate (DMBS − ), 4-ethylbenzenesulfonate (EBS − ), and dodecylbenzenesulfonate (DBS − ). Dodecyl sulfate (DS − ) is then compared with DBS − to unravel the effects of aromatic rings. It should be noted that the sodium salts of DBS − (SDBS) and DS − (SDS) are common swelling agents for two-dimensional (2D) materials [ 14 , 41 , 42 ], and the information obtained on the state of intercalated DBS − and DS − anions is valuable for understanding the exfoliation process. The phase purity and crystallinity of the LYH-X compounds were first confirmed by powder XRD measurements ( Fig. 2 a) [ 34 ]. The basal spacing calculated using the Bragg equation 2 d sin θ = nλ is consistent with the literature values (8.4 Å for LYH-Cl and 8.2 Å for LYH-Br, respectively), where n is the diffraction order of the crystallographic plane (001), d is the interplanar spacing of (001), λ is the wavelength of the X-ray, and θ is the glancing angle. The smaller basal spacing observed for LYH-Br is due to displacement between yttrium hydroxide layers ( Fig. 
1 ), even though the ionic radius of Br − (1.96 Å) is slightly larger than that of Cl − (1.81 Å) [ 43 ]. After exchanging with OA − , the powder XRD patterns of the anion-exchanged compounds (hereafter referred to as LYH-X-OA) all changed dramatically ( Fig. 2 b–f), with enlarged basal spacings ( Table S1 ). Such changes are often used to confirm the incorporation of foreign anions into the structure of 2D materials. The powder XRD patterns of LYH-Cl and LYH-Br exchanged with the same OA − (e.g., LYH-Cl-BS and LYH-Br-BS, Fig. 2 b) always look almost identical, implying that the structural differences between the two precursors only subtly affect their exchange reactions with OA − . However, small discrepancies in basal spacing are indeed present (e.g., 14.6 Å for LYH-Cl-BS and 15.1 Å for LYH-Br-BS, respectively, Table S1 ). Moreover, the relative intensities of the diffraction peaks corresponding to hydroxide layers ((001), (002), etc.) are lower in the LYH-Br-OA compounds. When the sidechain length of the benzenesulfonates increases, the relative intensities of the aforementioned diffraction peaks grow accordingly. It is therefore evident that although powder XRD measurements are straightforward for probing the laminar structures of LYH-OA compounds, the information extracted on the state of organic anions in the structure is quite limited. The LYH-OA compounds are further investigated by 13 C ssNMR spectroscopy. From the NMR perspective, 13 C NMR spectral features and 1 H→ 13 C CP dynamics can provide fruitful information on the state of organic species: the 13 C isotropic chemical shift ( δ iso ) value is sensitive to the change in local carbon environment associated with the intercalation, and the peak width (i.e., full width at half maximum, FWHM) qualitatively relates to the ordering of the local carbon environment. Typically, the higher the local ordering, the smaller the peak width. 
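The basal spacings quoted in this section follow from the Bragg equation 2 d sin θ = nλ with the Cu Kα wavelength given in Section 2.2; a small helper function (ours, not the authors' code) makes the conversion explicit:

```python
import math

CU_KALPHA_ANGSTROM = 1.5406  # Cu K-alpha wavelength (0.15406 nm, Section 2.2)

def basal_spacing(two_theta_deg, wavelength=CU_KALPHA_ANGSTROM, n=1):
    """Interplanar spacing d = n * lambda / (2 sin(theta)) in angstroms,
    computed from the diffraction angle 2-theta of a (00l) reflection."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength / (2.0 * math.sin(theta))
```

For example, a (001) reflection near 2θ ≈ 10.5° corresponds to d ≈ 8.4 Å, the LYH-Cl value quoted above; larger diffraction angles give smaller spacings.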
In 1 H→ 13 C CP ssNMR experiments, the change in signal intensity as a function of contact time (ct) can be described by the following equation [ 44 , 45 ]: S(τ) = Smax(1 − TCP/T1ρH)^(−1)[exp(−τ/T1ρH) − exp(−τ/TCP)], where TCP is the CP time constant, T1ρH is the proton spin–lattice relaxation time in the rotating frame of reference, and τ is the contact time. TCP relates to the second moment of the dipolar coupling interaction between 1 H and 13 C, and T1ρH is affected by the local dynamics. As a consequence, the signal growth is governed by the strength of the 1 H– 13 C dipolar coupling interaction, i.e., a peak attributed to a carbon with directly bonded hydrogen grows faster and can be observed at a very short ct, while local dynamics induce dramatic decay or even disappearance of the 13 C signal at a long ct. This approach is validated by taking LYH-NDS as an example, which is known to consist of highly ordered NDS 2− anions within the interlayer space [ 46 ]: according to the single-crystal XRD data, NDS 2− anions are tilted at an angle of 37.8° between yttrium hydroxide layers, with significant C-π stacking interaction between aromatic rings ( Fig. S6 ). As Fig. 3 a illustrates, the 1 H→ 13 C CP MAS NMR spectra of SNDS and LYH-NDS are different, revealing the change in the local environment of NDS 2− anions after intercalation. In addition, all 13 C NMR peaks are very sharp (FWHM: ∼60 Hz), consistent with the high ordering of NDS 2− anions within LYH-NDS. When the contact time is increased from 0.1 ms to 5 ms, the dramatic growth of the two peaks at 139.5 and 130.8 ppm suggests that they correspond to carbons without directly bonded hydrogen, while the insignificant decay of the three peaks at 129.2, 125.7, and 123.4 ppm must be associated with the suppressed local dynamics due to the C-π stacking interaction [ 46 ]. This interpretation is further verified by the DFT-assisted assignment of 13 C peaks ( Fig. 
3 a and Table S2 ) and solution NMR data ( Fig. S7 ) [ 47 , 48 ]. 13 C NMR spectral features and 1 H→ 13 C CP dynamics are therefore used to explore the state of organic anions in LYH-OA compounds ( Figs. 3 and 4 ). For benzenesulfonates without or with short sidechains (BS − , DMBS − , and EBS − ), the discrepancies between the intercalated state and the free state (i.e., in sodium salts) can be probed by 13 C ssNMR spectroscopy. However, such intercalation-induced change cannot be observed for bulky OA − including DBS − and DS − . In all cases, the 1 H→ 13 C CP MAS NMR spectra of LYH-Cl or LYH-Br exchanged with the same OA − always look very similar ( Figs. S9–S10 ), consistent with the powder XRD results. Local ordering and dynamics of carbons can be further extracted by analyzing the 13 C ssNMR data. Taking LYH-Cl-BS as an example ( Fig. 3 b), the widths of all 13 C peaks are comparable to those of LYH-NDS, indicating that the intercalated BS − anions are highly ordered. The rapid growth of the peak at 142.3 ppm suggests that it corresponds to the carbon of BS − without directly bonded hydrogen, whereas the insignificant decay of the peaks at 130.8, 129.2, and 125.7 ppm implies that the local dynamics of the corresponding carbons must be restricted. In a similar manner, it is revealed that DMBS − anions are also ordered within the interlayer space of LYH-Cl-DMBS, with negligible local dynamics ( Fig. 3 c). In contrast, local disorder and dynamics of aromatic carbons are present for intercalated EBS − ( Fig. 4 a) and DBS − ( Fig. 4 b). If the aromatic ring is removed from the structure (i.e., DS − ), the long alkyl chain of the intercalated anions becomes highly ordered, and the local dynamics are suppressed accordingly ( Fig. 4 c). The assignment of the 13 C NMR spectra is accomplished by combining CP dynamics, DFT calculations, and solution NMR data, and the results are shown in Figs. 3 and 4 and Tables S3–S7 . 
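The buildup-and-decay behavior used above to distinguish protonated from non-protonated carbons follows directly from the CP equation given earlier; a sketch with illustrative time constants (the TCP and T1ρH values below are ours, chosen only to show the two regimes):

```python
import math

def cp_signal(tau_ms, t_cp_ms, t1rho_ms, s_max=1.0):
    """CP signal versus contact time tau:
    S(tau) = S_max * (1 - T_CP/T_1rhoH)^(-1) * [exp(-tau/T_1rhoH) - exp(-tau/T_CP)].
    The signal rises on the T_CP scale (set by the 1H-13C dipolar coupling)
    and decays on the T_1rhoH scale (rotating-frame relaxation).
    All time-constant values passed in here are illustrative assumptions."""
    prefactor = s_max / (1.0 - t_cp_ms / t1rho_ms)
    return prefactor * (math.exp(-tau_ms / t1rho_ms) - math.exp(-tau_ms / t_cp_ms))
```

With a short TCP (a protonated carbon) the curve is already high at a 0.1 ms contact time, whereas a long TCP (a carbon without directly bonded hydrogen) is still growing at 5 ms; a short T1ρH (mobile carbons) makes the signal decay sharply at long contact times. This is the contrast exploited in the spectra discussed above.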
The differences in local ordering and dynamics of intercalated OA − can be understood on the basis of anion arrangement within the interlayer space. In previous reports, the intercalated OA − anions are typically speculated to adopt an arrangement in reverse order between layers, with a tilt angle to compensate for the mismatch between anion size and basal spacing [ 42 , 49 , 50 ]. Herein, it is noteworthy that the powder XRD patterns of LYH-NDS and LYH-Cl-BS are very similar ( Fig. S11 ). 13 C ssNMR results indicate that the state of intercalated anions is also very similar for the two compounds. A model involving the formation of a pair consisting of two BS − anions in opposite orientation via π-π stacking interaction is thus proposed ( Fig. 5 ). The tilt angle is calculated to be 20.8° accordingly. Such an arrangement is consistent with the high local ordering and negligible local dynamics of aromatic rings for intercalated BS − . When two methyl groups are added to the para- and ortho-positions with respect to the sulfonate group (i.e., DMBS − ), close packing of aromatic rings via π-π stacking interaction is still available, but the pair of DMBS − becomes perpendicular to the yttrium hydroxide layers. It must be mentioned that such an arrangement only induces a change in the local environment of the methyl group at the para-position, corresponding to the shift of the 13 C peak from 19.8 ppm in SDMBS to 18.6 ppm in LYH-Cl-DMBS. In contrast, further increase of the sidechain length at the para-position induces less efficient stacking of aromatic rings, giving rise to higher local disorder and dynamics (i.e., EBS − ). In the case of LYH-Cl-EBS, displacement of aromatic rings must be involved in the model, since the length of the EBS − pair via face-to-face stacking is 1.3 Å less than the layer spacing. This arrangement is also consistent with the observed change in the local environment of the ethyl group. 
In addition, the use of a very long alkyl chain such as dodecyl makes the π-π stacking of aromatic rings impossible. As a result, the local environment of the aromatic rings and the adjacent carbons of the alkyl chain become disordered, and significant local dynamics are observed for these carbons (i.e., DBS − ). However, the carbons of the alkyl chain that are far from the aromatic rings (e.g., CX-CX of DBS − ) can stack in a manner similar to the highly ordered DS − chains in LYH-Cl-DS. The intercalated organic anions in LYH-Br-OA compounds must adopt similar types of arrangements, because their basal spacings are almost identical to those of LYH-Cl-OA ( Table S1 ), whereas the degree of displacement between adjacent yttrium hydroxide layers is likely different. 3.2 Anion-exchange from LYH-OA to LYH-NDS Secondary exchange reactions of LYH-Cl-OA with divalent NDS 2− were performed to further investigate the intercalation chemistry of LYH compounds. As Fig. 6 a shows, although the powder XRD patterns of LYH-NDS and LYH-Cl-BS-NDS look almost identical, it is unclear whether the conversion was successful, since the powder XRD patterns of LYH-Cl-BS and LYH-Cl-BS-NDS are also quite similar. Towards this end, 1 H→ 13 C CP MAS NMR experiments unambiguously reveal that the anion-exchange reaction is complete ( Fig. 6 b), considering that 13 C peaks of intercalated or free BS − (e.g., the peak at 142.1 ppm) are not observed. 13 C ssNMR results also imply the same state of intercalated NDS 2− within the as-made and anion-exchanged compounds. When different LYH-OA compounds were used as precursors ( Figs. S13–S15 ), it is found that the organic anions with short sidechains (i.e., DMBS − and EBS − ) can be substituted by NDS 2− , whereas the anions with a very long alkyl chain (i.e., DBS − and DS − ) cannot. Such discrepancy is likely associated with the match between the basal spacings of the precursors and LYH-NDS. 
Because the basal spacings are similar, the NDS 2− anions can enter the interlayer space of LYH-Cl-BS, LYH-Cl-DMBS, and LYH-Cl-EBS and interact with two adjacent yttrium hydroxide layers via electrostatic force ( Table S1 ), driving the conversion between the two phases. However, the basal spacings of LYH-Cl-DBS and LYH-Cl-DS are too large to allow NDS 2− anions to interact with two adjacent yttrium hydroxide layers simultaneously and to replace the bulky DBS − and DS − anions. 4 Conclusion The intercalation chemistry of selected organic anions in two prototypical LYH-X compounds (LYH-Cl and LYH-Br) is explored in this work. Unlike the irreversible exchange between LYH-Cl and LYH-Br, the structural differences between the two precursors turn out to be unimportant in the exchange reactions with these organic anions: the results indicate that the state of the organic anions is very similar for LYH-Cl-OA and LYH-Br-OA, except that small differences in basal spacing can be observed. The interactions between the organic anions and the yttrium hydroxide layers remain weak enough to enable secondary exchange of LYH-X-OA with other organic anions. However, the match between the basal spacings of the two phases appears to be critical for the replacement of organic anions. For example, the LYH-Cl-OA (OA = BS, DMBS, EBS) compounds can exchange with NDS 2− owing to the similar basal spacings of LYH-Cl-OA and LYH-NDS, whereas the very large basal spacing of the LYH-Cl-OA (OA = DBS and DS) compounds makes anion-exchange with NDS 2− impossible. The organic anions can be highly ordered within the interlayer space via intermolecular interactions including π-π stacking, C-π stacking, stacking of linear alkyl chains, etc. The structure of the organic anions, such as the presence or absence of aromatic rings and the position, number, and length of sidechains, therefore has pronounced effects on the state of the intercalated organic anions and on the intercalation chemistry.
The information obtained in this work sheds light on the intercalation chemistry of LYH-X compounds and provides valuable insights into the structure-property relationships of LYH-based functional materials. CRediT authorship contribution statement Yanxin Liu: Investigation, Visualization, Writing – original draft. Shijia Jiang: Investigation. Jun Xu: Conceptualization, Methodology, Supervision, Writing – review & editing, Funding acquisition. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgment The work is supported by the National Natural Science Foundation of China (grant no. 21904071 and 22071115 ) and the Open Funds (T151904) for the State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics of China. Appendix A Supplementary data The following is the supplementary data to this article: Multimedia component 1. Supplementary data to this article can be found online at https://doi.org/10.1016/j.mrl.2022.03.001 .
47bb1abd8c2144fd8509e6d84253fab7_Structural diversity bioactivity and biosynthesis of phosphoglycolipid family antibiotics Recent adv_10.1016_j.bbadva.2022.100065.xml | Structural diversity, bioactivity, and biosynthesis of phosphoglycolipid family antibiotics: Recent advances | [
"Ostash, Bohdan",
"Makitrynskyy, Roman",
"Yushchuk, Oleksandr",
"Fedorenko, Victor"
] | Moenomycins, such as moenomycin A, are phosphoglycolipid specialized metabolites produced by a number of actinobacterial species. They are among the most potent antibacterial compounds known to date, which drew numerous studies directed at various aspects of the chemistry and biology of moenomycins. In this review, we outline the advances in moenomycin research over the last decade. We focus on biological aspects, highlighting the contribution of the novel methods of genomics and molecular biology to the deciphering of the biosynthesis and activity of moenomycins. Specifically, we describe the structural diversity of moenomycins as well as the underlying genomic variations in moenomycin biosynthetic gene clusters. We also describe the most recent data on the mechanism of action and assembly of complicated phosphoglycolipid scaffold. We conclude with the description of the genetic control of moenomycin production by Streptomyces bacteria and a brief outlook on future developments. | Introduction Phosphoglycolipid antibiotics, first described in the 1960s, constitute a rather compact, in terms of chemical diversity, family of natural products produced by Gram-positive bacteria of the Actinobacteria phylum [1] . Moenomycin A (MmA, Fig. 1 ), an archetypal member of this family, proved to be a formidable challenge for medicinal chemistry efforts to generate analogs to probe MmA mode of action and improve its pharmacological properties [ 2 , 3 ]. This, along with the availability of more promising drug leads, has put moenomycins (as well as many other classes of natural products [4] ) into the drawers of the antibiotic development industry of the 20th century. The relentless rise of multidrug-resistant nosocomial infections in the last two decades, recently exacerbated by the coronavirus pandemic [5] , has renewed the need for antibiotics that would possess novel mechanisms of action and, thus, slow down the development of antimicrobial resistance. 
MmA in this regard represents a very attractive drug candidate. First, it is a subnanomolar inhibitor of peptidoglycan glycosyltransferases (PGTs), essential peptidoglycan (PG) assembly enzymes not currently exploited by any other drug [6] . Second, many of the issues that plagued the rational design of MmA-based PGT inhibitors are now solved: we have robust chemical and biological tools to construct complex phosphoglycolipid scaffolds and very detailed insight into MmA-PGT interactions. Third, the available evidence suggests that, by targeting the active sites of numerous distinct PGTs in the cell, moenomycins present a very tough target for the evolution of drug resistance. Some of the aforementioned breakthroughs were achieved in the first decade of the 21st century, and these have been exhaustively reviewed [1] . Here we mainly focus on studies of the cell and molecular biology of phosphoglycolipid antibiotics carried out since 2010. An update of our understanding of the chemical diversity of moenomycins will be given first. Then important developments in the area of MmA mode of action and antibiotic activity will be reviewed, as well as the genomics of moenomycin biosynthetic gene clusters. A description of the complete phosphoglycolipid biosynthetic pathway, recent regulatory and structural insights into it, and an outlook on future research directions in the field will conclude the review. Diversity of naturally occurring phosphoglycolipid antibiotics: chemical and genetic points of view 3-phosphoglycerate is a structural unit (G; Fig. 1 ) that distinguishes moenomycins from the other classes of carbohydrate- and lipid-containing natural products. Unit G bridges a lipid moiety and the complex carbohydrate scaffold bristling with functional groups. As can be seen from Fig. 1 , chemical diversity around the phosphoglycolipid scaffold arises mainly from the presence/absence of substituents on carbohydrate units B, C, and D.
A suite of unit B modifications is especially striking as it includes carboxamide- and amino acid-decorated compounds. Glycine-bearing moenomycins have been reported previously under typical submerged fermentation conditions, whereas transfer of alanine, serine, and cysteine residues onto unit B has been observed only upon overexpression of the amidotransferase gene moeH5 in moenomycin-producing streptomycetes [7] . Moenocinol is usually found as the lipid chain of moenomycins; the only exception is AC326-α, which features diumycinol, a moenocinol isomer. The presence of diumycinol is also suggested (but not proven) for teichomycins on the basis of analysis of a gene cluster for teichomycin biosynthesis in Actinoplanes (A.) teichomyceticus ; this will be detailed further in this review. Streptomyces ( S .) sp. K04–0144, a nosokomycin producer, was reported to accumulate nosokophic acid, an early phosphoglycolipid intermediate consisting of units F, G, and H [8] . There are several in vitro assays to monitor the activity of the MmA target enzymes, PGTs, which, in principle, could be used to screen bacterial extracts for novel phosphoglycolipid antibiotics. So far, these assays have been validated only on libraries of known compounds, and no screening of natural sources has been reported yet [9–12] . Collision-induced dissociation patterns of phosphoglycolipid antibiotics in multistage mass spectrometry (MS) experiments provide another means for their identification in complex mixtures [ 13 , 14 ]. Facile loss of the lipid moiety (428 Da, anion) in MS 2 is a primary signal for the presence of moenomycins in a sample. Recent mining of a sizable collection of Streptomyces genomes led to the identification of tens of thousands of biosynthetic gene clusters (BGCs) for specialized metabolites, among which phosphoglycolipid BGCs were among the least abundant [15] .
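The MS-based recognition of moenomycins described above lends itself to a simple computational filter: flag any MS2 spectrum containing a fragment near the diagnostic 428 Da lipid anion. The sketch below is a minimal illustration of that idea; the function names, tolerance, and toy spectra are assumptions for this example, not part of any published pipeline.

```python
LIPID_ANION = 428.0  # Da; diagnostic moenomycin lipid fragment (anion mode)

def has_lipid_fragment(fragment_mzs, tol=0.5):
    """True if any MS2 fragment m/z lies within `tol` Da of the 428 Da lipid anion."""
    return any(abs(mz - LIPID_ANION) <= tol for mz in fragment_mzs)

def screen(spectra, tol=0.5):
    """Return ids of spectra showing the diagnostic fragment.
    `spectra` maps spectrum id -> list of fragment m/z values."""
    return [sid for sid, frags in spectra.items()
            if has_lipid_fragment(frags, tol)]

# Toy example with invented m/z values:
spectra = {"s1": [157.2, 428.3, 803.1], "s2": [212.4, 510.9]}
print(screen(spectra))  # ['s1']
```

In a real workflow such a filter would be only a first pass over extracted MS2 peak lists, with hits confirmed by the full dissociation pattern.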
As of the time of writing this review (July 2022), we were able to identify 20 actinobacterial and three enterobacterial ( Photorhabdus ) species carrying moenomycin BGCs ( Fig. 2 ). Many of these 20 actinobacterial species carry identical moenomycin BGCs, so the latter can be classified into 7 closely related types. The BGCs differ mainly in the presence/absence of genes for chromophore C 5 N moiety (unit A) biosynthesis ( moeA4, moeC4 ) and for the conversion of glucosamine into quinovosamine (unit C; moeR5, moeS5 ). Moenomycin BGCs from Actinoplanes species also harbor genes putatively involved in lipid chain biosynthesis, such as the supply of isoprene building blocks and the conversion of moenocinol to diumycinol. Gene clusters for phosphoglycolipid biosynthesis in Photorhabdus bear little resemblance to actinobacterial moenomycin BGCs, and so might encode molecules significantly different from those depicted in Fig. 1 . Except for S. ghanaensis (= viridosporus ), S. prasinus, S. umbrinus and Actinoplanes teichomyceticus , no phosphoglycolipid production has been described for the strains mentioned in Fig. 2 . Particularly extensive metabolomic analysis has been undertaken recently for S. clavuligerus (carrying the type 3 moenomycin BGC, see Fig. 2 ) and Photorhabdus species (heterogeneous type 8) [16–18] . The failure to identify phosphoglycolipid metabolites in these strains likely implies that their production is strictly regulated and may require very specific induction signal(s). Mode of action, resistance mechanisms, and the spectrum of activity against various bacteria MmA interferes with cell wall biosynthesis in Gram-positive and some Gram-negative bacteria. It does so by inhibiting the PGT domains of penicillin-binding proteins (PBPs) and monofunctional transglycosylases (TGases) [19] . The available data agree with the idea that MmA is a structural analog of the natural substrates of the aforementioned enzymes, such as Lipids II and IV.
The 25-carbon moenocinol chain corresponds to the Lipid II/IV undecaprenyl chain ( Fig. 3 ), 3-phosphoglycerate resembles pyrophosphate, and the oligosaccharide portion of MmA mimics the Glc N Ac-Mur N Ac region of Lipid IV [1] . Biochemical assays show that MmA competes with Lipid IV for binding to the donor site of PGT, and thus affects the initiation of synthesis of PG strands [20] , although some debate over this issue remains [9] . Moenocinol is crucial for the biological activity of MmA, ensuring its anchoring in the lipid membrane. Indeed, the introduction of a terminal hydroxyl group into the lipid chain of MmA renders the antibiotic inactive [1] . 3-phosphoglycerate and the oligosaccharide portion of MmA interact with the PGT domain. The interaction of MmA with PGT domains was elucidated with the help of X-ray analysis and/or cryoelectron microscopy (cryo-EM). Structural data are available for MmA co-complexed with Mtg (monofunctional TGase) and PBP2 of Staphylococcus (St.) aureus [21–24] , PBP1b of Escherichia ( E. ) coli [25–29] , and PBP1a of Aquifex aeolicus [ 30 , 31 ] . All PGTs mentioned above share a transmembrane (TM) N-terminal α-helix and a PGT domain, while PBPs also carry the transpeptidase (TP) domain ( Fig. 4 A). E. coli PBP1b harbors, in addition, a UB2H domain (UvrB domain 2 homolog). The role of UB2H in the PGT and TP reactions is unclear, but it might be important for binding co-factor proteins [ 32 , 33 ]. The PGTs are a challenging target for structural biologists, as these enzymes adopt various conformations in solution and the growth of well-diffracting crystals requires a ligand. MmA serves well as the latter, while attempts to co-crystallize PGT with Lipid II/IV have not yet met with success. It was, however, possible to infer the 3D structure of PBP1a from E. coli in apo form using cryo-EM [25] .
Although rather divergent at the level of primary structure, the PGT domains possess similar folds with two subdomains, the “head” and the “jaw”, separated by a catalytic groove [34] (see Fig. 4 A). The donor (for Lipid II/IV/nascent PG/MmA) and acceptor (for Lipid II) sites are located within the catalytic groove. The “flap” region is the most dynamic part of the “jaw”. The PGT domain undergoes a series of conformational changes to achieve one cycle of PG extension [21] , and the “flap” region assists in shuttling Lipid II and the growing PG between the donor and acceptor sites [ 24 , 28 ]. Certain segments of the “jaw” and “flap” subdomains (likely involved in Lipid II recognition) are partially embedded in the membrane [21] . Although we assume that MmA mimics Lipid II/IV binding to PGT ( Fig. 4 B), this mimicry is not complete. Rather, MmA appears to lock the PGT domain in a conformation which impedes proper functioning, but this conformation is not identical to the one that PGT adopts in the presence of natural substrates [25] . The EFCB units of MmA bind within the catalytic groove, where the EF disaccharide forms a network of hydrogen bonds to the active site. The CB disaccharide corresponds to the Glc N Ac-Mur N Ac backbone of the growing PG and contacts the amino acids of the donor binding site [28] . A sketch of the MmA-PBP interaction is given in Fig. 4 B. Finally, MmA might also have an additional non-canonical binding site at the “flap” region [21] , although it is unknown whether such binding occurs in vivo . Studies of MmA-PGT interactions inspired the creation of new PGT inhibitors, either MmA derivatives or its structural analogs [ 1 , 29 , 35-40 ]. A rather unexpected feature of MmA is its ability to inhibit the conjugal transfer of plasmids in E. coli and Enterococcus ( Ent. ) faecalis [ 41 , 42 ] and to “cure” bacterial cultures of plasmids ( i.e. , block normal plasmid segregation in the course of cell division) [43] .
These effects were observed for different types of plasmids at sub-inhibitory concentrations of MmA. Since some of the investigated plasmids encode lytic TGases (required for PG remodeling prior to conjugal transfer), it was speculated that MmA might inhibit these enzymes [41] . This, however, does not explain how MmA affects the transfer of plasmids not encoding lytic TGases. Possessing detergent-like properties [44] , MmA could as well destabilize cell membranes, disrupting the protein complexes required for conjugal transfer and segregation of plasmids. Whatever the real explanation is, this phenomenon has one peculiar outcome justifying MmA application in animal husbandry. It is known that MmA is neither metabolized nor absorbed in the animal digestive system, and thus it accumulates in feces [45] , “curing” the fecal plasmid mobilome that potentially mediates the spread of antibiotic resistance determinants. Actinomycetes producing moenomycins [ 1 , 46–48 ] require certain auto-resistance mechanisms to avoid suicide caused by antibiotic accumulation. Genes for ABC transporters found in each moenomycin BGC (see Fig. 2 ) appeared to impact MmA titers but not resistance to it, as studies on S. ghanaensis have shown [49] . High-level resistance to MmA is widespread among moenomycin non-producing actinomycetes, with S. albidoflavus ( =albus ) J1074 being one notable exception. This species is 10,000-fold more sensitive to MmA than S. ghanaensis and S. coelicolor , although the genetic basis of this sensitivity remains enigmatic [ 50 , 51 ] . It is uncertain whether MmA resistance in actinomycetes is determined by some specific mechanisms or simply caused by a thick cell wall physically hindering the delivery of MmA to the target. There is no direct evidence that actinomycete PGTs are inhibited by MmA in vitro , and no attempts to co-crystallize actinobacterial PGTs with MmA have been made. Treatment of S.
coelicolor with MmA changes the expression of hundreds of genes (including those for PGTs), suggesting the involvement of a general stress response [52] . Gram-positives such as St. aureus, St. epidermidis, St. haemolyticus, Streptococcus pyogenes, Listeria monocytogenes , some strains of Ent. faecalis, and Ent. faecium are sensitive to ng/mL MmA concentrations [ 1 , 53–56 ]. Since only a couple of hundred PBPs are found in a typical Gram-positive cell [57] , extremely low amounts of MmA are needed to exhaustively bind all of them. In contrast, other Gram-positive bacteria, such as Bacillus ( B. ) subtilis , are intrinsically resistant to MmA. B. subtilis possesses four MmA-susceptible class A PBPs. Deletion of all four corresponding genes does not terminate PG biosynthesis [58] , implying the presence of a bypass route. Exposure to MmA significantly induces the expression of the SigM (σ M ) regulon in B. subtilis [59] , where SigM is an extracytoplasmic function (ECF) σ-factor sensing environmental stressors of the cell envelope [60] . Knockout of sigM rendered B. subtilis hypersensitive to MmA [59] . A recent investigation of the SigM regulon revealed the rodA gene for a SEDS (shape, elongation, division, and sporulation) family protein exhibiting PGT activity that is not subject to inhibition by MmA [ 61 , 62 ]. SEDS proteins are widely distributed across Terrabacteria , including actinobacteria, and might explain the high level of MmA resistance in the latter. Finally, one environmental B. subtilis isolate was reported to degrade MmA enzymatically [63] . It is possible to raise MmA-resistant (Mm r ) mutants of Gram-positive bacteria, such as St. aureus . Thickened PG was the reason for the Mm r phenotype of St. aureus mutants in the first known report on this topic [64] . In another case, Mm r St. aureus strains were found carrying two point amino acid substitutions (Y196D and P234Q) in the donor binding site of PBP2 (see above) [65] .
The same substitutions hindering MmA binding were found in more than 30 screened Mm r clones. The Y196D and P234Q substitutions seem to prevent non-specific interactions between MmA and PBP2. Other mutations, e.g. in sites responsible for more specific interactions, are probably not viable: MmA and Lipid IV specifically interact with the same amino acids, so mutations disrupting specific binding would interfere with the normal docking of Lipid IV to PBP2, leading to a lethal effect. The aforementioned Mm r mutations came at a cost: mutant PBP2 appeared to generate shorter PG chains than the wild-type enzyme, causing cell morphology and division abnormalities. Notably, such changes were observed only if the mutants were grown in the presence of MmA, since the TGases SgtA and SgtB complement the mutated PBP2 in the absence of MmA [65] . The type III [66] ABC transporter AbcA was also shown to be involved in MmA resistance in St. aureus , where knockout of the corresponding gene increased MmA susceptibility, while its overexpression increased MmA resistance [67] . AbcA is a typical transmembrane efflux pump, and its contribution to MmA resistance is not clear, given that MmA acts on the cell surface. However, AbcA might be involved in maintaining the fitness of the PG. Reduced levels of phosphatidylglycerol in a daptomycin-resistant B. subtilis mutant also led to an increase in MmA resistance [68] . Gram-negative bacteria are usually several-fold more resistant to MmA than Gram-positives [69] . At the same time, PBPs from Gram-negative and Gram-positive bacteria are inhibited by MmA in vitro to the same extent [ 70 , 71 ]. An obvious explanation for this is the outer membrane of the Gram-negative cell envelope shielding the PBPs. Indeed, E. coli mutants with an impaired outer membrane are more sensitive to MmA [55] . At the same time, some Gram-negatives, such as Neisseria, Brucella, Pasteurella, and Pseudomonas , are relatively sensitive to MmA.
Recent findings also suggest that MmA is effective against notable Gram-negative pathogens. For example, the MIC of MmA for Helicobacter (H.) pylori is 2 µg/mL; moreover, MmA remains efficient against a variety of clinical H. pylori isolates, including multidrug-resistant ones [71] . MmA also shows excellent activity against H. suis [72] . A few studies depict MmA as an extremely active antibiotic against Neisseria (N.) gonorrhoeae , with MICs within the 0.008–0.06 µg/mL range [ 48 , 73 ]. Unlike in E. coli , MmA was found to easily penetrate the outer membrane of N. gonorrhoeae . The presence of lipooligosaccharides instead of lipopolysaccharides in the N. gonorrhoeae outer membrane might be a key reason for this difference [48] . Biosynthesis of moenomycins Our current understanding of the biosynthetic logic behind moenomycins is summarized in Fig. 5 . The pathway from phosphoglycerate to nosokomycin A (NoA) was first described in 2009 [74] and reviewed in 2010; we therefore will not discuss this point further. The decoration of the terminal glucuronic acid (unit B, see Fig. 1 ) with the chromophore C 5 N (unit A), amine, or glycine was deciphered in 2013 for S. ghanaensis (= viridosporus ) ATCC14672 [7] . MoeH5, an amidotransferase of the glutamine amidotransferase (GAT) superfamily, was shown to control all of these modifications of NoA. MoeH5 resembles MoeF5, another GAT-type amidotransferase, involved in the carboxyamidation of unit F of moenomycins. MoeF5 is a canonical GAT enzyme that harbors an intact N-terminal hydrolase domain (absolutely necessary for the hydrolysis of glutamine to produce free amine) and catalyzes a single reaction. MoeH5 appears to have lost the hydrolase domain and instead directly transfers various amine-bearing moieties (as diverse as C 5 N and ammonium) onto the carboxyl group of unit B (see Fig. 5 ). Interestingly, the MoeH5 ortholog encoded within the A.
teichomyceticus teichomycin BGC retains the hydrolase domain and is able to produce only carboxyamidated moenomycins [46] . Future studies of the promiscuity of MoeH5 orthologous group enzymes towards their donor and acceptor substrates are of interest, given that such modifications improve the antibacterial properties of moenomycins [7] . The prenyltransferases MoeO5 and MoeN5 involved in MmA lipid chain assembly have been purified and characterized in vitro . MoeO5 was shown to catalyze the transfer of the farnesyl group onto an oxygen of 3-phosphoglycerate. The mechanism of this reaction includes the isomerization of the prenyl substrate farnesyl pyrophosphate into either nerolidyl or ( Z,E )-farnesyl pyrophosphate, giving rise to the cis -allylic double bond observed in the lipid chain of MmA [75] . MoeO5 was crystallized as a dimer which features a small catalytic pocket capable of accommodating a single molecule of the product, 2-( Z,E )-farnesyl-3-phosphoglycerate (FPG), in a bent (around the C8=C9 double bond) conformation. This appears to be essential for the isomerization to occur [76] . These studies underscore that MoeO5 employs a mechanism distinct from that reported for the other homologous TIM-barrel prenyltransferases. Crystal structures of MoeN5 in complex with various substrate analogs were also reported; they revealed the dimeric structure of the protein and two aspartate-rich motifs likely involved in substrate binding [ 77 , 78 ]. Although the structural data for MoeN5 do not challenge the mechanism of moenocinol formation proposed by Arigoni two decades ago [79] , new insight into this reaction has not yet emerged either, due to the low resolution of the X-ray data and the use of substrate analogs quite different from the native trisaccharide intermediate of MmA. Recently, the crystal structure of the protein TchmY from the teichomycin BGC of A. teichomyceticus has been reported [80] .
TchmY crystallized as a monomer and possessed an (α/α)6-barrel fold typical for many prenylcyclases. This observation is in agreement with the suggestion that TchmY is involved in the production of diumycinol, a terminally cyclized version of moenocinol. Regulation of moenomycin biosynthesis Unlike the majority of known BGCs in actinobacteria [81] , moenomycin BGCs do not harbor pathway-specific (cluster-situated) regulatory genes (see Fig. 2 ). Hence, MmA production must be governed by pleiotropic (global) regulators, i.e. those acting on more than one biosynthetic pathway. Current knowledge of the regulatory mechanisms governing MmA production (mostly studied in ATCC 14672) is summarized in Fig. 6 and described below. A peculiar codon-based regulatory mechanism was the first global control circuit reported to limit MmA production to the late stages of the life cycle. This circuit consists of the gene bldA for leucyl tRNA UAA and its cognate TTA codons scattered in certain genes. The TTA codon is the rarest one in streptomycete genomes and can be decoded only by the bldA -encoded tRNA; the latter accumulates in significant quantities in the stationary phase of growth [82] . In ATCC 14672, TTA codons are present in key moe genes ( moeE5, moeO5 ), which effectively confines the translation of moe mRNAs to the late stage of growth. On the other hand, the TTA codon is also present in the gene for transcriptional factor AdpA, a master regulator of morphogenesis and specialized metabolism in Streptomyces . AdpA was shown to directly interact with key moe gene promoters and upregulate their transcription. In sum, bldA -based regulation exerts control over MmA biosynthesis in ATCC 14672 by limiting the AdpA-dependent transcription of the moe BGC and the translation of TTA-harboring moe mRNAs [83] . The effect of bldA on MmA production was also reported under heterologous expression conditions [84] .
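The bldA logic above, in which rare in-frame TTA codons restrict translation to late growth, is straightforward to check computationally for any coding sequence. A minimal sketch (the example mini-ORF is invented for illustration):

```python
def tta_positions(cds: str):
    """Return 0-based codon indices of in-frame TTA codons in a coding
    sequence (sense strand, reading frame starting at position 0)."""
    cds = cds.upper()
    return [i // 3 for i in range(0, len(cds) - 2, 3) if cds[i:i + 3] == "TTA"]

# Invented mini-ORF: ATG GCT TTA AAA TTA TGA
orf = "ATGGCTTTAAAATTATGA"
print(tta_positions(orf))  # [2, 4] -> codons 2 and 4 are TTA
```

Applied to a gene such as moeE5 or adpA, a non-empty result would flag the gene as a candidate for bldA-dependent translation; note that only in-frame TTA triplets count, not every TTA substring.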
It is obvious that any factor impairing the accumulation of mature bldA tRNA will affect MmA production. Indeed, mutations within the genes miaA and miaB for hypermodification of adenosine in the 37th position (A37) of tRNA XXA in S. ghanaensis and S. albus J1074 severely decreased the levels of moenomycin production [ 85 , 86 ]. AdpA also upregulates bldA expression, creating a positive feedback loop between these regulators [82] . Likewise, factors that impact AdpA abundance will also leave a footprint on MmA biosynthesis. For example, the absB gene for a double-stranded RNA-specific endoribonuclease is known to cleave adpA mRNA in various streptomycetes. Manipulations of absB expression were shown to influence MmA biosynthesis, pointing to the existence of the regulatory triad BldA-AdpA-AbsB in this metabolic pathway [83] . The other factors influencing the function of this triad will be described in the paragraph devoted to BldD. Deletion of the gene wblA for an Fe-S cluster-containing transcriptional regulator has been shown to enhance MmA production [87] . WblA falls into the WhiB-like (Wbl) family of proteins, whose members are distributed exclusively within the phylum Actinobacteria , where they are typically involved in the late steps of morphogenesis yet, due to their pleiotropicity, also often influence specialized metabolism [88] . Overexpression of SSFG_01620 for the Streptomyces subtilisin inhibitor (SSI) is thought to be responsible for MmA overproduction by the wblA knockout strain, although the exact mechanism remains unknown [87] . Expression of adpA, bldA , and wblA in ATCC 14672 was shown to be under the control of the master regulator BldD [89] . BldD is one of the most conserved regulatory proteins in Actinobacteria . It sits at the top of the regulatory cascade controlling morphological progression in Streptomyces by inhibiting the expression of sporulation genes during vegetative growth. In S.
coelicolor , BldD exerts its control by regulating the expression of at least 167 genes, a large portion of which are well-known regulators required for the maturation of spores, cell division, chromosome segregation, and secondary metabolite production [90] . Deletion of bldD severely impaired morphogenesis in ATCC 14672 and nearly completely abolished MmA biosynthesis. Transcription of adpA and of the key structural moe genes required for MmA assembly was strongly reduced in the bldD mutant [89] . In contrast to S. coelicolor , where transcription of adpA is repressed by BldD [90] , adpA in ATCC 14672 is activated by BldD, similarly to what is observed in the daptomycin producer S. roseosporus [91] . BldD also controls the expression of wblA in ATCC 14672 via repression of its transcription, which stems from the binding of BldD to the wblA promoter [89] . Deregulated expression of dozens of genes including adpA, wblA , and bldA seems to be one of the main contributors to the observed phenotype of the S. ghanaensis Δ bldD strain. The regulatory activity of BldD is mediated by the second messenger, cyclic dimeric 3′−5′ guanosine monophosphate (c-di-GMP) [92] . In the presence of c-di-GMP, two monomers of BldD form a complex bound to four c-di-GMP molecules, which then proceeds to bind to target promoter sites [ 92 , 93 ]. The intracellular pool of c-di-GMP is replenished by diguanylate cyclases (DGCs) and degraded by phosphodiesterases (PDEs). DGCs make c-di-GMP from two molecules of GTP, whereas PDEs break c-di-GMP either to the linear dinucleotide 5′-phosphoguanylyl-(3′→5′)-guanosine (pGpG) or directly to two molecules of GMP, depending on the class of PDE [94] . The ATCC 14672 genome encodes nine proteins for c-di-GMP metabolism [89] . Deletion of cdgB for a highly and constitutively expressed DGC reduced both c-di-GMP and MmA accumulation. In contrast, MmA production was boosted in an ATCC 14672 strain deficient in rmdB for an active PDE [89] .
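The opposing DGC and PDE activities described above can be caricatured as a one-variable kinetic model, dc/dt = k_dgc - k_pde * c, whose steady state c* = k_dgc / k_pde shows how deleting a synthase (e.g. cdgB) lowers the c-di-GMP pool while deleting a phosphodiesterase (e.g. rmdB) raises it. This is a toy illustration only; the rate constants below are arbitrary, not measured parameters.

```python
def cdg_pool(k_dgc, k_pde, c0=0.0, dt=0.01, steps=5000):
    """Euler-integrate dc/dt = k_dgc - k_pde * c (toy c-di-GMP pool model).
    Returns the concentration after `steps` time steps of size `dt`."""
    c = c0
    for _ in range(steps):
        c += (k_dgc - k_pde * c) * dt
    return c

# Steady state approaches k_dgc / k_pde (arbitrary units):
print(cdg_pool(k_dgc=2.0, k_pde=0.5))   # ~4.0  (reference case)
print(cdg_pool(k_dgc=1.0, k_pde=0.5))   # ~2.0  (halved DGC activity)
print(cdg_pool(k_dgc=2.0, k_pde=0.25))  # ~8.0  (reduced PDE activity)
```

The real network is of course far richer, with nine c-di-GMP enzymes, threshold-dependent BldD feedback, and auxiliary sensory domains, but the toy model captures the direction of the reported mutant phenotypes.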
Deletion of the gene for the active DGC SSFG_02181 (CdgC) negatively influenced MmA accumulation and led to precocious sporulation, while overexpression of ssfg_02181 blocked sporulation and remarkably improved the antibiotic titer [95] . Furthermore, individual deletion of rmdA, cdgA , and cdgD , encoding a bifunctional DGC/PDE, an active PDE, and a predicted DGC, respectively, positively influenced MmA accumulation, whereas deletion of cdgE for a DGC had no impact on MmA titers [96] . The transcription of most DGC-encoding genes is repressed by BldD, forming a reciprocal regulatory loop in which DGCs are proposed to synthesize c-di-GMP to stimulate the activity of BldD [ 89 , 94–96 ]. When the c-di-GMP pool reaches a certain threshold level, BldD represses the transcription of the DGC genes. BldD controls the expression of the TTA-containing rmdB indirectly, at the translational level, by regulating the abundance of BldA to avoid premature PDE activity [89] . Additionally, to adjust c-di-GMP levels in response to fluctuating environmental or intracellular signals, most DGCs and PDEs are accompanied by auxiliary regions required either for the recognition of specific triggers or for spatial allocation [94] . Altogether, the c-di-GMP-mediated regulatory network in ATCC 14672 seems to be immensely intricate and includes numerous layers to ensure proper coordination of morphological progression and antibiotic production. Prokaryotes, including streptomycetes, have evolved an efficient system of intercellular communication mediated by low molecular weight signaling compounds (LMWCs) [97] . For instance, the archetypal gamma-butyrolactone A-factor triggers morphogenesis and the production of streptomycin in S. griseus by binding to the TetR-family transcriptional repressor ArpA. This releases ArpA from the promoter of adpA , which in turn activates the expression of genes related to morphological progression and secondary metabolite biosynthesis [81] .
In silico analysis of the ATCC 14672 genome revealed the presence of genes for at least two classes of LMWCs: γ-butyrolactones and avenolides [98] . Deletion of SSFG_07725 encoding a putative γ-butyrolactone synthase in ATCC 14672 abrogated the production of diffusible LMWCs, leading to morphogenetic deficiency and somewhat decreased MmA production [99] . Introduction of extra copies of SSFG_07725 into the wild-type strain affected neither morphogenesis nor MmA production [98] . No moenomycin production was observed during the heterologous expression of the MmA BGC in S. lividans M707 carrying a deletion of the γ-butyrolactone synthase gene scbA [98] . Thus, the LMWC-based regulatory pathway seems to impact MmA biosynthesis in a species-dependent manner. Likely, the expression of adpA could be either γ-butyrolactone-dependent ( S. griseus ) or independent ( S. coelicolor and ATCC 14672). Several genes influencing MmA production were recently revealed in the course of Tn5 and mariner transposon mutagenesis of ATCC 14672 [ 85 , 100 ]. No further elucidation of these genes beyond initial annotation has been carried out yet. While for some of the identified genes (such as those for kinases and an RNA polymerase subunit) a putative role can be easily put forward, the association of the majority of these genes with MmA remains vague. Both native and heterologous producers of MmA were engineered to reveal the factors limiting MmA production [101–103] . The production of moenomycins varies greatly in different heterologous hosts, indicating that their different metabolic and regulatory backgrounds influence moenomycin biosynthesis. None of the tested strains was superior to ATCC 14672 in terms of moenomycin productivity. Nonetheless, the obtained data about the regulatory pathways governing moenomycin biosynthesis in heterologous hosts laid a useful foundation for the exploration of its regulation in the natural MmA producer. For instance, overexpression of the S.
coelicolor ppGpp synthetase gene relA has been shown to positively correlate with moenomycin accumulation in both native and heterologous hosts [101] . Similarly, duplication of moe genes via the introduction of an additional copy of the moe cluster offers a beneficial way to significantly improve MmA titers [ 101 , 104 , 105 ]. Several genome-engineering approaches were employed to boost MmA production. Intriguingly, not only moenomycin accumulation but also growth dynamics were greatly improved after in vivo elimination of binding sites for the pleiotropic regulator AdpA gh in the oriC region of the S. ghanaensis chromosome [106] . Rational combination of moe gene dosage along with the overexpression of the pleiotropic regulator bldA led to a notable increase in MmA synthesis [104] . A strong improvement in moenomycin accumulation was achieved during the expression of a hybrid BGC derived from ATCC 14672 and S. lincolnensis NRRL2936 in S. albus J1074 [103] . Promoter refactoring along with the overexpression of the gene salb-PBP2 for the peptidoglycan biosynthetic protein PBP2 and media optimization experiments further elevated the antibiotic titers. This study illustrates the power of synthetic biotechnology in tackling some of the toughest problems in the development of industrial antibiotic producers. Outlook The unique structure and mode of action of moenomycins fueled decades of investigations, which culminated in the total synthesis of MmA, an atomic-level view of its interaction with PGTs, and the delineation of its biosynthetic logic. MmA proved to be an invaluable chemical probe to understand cell wall biosynthesis. It helped overproduce Lipid II [107] , validate high-throughput screens of PGT inhibitors, and reveal an entirely new SEDS family of peptidoglycan synthases. Nevertheless, a much sought-after application, that of a drug to treat human diseases, still eludes this highly promising class of compounds.
The harsh reality of the economics of development and marketing of any new antibacterial is indeed a roadblock [ 4 , 108 ], yet one that fades gradually as the antimicrobial resistance crisis becomes more urgent [109] . Below we suggest how biology-oriented studies of moenomycins can help transform this class of natural products into a drug. MmA is not orally bioavailable and exhibits an extremely long half-life in the bloodstream [1] . These are the main drawbacks (from the point of view of pharmacology) of MmA as well as of all members of this family studied to date. The diversity of naturally occurring moenomycins is low, and so are the chances of finding new congeners with different pharmacological profiles. Nevertheless, identification of very distinct moenomycin-like BGCs in Photorhabdus and Actinomyces spp. (see Fig. 2 ) suggests that the chemical space around the phosphoglycolipid scaffold is larger than we know at the moment. In this regard, it is difficult to overestimate the value of fundamental research into the regulatory mechanisms involved in sensing and handling the MmA-induced stress response, and into the silencing of moenomycin BGCs. The former will find use in inexpensive cell-based tools for the screening of moenomycin producers in large strain collections (akin to the LiaRS system for Lipid II binders [110] ), while the latter can be used to reveal the compounds encoded by cryptic BGCs. Characterization of several moenomycin biosynthetic enzymes, such as MoeH5, MoeN5, and MoeO5, paves the way to the chemoenzymatic production of moenomycin analogs inaccessible naturally. Here it is crucial to continue the studies of all enzymes involved in the assembly of moenocinol, as the latter is the main cause of the poor pharmacokinetics of MmA. More effort should be focused on enzymes for moenuronamide, the most densely modified carbohydrate unit (F, see Fig. 1 ) of MmA and part of its pharmacophore.
Radical SAM cobalamin-dependent methyltransferase MoeK5 is of particular interest, as it carries out a crucial biotransformation on unit F via a mechanism that remains largely speculative [111] . Finally, the elucidation of the antibacterial effects and mechanisms of MmA action on a wider set of pathogenic bacteria is as important as finding more novel moenomycins; recent works on Helicobacter and Neisseria clearly demonstrate this point. CRediT authorship contribution statement Bohdan Ostash: Conceptualization, Funding acquisition, Visualization, Writing – original draft, Writing – review & editing. Roman Makitrynskyy: Visualization, Writing – original draft, Writing – review & editing. Oleksandr Yushchuk: Visualization, Writing – original draft, Writing – review & editing. Victor Fedorenko: Conceptualization, Funding acquisition, Supervision, Writing – review & editing. Declaration of Competing Interests The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements V.F. and B.O. thank the Ministry of Education and Science of Ukraine for the continuous support of the moenomycin-related research at Lviv University over the last 15 years (grants BG-01F, -98F, -09F, -41Nr, -21F). B.O. was supported by DAAD (A/12/04489) and VRU5517-VI fellowships. The authors thank all coworkers in the Lviv and Freiburg groups who contributed to the current understanding of moenomycin biology.
"OSTASH",
"WELZEL",
"TAYLOR",
"BISACCHI",
"BENGOECHEA",
"GALLEY",
"OSTASH",
"KOYAMA",
"HUANG",
"DAHMANE",
"BOES",
"HERNANDEZROCAMORA",
"ZEHL",
"GALLO",
"BELKNAP",
"ABUSARA",
"SHAIKH",
"SHI",
"SAUVAGE",
"GAMPE",
"PUNEKAR",
"HUANG",
"HEASLET",
"LOVERING",
"CAVENEY",
"... |
714ac077686e47e6bb31663992197d2b_ANM volume 5 issue 2 Cover and Back matter_10.1017_S1751731110002636.xml | ANM volume 5 issue 2 Cover and Back matter | [] | null | null | [] |
d1febd80843344d8befd8048d0ab1f00_Acute effects of ambient non-methane hydrocarbons on cardiorespiratory hospitalizations A multicity _10.1016_j.ecoenv.2022.113370.xml | Acute effects of ambient non-methane hydrocarbons on cardiorespiratory hospitalizations: A multicity time-series study in Taiwan | [
"Qiu, Hong",
"Chuang, Kai-Jen",
"Fan, Yen-Chun",
"Chang, Ta-Pang",
"Bai, Chyi-Huey",
"Ho, Kin-Fai"
] | Background
Few environmental epidemiological studies and no large multicity studies have evaluated the acute short-term health effects of ambient non-methane hydrocarbons (NMHC), the essential precursors of ground-level ozone and secondary organic aerosol formation.
Objective
We conducted this multicity time-series study in Taiwan to evaluate the association between airborne NMHC exposure and cardiorespiratory hospital admissions.
Methods
We collected the daily mean concentrations of NMHC, fine particulate matter (PM2.5), ozone (O3), weather conditions, and daily hospital admission count for cardiorespiratory diseases between 2014 and 2017 from eight major cities of Taiwan. We applied an over-dispersed generalized additive Poisson model (GAM) with adjustment for temporal trends, seasonal variations, weather conditions, and calendar effects to compute the effect estimate for each city. Then we conducted a random-effects meta-analysis to pool the eight city-specific effect estimates to obtain the overall associations of NMHC exposure on lag0 day with hospital admissions for respiratory and circulatory diseases, respectively.
Results
On average, a 0.1-ppm increase of lag0 NMHC demonstrated an overall 0.9% (95% CI: 0.4–1.3%) and 0.8% (95% CI: 0.4–1.2%) increment of hospital admissions for respiratory and circulatory diseases, respectively. Further analyses with adjustment for PM2.5 and O3 in the multi-pollutant model, as well as sensitivity analyses restricting NMHC monitoring to the general stations only, confirmed the robustness of the association between ambient NMHC exposure and cardiorespiratory hospitalizations.
Conclusion
Our findings provide robust evidence of higher cardiorespiratory hospitalizations in association with acute exposure to ambient NMHC in eight major cities of Taiwan. | 1 Introduction With the progressive increase in the concentration of ground-level ozone (O 3 ) in recent years ( Chang and Lee, 2007 ) and its adverse public health impact ( Qiu et al., 2021b; Vicedo-Cabrera et al., 2020 ), strategies for O 3 control call for the regulation of its precursors, including hydrocarbon pollution. Hydrocarbons are organic compounds composed essentially of carbon and hydrogen atoms, many of which are volatile and can easily vaporize into the atmosphere at room temperature and normal atmospheric pressure; these are referred to as volatile organic compounds (VOCs). Environmental epidemiological studies ( Dai et al., 2014; Dimitrova et al., 2021; Tian et al., 2018 ) have mostly focused on the health impact of particulate matter, nitrogen dioxide, ozone, cold spells, heat waves, etc.; however, few studies have examined ambient non-methane hydrocarbons (NMHC) or VOCs, which are critical precursors of ground-level ozone and secondary organic aerosol formation ( Atkinson, 2000; Wu et al., 2006 ). Clinical effects of hydrocarbon exposure have been reviewed from the pertinent clinical trials, observational studies, and case reports; evidence of the toxicity of acute hydrocarbon exposure includes a wide array of pathologies, such as encephalopathy, pneumonitis, arrhythmia, acidosis, and dermatitis ( Tormoehlen et al., 2014 ). However, environmental epidemiological studies on the health impact of NMHC or VOCs are scarce worldwide. To date, only a few time-series studies have examined the adverse effects of ambient hydrocarbon pollution on human health.
The short-term cardiopulmonary risks of some VOCs and NMHC were demonstrated in the United States ( Shin et al., 2015; Ye et al., 2017 ), Hong Kong ( Ran et al., 2019, 2018a,b ), and Taiwan ( Qiu et al., 2021a, 2020; Tsai et al., 2010 ), respectively. All of them are single-city studies, subject to the limitation that city-specific effect estimates carry potential heterogeneity in magnitude, diversity in model specifications, and uncertainty in generalization. The paucity of evidence on the health impact of NMHC generated worldwide was mainly due to the scarcity of hydrocarbon pollution monitoring. Taiwan’s Environmental Protection Administration, Executive Yuan, R.O.C., has provided accurate, representative, and long-term data for O 3 precursors, including hydrocarbon pollution and VOCs, to establish the relationship between O 3 , O 3 precursors, and weather conditions, to find out the factors behind O 3 formation, and to devise strategies for O 3 control ( TWEPA, 2020a ). The NMHC pollution levels may vary across cities because of the presence of different industries and varying meteorological factors. Taking advantage of the reliable and valuable data on NMHC from the air quality monitoring stations distributed over major cities in Taiwan ( https://taqm.epa.gov.tw/taqm/en/ ), we conducted the current multicity time-series ecological study to examine the associations of acute airborne NMHC exposure with hospital admissions for cardiorespiratory diseases while controlling for co-exposure to PM 2.5 and O 3 . We hypothesized that short-term ambient NMHC exposure may increase the hospital admission risk of cardiorespiratory diseases.
2 Materials and methods 2.1 Hospital admissions data collection We extracted the cardiorespiratory hospital admission records from January 1, 2014, to December 31, 2017, for residents in eight cities of Taiwan (Taipei, New Taipei, Taichung, Yunlin, Chiayi, Tainan, Kaohsiung, Pingtung) from the National Health Insurance Research Database (NHIRD). NHIRD is managed by the Ministry of Health and Welfare, Taiwan, and contains healthcare data covering 99% of Taiwan’s population under a universal health insurance program ( Lin et al., 2018 ). According to the codes of the World Health Organization International Statistical Classification of Diseases, Ninth or Tenth Revision (ICD-9 for 2014–2015; ICD-10 for 2016–2017), we computed daily counts of hospital admissions for diseases of the circulatory system (ICD-9: 390–459; ICD-10: I00–I99) and diseases of the respiratory system (ICD-9: 460–519; ICD-10: J00–J99), respectively, as the principal diagnoses and the main health outcomes in this study. 2.2 Environmental exposure data collection We collected the historic daily mean concentrations of air pollutants during 2014–2017 from the Taiwan Air Quality Monitoring Network (TAQMN) ( TWEPA, 2020b ). There were a total of 29 stations in these eight cities of Taiwan that monitored NMHC during the study period, including 10 stations in Kaohsiung, 4 stations each in Taipei, New Taipei, and Taichung, 3 stations in Tainan, 2 stations in Yunlin, and 1 station each in Chiayi and Pingtung ( Fig. 1 ). Among them, there were 21 general stations, 5 traffic stations, and 3 industry stations. The daily mean concentrations of criteria air pollutants (PM 2.5 , NO 2 , O 3 , and CO), as well as weather condition data including ambient temperature and relative humidity (RH), were also derived from these stations. The daily mean value averaged over the available monitoring stations for each environmental exposure in each city was computed to denote the city-specific exposure level.
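The station-to-city aggregation described above (averaging the available station-level daily means into one exposure value per city-day) can be sketched as a simple grouping step. The station readings below are illustrative placeholders, not values from the study:

```python
from collections import defaultdict
from statistics import mean

# (city, date, station-level daily mean NMHC in ppm); values are illustrative
readings = [
    ("Taipei", "2014-01-01", 0.28),
    ("Taipei", "2014-01-01", 0.31),
    ("Yunlin", "2014-01-01", 0.07),
]

def city_daily_mean(rows):
    """Average station-level daily means into one exposure value per city-day."""
    grouped = defaultdict(list)
    for city, date, value in rows:
        grouped[(city, date)].append(value)
    return {key: mean(vals) for key, vals in grouped.items()}

exposure = city_daily_mean(readings)
print(exposure[("Taipei", "2014-01-01")])  # ~0.295
```

The resulting city-day series is what gets linked to the daily admission counts in the next step.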
We then linked air pollution concentrations and meteorological factors to the daily count of cardiorespiratory hospital admissions according to the hospital admission date for each city. We acquired ethical approval from Taipei Medical University’s Joint Institutional Review Board (Approval No.: N201904082). Our study only used aggregated data without any individualized information; thus, the requirement for informed consent from study patients was waived. 2.3 Statistical approaches We applied a two-stage analytical approach ( F. Tian et al., 2020 ; Y. Tian et al., 2020 ), with city-specific time-series analyses in the first stage and a meta-analysis in the second stage, to examine the association between ambient NMHC and cardiorespiratory hospital admissions in eight major cities of Taiwan. In the first stage, a generalized additive Poisson model (GAM) allowing over-dispersion with a log link and adjustment for temporal trends, seasonal variations, weather conditions, and calendar effects was used to compute the city-specific effect estimate ( Qiu et al., 2012 ). We used a penalized smoothing spline to control for temporal trend and seasonality with 8 degrees of freedom (df) per year, and controlled for daily temperature on the same day (lag0) and on the previous three days (lag1–3), and RH on the same day with 3 df for each. Day of the week, as well as Taiwan’s public holidays as indicator variables, were included in the GAM model to control for calendar effects. Based on our previous studies ( Qiu et al., 2021a, 2020 ), we only observed an acute and immediate effect of NMHC or VOCs on cardiopulmonary hospital admissions on the same day (lag0) in Taipei, with the recognition that hydrocarbons involved in the photochemical process are critical precursors of ozone formation ( Wu et al., 2006 ) and cannot exist over a long time in the atmosphere. Therefore, we estimated the effect of NMHC on cardiorespiratory diseases on lag0 day for each city-specific analysis.
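Because the GAM uses a log link, a fitted city-specific NMHC coefficient (on the per-ppm scale) converts to a relative risk per 0.1-ppm increment as RR = exp(0.1 * beta), with the 95% CI derived from the coefficient's standard error. A minimal sketch; the coefficient and standard error below are hypothetical illustrations, not estimates from the study:

```python
import math

def rr_per_increment(beta, se, increment=0.1, z=1.96):
    """Convert a log-link Poisson coefficient (per ppm NMHC) and its
    standard error into a relative risk per `increment` ppm with 95% CI."""
    rr = math.exp(beta * increment)
    lo = math.exp((beta - z * se) * increment)
    hi = math.exp((beta + z * se) * increment)
    return rr, lo, hi

# Hypothetical city-specific estimate: beta = 0.09 per ppm, SE = 0.02
rr, lo, hi = rr_per_increment(0.09, 0.02)
print(f"RR per 0.1 ppm = {rr:.4f} (95% CI {lo:.4f}-{hi:.4f})")
```

A RR of, say, 1.009 on this scale corresponds to the "0.9% increment" phrasing used in the Results.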
In the second stage, we conducted a random-effects meta-analysis ( F. Tian et al., 2020 ; Y. Tian et al., 2020 ) and pooled the eight city-specific effect estimates to obtain the overall associations of NMHC exposure on lag0 day with hospital admissions for respiratory and circulatory diseases, respectively. The heterogeneity between city-specific associations was estimated using Cochran's Q test and the I 2 statistic ( Higgins and Thompson, 2002 ). Although the p-values for Cochran's Q tests in the meta-analysis were all > 0.05 (showing no statistically significant heterogeneity between cities), we still chose the default random-effects meta-analysis to pool the effect estimate, as we intended to generalize the results beyond the included cities to obtain a generalizable inference, and the city-specific effect sizes differed among the eight cities ( Borenstein et al., 2010; Tufanaru et al., 2015 ). To test the sensitivity of the city-specific effect estimate of NMHC to co-pollutant exposure, we examined the Pearson correlation between NMHC and each of the criteria pollutants and included the co-pollutants with a correlation coefficient less than 0.7 in the GAM model. As NMHC were monitored at three types of air quality monitoring stations (general, traffic, and industry), we also conducted a sensitivity analysis with exposure data collected from the 21 general stations only, which may represent the population exposure better. The effect estimates were presented as relative risks (RRs) along with 95% confidence intervals (CIs) in daily cardiorespiratory hospital admission for a 0.1-ppm increase in NMHC. We executed the statistical analyses in R (version 3.5.3) ( R Core Team, 2019 ), using the mgcv package to fit the GAM and the metafor package to apply the meta-analysis.
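The second-stage pooling (a random-effects meta-analysis with Cochran's Q and the I 2 statistic) can be sketched with the common DerSimonian-Laird estimator. Note the study itself used R's metafor package; the per-city log-RR estimates and standard errors below are illustrative placeholders, not the study's values:

```python
import math

def dersimonian_laird(betas, ses):
    """Pool city-specific log-RR estimates with a DerSimonian-Laird
    random-effects model; returns pooled estimate, its SE, Q, and I^2 (%)."""
    w = [1.0 / s**2 for s in ses]                              # fixed-effect weights
    fixed = sum(wi * b for wi, b in zip(w, betas)) / sum(w)
    q = sum(wi * (b - fixed) ** 2 for wi, b in zip(w, betas))  # Cochran's Q
    df = len(betas) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                              # between-city variance
    w_star = [1.0 / (s**2 + tau2) for s in ses]                # random-effects weights
    pooled = sum(wi * b for wi, b in zip(w_star, betas)) / sum(w_star)
    se_pooled = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0        # I^2 statistic
    return pooled, se_pooled, q, i2

# Illustrative log-RR per 0.1-ppm NMHC for eight cities (placeholders)
betas = [0.012, 0.009, 0.010, 0.004, 0.002, 0.005, 0.011, 0.003]
ses   = [0.004, 0.003, 0.004, 0.006, 0.007, 0.005, 0.004, 0.008]
pooled, se, q, i2 = dersimonian_laird(betas, ses)
rr = math.exp(pooled)
print(f"pooled RR = {rr:.4f}, 95% CI = "
      f"{math.exp(pooled - 1.96*se):.4f}-{math.exp(pooled + 1.96*se):.4f}, "
      f"Q = {q:.2f}, I2 = {i2:.1f}%")
```

When Q is small relative to its degrees of freedom, tau2 collapses to zero and the pooled estimate reduces to the fixed-effect average, consistent with the non-significant Q tests reported above.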
3 Results From January 1, 2014, to December 31, 2017, we recorded a total of 2,018,207 and 2,984,498 hospital admissions for diseases of the respiratory and circulatory system, respectively, from the eight cities of Taiwan. The mean daily hospital admission count was 172.7 for respiratory diseases and 255.3 for circulatory diseases. The daily mean cardiorespiratory hospital admissions were relatively high in Taipei, Taichung, and Kaohsiung, followed by New Taipei and Tainan, and relatively low in Yunlin, Chiayi, and Pingtung, corresponding to their population size ( TWSB, 2020 ). The distributions of daily cardiorespiratory hospital admissions and NMHC concentrations for each city are shown in Table 1 . The mean NMHC concentrations were highest in New Taipei and Taipei city (0.25–0.31 ppm) and lowest in Yunlin (0.07 ppm), with the other cities in the middle range (0.14–0.19 ppm). Considering the site type, the mean NMHC level was much higher at the traffic stations than at the general and industry stations (0.37 vs. 0.17 and 0.13 ppm) ( Table 1 ). Time-series plots ( Fig. 2 ) showed the temporal trend and seasonal variation of the cardiorespiratory hospitalizations and NMHC exposure across the eight cities of Taiwan. The city-specific mean levels of the criteria air pollutants’ concentrations and weather factors showed the different pollution levels and weather conditions across the eight cities ( Table 2 ). Pearson correlation coefficients of NMHC with NO 2 and CO concentrations (r = 0.703–0.935) were quite high in the six cities other than Yunlin and Tainan, while the correlation coefficients of NMHC with PM 2.5 and O 3 concentrations were moderate to low ( Table 3 ). Therefore, we only included PM 2.5 and O 3 as the co-exposures for adjustment in the multi-pollutant model. Fig.
3 shows the estimated city-specific relative risk of cardiorespiratory hospital admissions for a 0.1-ppm increase of lag0 NMHC and the pooled RR, demonstrating an overall 0.9% (95% CI: 0.4–1.3%) and 0.8% (95% CI: 0.4–1.2%) increment of hospital admissions for respiratory and circulatory diseases, respectively. Further analyses with adjustment for PM 2.5 and O 3 in the multi-pollutant model, or restricting NMHC monitoring to the general stations only ( Fig. 4 ), modified the city-specific effect estimates only slightly, and the pooled RRs increased marginally. Such sensitivity analyses confirmed the robustness of the association between ambient NMHC exposure and cardiorespiratory hospitalizations in eight major cities of Taiwan. 4 Discussion In this multicity time-series study using daily averaged data during 2014 and 2017 from eight cities in Taiwan and a hierarchical two-stage statistical approach, we demonstrated an overall increase in hospital admission risk for cardiorespiratory diseases associated with the level of same-day exposure to airborne NMHC. Such associations were robust after controlling for co-exposure to PM 2.5 and O 3 , while the monitoring station types did not influence the results much. Based on our previous studies on NMHC and VOCs in Taipei ( Qiu et al., 2021a, 2020 ) and the knowledge that hydrocarbons involved in the photochemical process are critical precursors of ozone formation ( Wu et al., 2006 ) and cannot exist over a long time in the atmosphere, we estimated the effect of NMHC on cardiorespiratory hospitalizations on lag0 day without considering longer lag days. We observed heterogeneity of the effect estimates across cities, which is probably due to the different pollution levels, weather conditions, and sample sizes in different cities ( Tables 1 and 2 ).
The city-specific effect estimates of NMHC on cardiorespiratory hospitalizations were statistically significant in Taipei, New Taipei, Taichung, and Kaohsiung, while no significant effects were observed in Yunlin, Chiayi, Tainan, and Pingtung, corresponding to the relatively lower NMHC levels and smaller hospital admission counts in these cities. Co-pollutant adjustment for PM 2.5 and O 3 in the multi-pollutant model slightly increased the city-specific and pooled effect estimates, demonstrating the independent effect of NMHC on cardiopulmonary hospitalizations. Furthermore, excluding the traffic and industry stations and using only NMHC data monitored at the general stations modified the city-specific and pooled effect estimates very slightly, indicating that the cardiopulmonary risk of NMHC may not be very relevant to the hydrocarbon pollution sources. Some experimental studies have plausibly addressed the potential biological mechanisms underlying the health impact of ambient exposure to hydrocarbon pollution. Rodent models designed to mimic exposure to fuel oil–derived VOCs during an experimental oil spill revealed that VOC inhalation may elicit alveolar septal cell apoptosis due to DNA damage, and cause airway hyperresponsiveness, inflammation, and pulmonary emphysema ( Amor-Carro et al., 2020 ). Sub-chronic exposure to industrial volatile organic pollutants (toluene, n-hexane, etc.) at low concentrations in in vitro cell lines may damage the cell membrane, increase intracellular free calcium, and alter the redox status of glutathione ( McDermott et al., 2007 ).
Potential hazards of VOC exposure to lung health were also studied using simulated pulmonary surfactant, which demonstrated that inhalation of BTEX (i.e., benzene, toluene, ethylbenzene, and p-xylene, serving as a VOC representative) might induce pulmonary dysfunction and various lung diseases due to the alteration of gas-liquid interfacial properties of pulmonary surfactants through solubilization capacity ( Zhao et al., 2019 ). Furthermore, ambient-level BTEX exposure may be associated with sperm abnormalities, reduced fetal growth, cardiovascular and respiratory dysfunction, asthma, and sensitization to common antigens, and may have endocrine-disrupting properties ( Bolden et al., 2015 ). Household and workplace exposure to VOCs may increase the levels of biomarkers related to systemic inflammation and oxidative stress ( Chuang et al., 2017; Ma et al., 2010 ), and rapidly affect the cardiovascular system through the regulation of blood pressure, heart rate variability, and arterial dilatation, inducing cardiorespiratory diseases ( Shin et al., 2015 ). Furthermore, indoor VOC exposure in farmers may alter the immune response balance in cytokine levels and contribute to impairment of the respiratory tract ( Audi et al., 2017 ). Environmental epidemiological studies on the health impact of ambient hydrocarbons are scarce. Only a few time-series studies have explored the short-term association of hydrocarbons or VOCs with cardiopulmonary diseases. Our previous studies demonstrated that airborne NMHC exposure may increase the risk of respiratory hospital admissions ( Qiu et al., 2020 ) and ambient VOC exposure may increase the risk of cardiorespiratory hospitalizations in Taipei ( Qiu et al., 2021a ). Some hydrocarbons such as propane, isobutane, and benzene on lag0 day were also found to be associated with increased cardiovascular mortality risk in Taichung, Taiwan ( Tsai et al., 2010 ).
The associations of some VOCs, especially the BTEX (benzene, toluene, ethylbenzene, xylene), with increased risk of daily cardiopulmonary morbidity ( Ran et al., 2019, 2018b ) and circulatory mortality ( Ran et al., 2018a ) have been evaluated in Hong Kong, and in Drammen, Norway as well ( Oftedal et al., 2003 ). However, all the above-mentioned studies were conducted at a single site, sharing the same limitation that city-specific effect estimates are subject to potential heterogeneity in magnitude, diversity in model specifications, and uncertainty in generalization. The major strength of the current multicity study is the extensive dataset with large statistical power and the application of a standardized analytical approach, with which we were able to detect a weak but robust and universal association between airborne NMHC and cardiorespiratory hospitalizations and avoid publication bias. This hierarchical two-stage statistical approach has been widely used to combine the risk estimates obtained from multiple locations while accounting for within-city standard error and between-city heterogeneity of the true risks and uncertainty ( Vicedo-Cabrera et al., 2020 ). Our findings may provide timely information for the necessary regulation of hydrocarbon pollution and advise strategies for O 3 control. There are some limitations of the current study. Firstly, our results should not be considered truly overall estimates for Taiwan, as data from several cities in eastern and central Taiwan were not accessible. However, we could not deny their representativeness because data from the major cities have been included in the current analysis, covering around 82% of the total population of Taiwan ( TWSB, 2020 ).
Secondly, a time-series study using ambient NMHC levels monitored at outdoor fixed-site stations as the population exposure may induce ecological fallacy and underestimate the true exposure level, as people spend more than 80% of their time indoors while indoor VOC concentrations are typically higher than those in outdoor environments ( Payne-Sturges et al., 2004; Son et al., 2003 ). Thirdly, we could not fully disentangle the health effects of ambient NMHC exposure from other traffic- or industry-related air pollutants. Thus, we cannot rule out the possibility that the observed associations might reflect the effects of a traffic- or industry-related air pollution mixture. Sensitivity analysis excluding the traffic and industry stations and using only NMHC data monitored at the general stations did not change the observed associations, supporting the relatively independent cardiopulmonary risk of NMHC in some aspects. Fourthly, hospital admissions through the emergency source were assumed to better reflect the acute effect of air pollution exposure. However, the emergency hospital admissions were not accessible at present, and we could not identify and exclude the non-emergent and scheduled inpatients from the overall cardiorespiratory hospital admissions in Taiwan, which may underestimate the acute effect estimate of NMHC to some extent. Finally, the applied time-series approach with overall cardiorespiratory hospitalizations prevents us from understanding the differential susceptibility of the population or potential mechanisms. Further studies are warranted to clarify the research question using cause-specific hospital admission data or toxicity studies to explore the underlying mechanisms. 5 Conclusions We add to the literature robust evidence of a higher cardiorespiratory hospitalization risk in association with acute exposure to ambient NMHC in eight major cities of Taiwan.
The findings may provide information for necessary hydrocarbon pollution regulation and advise strategies for O 3 control. Funding The Vice-Chancellor's Discretionary Fund of The Chinese University of Hong Kong (project no.: 4930744 ). CRediT authorship contribution statement H Qiu : Methodology, Formal analysis, Writing – original draft. KJ Chuang : Conceptualization, Validation, Writing – review & editing. YC Fan : Data Curation. TP Chang : Resources, Project administration. CH Bai : Resources, Writing – review & editing. KF Ho : Conceptualization, Supervision, Writing – review & editing, Funding acquisition. Author contributions H Qiu: analyzed the data, interpreted the results, and wrote the manuscript; KJ Chuang: coordinated the data collection process, interpreted the results, and reviewed the manuscript; YC Fan: worked on hospital admissions data cleaning and grouping analyses; TP Chang: coordinated the data collection process; CH Bai: provided the health data, reviewed and revised the manuscript; KF Ho: defined the research theme, designed the study, and supervised the conduction of the study. All authors have read the manuscript and approved the submission. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | [
"AMORCARRO",
"ATKINSON",
"AUDI",
"BOLDEN",
"BORENSTEIN",
"CHANG",
"CHUANG",
"DAI",
"DIMITROVA",
"HIGGINS",
"LIN",
"MA",
"MCDERMOTT",
"OFTEDAL",
"PAYNESTURGES",
"QIU",
"QIU",
"QIU",
"QIU",
"RAN",
"RAN",
"RAN",
"SHIN",
"SON",
"TIAN",
"TIAN",
"TIAN",
"TORMOEHLEN",
... |
41bfff74e5fc44ea959b261aa290926f_The United States food amp drug administrations manufacturer amp user facility device experience dat_10.1016_j.fastrc.2023.100337.xml | The United States food & drug administration's manufacturer & user facility device experience database: The missing link connecting patient, surgeon & industry? | [
"Roukis, Thomas S."
] | null | It is inherently understood that operative care involves a complex intertwining of patient, surgeon and industry. However, the role of governmental agencies remains less obvious. One such entity is the Manufacturer and User Facility Device Experience (MAUDE) database developed by the United States (US) Food and Drug Administration (FDA) of the Department of Health and Human Services ( 1 https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/search.cfm ; Accessed 10 November 2023). The US FDA MAUDE database is a system for reporting adverse events related to medical devices that have been cleared through the 510k premarket submission process. Available since 1993, the unrestricted accessibility of the database allows all involved parties (i.e., health care professionals, patients, medical institutions, device importers and manufacturers) to anonymously report medical device related adverse events. Additionally, health professionals may voluntarily report medical device problems through FDA Reporting Form 3500 or by directly contacting the FDA electronically. The designation of adverse events broadly encompasses those events that may have reasonably caused or contributed to death or serious injury. Serious injury, under Code of Federal Regulations Title 21 Section 803.3, is defined as an injury that is “…life-threatening, results in permanent impairment of a body function or permanent damage to a body structure, or necessitates medical or surgical intervention to preclude permanent impairment of a body function or permanent damage to a body structure.” Permanent means “…irreversible impairment or damage to a body structure or function, excluding trivial impairment or damage.” Manufacturers are required to submit adverse events to the FDA within 30 calendar days, while device user facilities (i.e., medical institutions) have only 10 calendar days to submit once they become aware of the event occurrence.
Device user facilities must also submit an annual summary of death and serious injury reports to the FDA on January 1st for the preceding year. Healthcare professionals have no requirement to report deaths or injuries to the FDA. Since the entire US FDA MAUDE database is available to the public, it seems natural that this system would prove valuable as a research tool to determine the types, frequency and severity of complications reported for particular medical devices. Unfortunately, only a few such publications specific to foot surgery exist, both involving prostheses used in the treatment of 1st metatarsophalangeal joint degenerative joint disease. 2 Review of one of these manuscripts reveals some interesting findings. Of the 64 reports submitted between 2010 and 2018, there were 11 individual manufacturers involving 16 unique prostheses covering stemmed, hemi-phalanx, hemi-metatarsal head, and bipolar designs. However, 50% of the unique prostheses had only 1 report submitted. Further, 2 of the prostheses accounted for 44% of submissions, specifically the Toemotion® bipolar system (Anika, Bedford, MA) and Cartiva® Synthetic Cartilage Implant (Stryker, Mahwah, NJ). Of the prostheses included, 6 are no longer available for use in the US market. Although difficult to determine, a 2008 publication 3 stated that over 2 million stemmed 1st metatarsal-phalangeal joint prostheses alone had been implanted in the US ( https://www.hmpgloballearningnetwork.com/site/podiatry/article/8596 ; accessed 10 November 2023). So, we are expected to believe that over a 9-year period, involving what is conservatively estimated to be several thousand 1st metatarsal-phalangeal joint prosthetic implantations, only a small handful of adverse events occurred and were reported to the US FDA MAUDE database? This seems highly unlikely and extremely concerning to me. So, now what? 
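For readers who want to inspect MAUDE reports programmatically rather than through the web form, the FDA's openFDA project exposes device adverse-event records. The sketch below only constructs a query URL; the endpoint and the `device.brand_name` search field are assumptions drawn from openFDA's public documentation, not from this editorial:

```python
from urllib.parse import urlencode

# openFDA device adverse-event endpoint (assumed; see api.fda.gov documentation).
OPENFDA_DEVICE_EVENT = "https://api.fda.gov/device/event.json"

def maude_query_url(brand_name: str, limit: int = 100) -> str:
    """Build a query URL for adverse-event reports mentioning a device brand name."""
    params = {
        # "device.brand_name" is an assumed openFDA search field.
        "search": f'device.brand_name:"{brand_name}"',
        "limit": limit,
    }
    return f"{OPENFDA_DEVICE_EVENT}?{urlencode(params)}"

# Example: look up adverse-event reports for one of the implants discussed above.
url = maude_query_url("Cartiva")
```

Fetching the URL with any HTTP client returns JSON whose `results` array holds the individual reports; counts retrieved this way remain subject to the same under-reporting caveats raised above.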
Well, the vast majority of manuscripts submitted to Foot & Ankle Surgery: Techniques, Reports & Cases (FASTRAC) involve orthopaedic implants cleared through the US FDA 510(k) premarket submission process, yet none have included a search of the US FDA MAUDE database for known adverse events specific to those medical devices. Since the manuscripts published in FASTRAC are freely available worldwide, authors should provide a brief synopsis of the adverse events published in the US FDA MAUDE database for readers to review. This is for the safety of prospective patients being treated, foot & ankle surgeons performing the care and our industry partners selling the products employed. In order to facilitate this process, I have added to the FASTRAC “Guide for Authors” ( https://www.sciencedirect.com/journal/foot-and-ankle-surgery-techniques-reports-and-cases/publish/guide-for-authors ) the requirement that authors provide a summary of adverse events that have been published in the US FDA MAUDE database for the orthopaedic implants they mention in their manuscript. Over time, armed with this information, our audience will become more comfortable searching the US FDA MAUDE database and, in turn, better educated on the potential adverse events associated with the various orthopaedic implants in question. Furthermore, the foot & ankle surgeons reading FASTRAC will be able to determine whether the products employed by the manuscript authors warrant the medical-legal risks of a higher-than-anticipated adverse event rate compared with what they are currently using in their own practices. In doing so we may just find the missing link connecting patients, surgeons and industry. Be well, Tom | [
"ROUKIS",
"AKOH",
"METIKALA"
] |
c102fc05d7e94f5a894731f02c9540de_Dupilumab Effects on Innate Lymphoid Cell and Helper T Cell Populations in Patients with Atopic Derm_10.1016_j.xjidi.2021.100003.xml | Dupilumab Effects on Innate Lymphoid Cell and Helper T Cell Populations in Patients with Atopic Dermatitis | [
"Imai, Yasutomo",
"Kusakabe, Minori",
"Nagai, Makoto",
"Yasuda, Koubun",
"Yamanishi, Kiyofumi"
] | Group 2 innate lymphoid cells (ILCs) are thought to contribute to the pathogenesis of atopic dermatitis (AD). IL-4 stimulates T helper type 2 (Th2) cells and ILC2s to proliferate and produce cytokines. Dupilumab, an antibody against the IL-4 receptor, is used in AD therapy. We speculated that its efficacy might involve blocking the activation of Th2 cells and ILC2s via IL-4. Here, we examined circulating Th2 cells and ILC2s in 27 Japanese patients with AD before and after the administration of dupilumab. Between 0 and 4 months after dupilumab administration, the percentages of Th2 cells and ILC2s were decreased. Notably, ILC2/3 ratio was decreased after dupilumab treatment. Interestingly, ILC2/3 ratio before dupilumab treatment was significantly higher in high responders than in low responders to dupilumab. To resolve the molecular signatures of Th2 cells and ILC2s in AD, we sorted CD4+ T cells and ILCs from peripheral blood and analyzed their transcriptomes using the BD Rhapsody Single-cell RNA sequencing system. Between 0 and 4 months after dupilumab administration, the Th2 and ILC2 cluster gene signatures were downregulated. Thus, dupilumab might improve dermatitis by suppressing the Th2 cell and ILC2 populations and altering the Th2 and ILC2 repertoire in patients with AD. | Introduction Group 2 innate lymphoid cells (ILCs) are thought to contribute to the pathogenesis of atopic dermatitis (AD) ( Imai, 2019 ). ILC1s (NK cells), ILC2s, and ILC3s mirror the corresponding T helper type (Th) 1, Th2, and Th17 cells, respectively. Previously, we and others reported that IL-4 stimulates Th2 cells and ILC2s to proliferate and produce type 2 cytokines ( Imai et al., 2019 ; Motomura et al., 2014 ). Dupilumab, an antibody against IL-4 receptors, is widely used in AD therapy ( Beck et al., 2014 ; Guttman-Yassky et al., 2019 ), and we speculated that its efficacy in AD treatment might involve blocking the proliferation and activation of Th2 cells and ILC2s via IL-4. 
A recent report demonstrated a gradual decrease in the percentage of Th2 cells after dupilumab treatment ( Trichot et al., 2021 ). However, the effects of dupilumab on ILC2s are not fully understood. Result In this study, we examined the proportion and number of circulating Th2 cells and ILC2s in 27 Japanese patients with AD before and after dupilumab administration (details of patient information are provided in Table 1 ). Treatment began with a 600-mg loading dose of dupilumab, followed by 300 mg dupilumab every other week, combined with topical corticosteroids and/or tacrolimus. Eczema Area and Severity Index was measured to assess disease activity, and blood was collected at the baseline visit (before therapy) and week 16, with a window of assessment of approximately 7 days. Dupilumab treatment significantly improved dermatitis and decreased total serum IgE and serum CCL17 (also known as TARC) levels in patients with AD, although some patients were low responders ( Figure 1 a). We used flow cytometry to assess the populations of Th2 cells (CD4 + , CCR6 − , CXCR3 − , and CCR4 + ) and ILC2s (Lin − , CD127 + , and CRTH2 + ) ( Figure 1 b and c). Between 0 and 4 months after dupilumab administration, the percentage of Th2 cells (among total CD4 + T cells) and ILC2s (among all ILCs) were decreased from 16.4 ± 7.6% to 14.5 ± 5.8% (mean ± SD) and 28.1 ± 16.1% to 21.2 ± 12.1%, respectively ( Figure 1 d and e). The absolute numbers of Th2 cells and ILC2s were significantly decreased. Both Th2 cells and ILC2s were more depleted in high responders than in low responders to dupilumab, suggesting that these cells are important in IL-4 receptor signaling pathways in AD pathogenesis. The percentage and absolute number of Th17 cells tended to increase, although this was not statistically significant ( Figure 1 f). The percentage and absolute number of ILC3s increased significantly after dupilumab treatment ( Figure 1 g). 
The absolute numbers of Th1 cells tended to decrease slightly ( Figure 1 h), and the percentage and absolute number of ILC1s were unchanged after dupilumab treatment ( Figure 1 i). Next, we examined Th2/17 and ILC2/3 balance. Th2/Th17 ratio tended to decrease, although it was not statistically significant ( Figure 2 a). ILC2/3 ratio was significantly decreased after dupilumab treatment ( Figure 2 b). ILC2/3 ratio and the absolute numbers of ILC2s before dupilumab treatment were significantly higher in high responders than in low responders to dupilumab ( Figure 2 c and d), suggesting that dupilumab might be more effective in patients whose immune balance is originally skewed toward ILC2. Transcriptomic analyses of pretreatment and post-treatment bulk tissue skin biopsy specimens from patients with AD treated with dupilumab have been reported ( Hamilton et al., 2014 ); however, these analyses did not distinguish the cell types. By contrast, single-cell RNA sequencing (scRNA-seq) is able to uncover cell-specific changes in gene expression. To resolve the molecular signatures of Th2 cells and ILC2s from patients with AD before and after dupilumab treatment, we sorted CD4 + T cells and ILCs from peripheral blood and analyzed their transcriptomes using the BD Rhapsody scRNA-seq analysis system (BD Biosciences, San Jose, CA) ( Figure 3 a–h) ( Hasegawa et al., 2019 ). By using hierarchical clustering as described in Materials and Methods, T cells were split into four clusters, namely, naive T, Th1/17, Th2, and regulatory T cells ( Figure 3 a–c). Between 0 and 16 weeks after dupilumab administration, Th2 cluster gene signatures were remarkably changed ( Figure 3 d). ILCs were grouped into three clusters: NK/ILC1, ILC2, and ILC3 ( Figure 3 e–g). Between 0 and 16 weeks after dupilumab administration, ILC2 cluster gene signatures were downregulated ( Figure 3 h). These findings suggest that dupilumab administration affects both Th2 and ILC2 gene signatures. 
Discussion There are a few reports of scRNA-seq analysis of AD skin ( He et al., 2020 ; Rojahn et al., 2020 ), but none has identified cell clusters corresponding to Th2 cells and ILC2s because those studies analyzed all cell types, including keratinocytes, fibroblasts, and lymphocytes. Therefore, we solved this problem by initially sorting CD4 + T cells and ILCs and subsequently investigating the isolated cells by using scRNA-seq. The study has some limitations. We were not able to separate Th1 and Th17 populations by scRNA-seq analysis. Therefore, we used both flow cytometry and scRNA-seq analysis. The number of patients recruited is small, implying the need to validate our findings in a larger cohort of patients. In addition, future studies should investigate how ILC2s change when AD is treated with anti–IL-13 antibody or JAK inhibitor. Our results demonstrating dupilumab’s tendency to decrease Th2 cells and increase Th17 cells in the peripheral blood of patients with AD are consistent with those of a previous report ( Trichot et al., 2021 ). We further showed by scRNA-seq analysis that dupilumab administration significantly altered Th2 cluster gene signatures. An increased frequency of circulating ILC2s in AD was previously reported ( Mashiko et al., 2017 ); in vitro and animal experiments demonstrated that IL-4 is involved in the proliferation and activation of ILC2s ( Imai, 2019 ). However, the actual mechanism has not been clarified in human AD. By comparing the number of ILCs in peripheral blood before and after administration of dupilumab in the same patient, we found that inhibition of IL-4 receptor signaling reduces the percentages and absolute numbers of ILC2s in the peripheral blood of patients with AD. Furthermore, ILC2s were more depleted in high responders than in low responders to dupilumab ( Figure 1 e), suggesting that activation of ILC2s by IL-4 might be involved in human AD. ILC2s might be an indicator of the effect of dupilumab. 
By measuring ILC2/ILC3 ratio before administration of dupilumab, it may be possible to predict the effect of dupilumab ( Figure 2 c). Thus, dupilumab might improve dermatitis by suppressing circulating Th2 cell and ILC2 populations and altering the Th2/Th17 or ILC2/ILC3 repertoire in patients with AD. Materials and Methods Patients This study was conducted at Hyogo College of Medicine Hospital, Nishinomiya, Japan, to assess the effects of dupilumab in patients with moderate-to-severe AD who visited the hospital from April 2018 to March 2020. This study was approved by the Ethics Review Board of Hyogo College of Medicine (No. 0032 and 0212) and conformed to the Declaration of Helsinki. Blood was collected after obtaining written, informed patient consent. The diagnosis of AD was made according to the Hanifin and Rajka criteria. Definition of high and low responder to dupilumab To analyze the factors dictating high or low clinical responsiveness to dupilumab, patients were subgrouped into two groups according to the rank order of percentage change of Eczema Area and Severity Index from baseline at week 16. Based on the classification used in previous reports ( Nettis et al., 2020 ; Tavecchio et al., 2020 ), 75% of patients in this study were tentatively defined as high responders and the rest (25%) were tentatively defined as low responders ( Figure 1 a). 
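The responder split described above (rank patients by percentage change in Eczema Area and Severity Index from baseline at week 16, then label the top 75% high responders) can be sketched in a few lines of Python. Function and variable names are illustrative only; the study's actual analysis was done in GraphPad Prism, not with code like this:

```python
def pct_change(baseline: float, week16: float) -> float:
    # Percentage reduction in EASI from baseline (positive = improvement).
    return 100.0 * (baseline - week16) / baseline

def classify_responders(scores, high_fraction=0.75):
    # scores: list of (patient_id, baseline_easi, week16_easi) tuples.
    # Rank by percentage improvement; the top 75% are tentatively labeled
    # "high responders" and the rest "low responders", mirroring the paper.
    ranked = sorted(scores, key=lambda s: pct_change(s[1], s[2]), reverse=True)
    n_high = round(high_fraction * len(ranked))
    return ([pid for pid, *_ in ranked[:n_high]],
            [pid for pid, *_ in ranked[n_high:]])

# Invented example values, not patient data from the study.
high, low = classify_responders([
    ("P1", 40.0, 4.0),   # 90% improvement
    ("P2", 30.0, 15.0),  # 50% improvement
    ("P3", 25.0, 20.0),  # 20% improvement
    ("P4", 35.0, 31.5),  # 10% improvement
])
```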
Antibodies Anti-human Lineage Cocktail-FITC (CD3, CD14, CD16, CD19, CD20, CD56; UCHT1, HCD14, 3G8, HIB19, 2H7, HCD56; #348801), anti-human CD123-FITC (6H6, #306014), anti-human FcεRIα-FITC (AER-37, #334608), anti-human CD45-PerCP/Cy5.5 (HI30, #304028), anti-human CD117-APC (104D2, #313206), anti-human CRTH2 (CD294)-PE/Cyanine7 (BM16, #350117), anti-human CD127 (IL-7Rα)-Brilliant Violet 421 (A019D5, #351309), anti-human CD183 (CXCR3)-PE (G025H7, #353705), anti-human CD4-PerCP/Cyanine5.5 (OKT4, #317427), anti-human CD196 (CCR6)-PE/Cyanine7 (G034E3, #353417), anti-human CD194 (CCR4)-APC (L291H4, #359407), and anti-human CD45-Brilliant Violet 510 (2D1, #368525) were purchased from BioLegend (San Diego, CA). Anti-CD16/32 antibody was from Miltenyi Biotec (Auburn, CA). Flow cytometry We collected peripheral blood samples from the patients and isolated lymphocytes from human whole blood using a lymphocyte separation solution kit (#20839-04, Nacalai Tesque, Kyoto, Japan), according to the manufacturer’s instructions. Residual erythrocytes were lysed using ACK Lysing Buffer (Thermo Fisher Scientific, Waltham, MA). Cells were preincubated with anti-CD16/32 antibody for blocking and were subsequently stained with the appropriate antibody for 30 minutes at 4 °C. Stained cells were analyzed using a FACSAria III flow cytometer (BD Biosciences), and data were analyzed using FlowJo software (v10.5) (Tree Star, Ashland, OR). The classification of cells is as follows: ILC2s, lineage markers (Lin) (CD3, CD14, CD16, CD19, CD20, CD56, CD123, FcεRIα) – CD45 + CD127 + CRTH2 + cells; ILC3s, Lin – CD45 + CD127 + CD117 + CRTH2 – cells; total ILCs, Lin − CD45 + CD127 + cells; Th2 cells, CD45 + CD4 + CCR6 − CXCR3 − CCR4 + cells; and Th17 cells, CD45 + CD4 + CCR6 + CCR4 + cells. scRNA-seq analysis CD45 + CD4 + T cells or Lin − CD45 + CD127 + cells were isolated from blood samples using a FACSAria III cell sorter (BD Biosciences). 
Targeted scRNA-seq analysis was performed using the BD Rhapsody Single-Cell Analysis System (BD Biosciences), according to the manufacturer’s instructions. For the library construction, we used the BD Human Single-Cell Multiplexing Kit (#633781, BD Biosciences) and the BD Rhapsody Immune Response Targeted Panel for Human (#633750, BD Biosciences), which consisted of primer sets for 399 genes. Sequencing was performed using an Illumina HiSeq X (Illumina, San Diego, CA), and the fastq files were converted using BD Rhapsody Analysis Pipeline (BD Biosciences) and analyzed using the BD DataView software v.1.2.2 (BD Biosciences). Hierarchical clustering Hierarchical clustering is a method of cluster analysis that attempts to build a hierarchy of clusters. All analyses were performed in BD DataView software using MATLAB (R2014a, MathWorks, Natick, MA) ( Joshi et al., 2017 ) built-in cluster function with the maxclust option, according to the manufacturer’s instructions. The specific formula is described elsewhere ( Joshi et al., 2017 ; Zhang et al., 2018 ). Statistical analysis Data were analyzed using GraphPad Prism version 8 (GraphPad Software, San Diego, CA). A two-tailed unpaired t -test was used for single comparisons. The Wilcoxon matched-pairs signed rank test was used for intragroup comparison. A P -value < 0.05 was considered statistically significant. Data availability statement No datasets were generated during this study. 
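The Wilcoxon matched-pairs signed-rank test used above for intragroup comparisons reduces to ranking the absolute paired differences. A minimal pure-Python sketch of the test statistic follows (zero differences discarded, average ranks for ties, no P-value lookup; the study itself used GraphPad Prism):

```python
def wilcoxon_signed_rank(before, after):
    """Return the Wilcoxon matched-pairs statistic W = min(W+, W-)."""
    # Paired differences, discarding zeros as in the standard procedure.
    diffs = [b - a for b, a in zip(before, after) if b != a]
    # Rank absolute differences, assigning average ranks to ties.
    ordered = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ordered):
        j = i
        while j + 1 < len(ordered) and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

In practice one would compare W against the exact null distribution (or a normal approximation for larger n) to obtain the P-value.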
ORCIDs Yasutomo Imai: http://orcid.org/0000-0003-3169-5717 Minori Kusakabe: http://orcid.org/0000-0003-4422-4471 Makoto Nagai: http://orcid.org/0000-0003-3638-706X Koubun Yasuda: http://orcid.org/0000-0002-3533-1702 Kiyofumi Yamanishi: http://orcid.org/0000-0003-0484-2320 Author Contributions Conceptualization: YI; Data Curation: MK, MN, YI; Formal Analysis: YI; Funding Acquisition: MN, YI; Investigation: KYas, MK, MN, YI; Methodology: KYas, YI; Project Administration: YI; Resources: MK, NM, YI, KYam; Supervision: KYam; Validation: KYas, YI; Visualization: MK, YI; Writing - Original Draft Preparation: YI, MK; Writing - Review and Editing: KYas, YI Acknowledgments We thank N. Kanazawa at Hyogo College of Medicine for the enthusiastic discussions and Y. Sakaguchi and members of the Joint-Use Research Facilities and the Center for Comparative Medicine, Hyogo College of Medicine, for their assistance. This work was supported in part by JSPS KAKENHI 20K17331 for MN, 18K08284 for YI, Japanese Dermatological Association Dermatological Research Fund (supported by ROHTO Pharmaceutical Co, Ltd) for YI, and Hyogo College of Medicine Grant for Research Promotion 2020 for YI. Conflict of Interest The authors state no conflict of interest. | [
"BECK",
"GUTTMANYASSKY",
"HAMILTON",
"HASEGAWA",
"HE",
"IMAI",
"IMAI",
"JOSHI",
"MASHIKO",
"MOTOMURA",
"NETTIS",
"ROJAHN",
"TAVECCHIO",
"TRICHOT",
"ZHANG"
] |
78ec4f9133de4f5caaf253044ff2b087_Bacteremia in Patients Undergoing Debridement Antibiotics and Implant Retention Leads to Increased R_10.1016_j.artd.2022.05.014.xml | Bacteremia in Patients Undergoing Debridement, Antibiotics, and Implant Retention Leads to Increased Reinfections and Costs | [
"Rosas, Samuel",
"Hegde, Vishal",
"Plate, F. Johannes",
"Dennis, Douglas",
"Jennings, Jason",
"Bracey, Daniel N."
] | Background
Debridement, antibiotics, and implant retention (DAIR) is a common treatment for acute prosthetic joint infection (PJI). The effects of concurrent bacteremia at the time of DAIR are poorly understood. This study sought to determine whether patients with bacteremia at the time of DAIR have higher reinfection rates.
Material and methods
A retrospective review of a national database was performed. Patients treated with DAIR (hip or knee arthroplasty) after a diagnosis of PJI were identified. DAIR patients who also had a diagnosis of bacteremia were matched to patients without bacteremia by comorbidities and Charlson Comorbidity Index score. The primary outcome was reinfection or continued infection at 90 days and 6, 12, and 24 months after DAIR. Ninety-day Medicare charges were compared between groups. Survival probabilities were used for survival comparisons.
Results
A total of 9945 patients underwent DAIR after a diagnosis of PJI. Seven hundred seven patients underwent DAIR with an associated diagnosis of bacteremia. Three hundred thirty-four DAIR patients with bacteremia were successfully matched to patients without bacteremia by age, gender, and comorbidities. DAIR survivorship was significantly worse in those with bacteremia at 90 days (51.5% vs 65.9%) and 6 (43.1% vs 60.5%), 12 (36.5% vs 56.0%), and 24 months (32.6% vs 53.3%) after DAIR. The 90-day costs of DAIR were significantly greater in PJI patients with bacteremia (mean: $14,722, standard deviation: $4,086 vs mean: $8,052, standard deviation: $4,153, P < .01).
Conclusions
Patients undergoing DAIR with bacteremia are at an increased risk of reinfection or continued infection. Ninety-day costs are significantly increased (over 50%) in patients with bacteremia vs those without bacteremia. | Introduction With the rising number of hip and knee arthroplasties, a concomitant increase in the number of complications including prosthetic joint infections (PJIs) is expected. Various treatment modalities for PJI are available based on patient comorbidities, time since arthroplasty, patient-specific risk factors for further complications, and overall health status. Current treatment options include single-stage revision, 2-stage revision, chronic antibiotic suppression, or debridement with irrigation, antibiotics, and implant retention (DAIR). DAIR is commonly utilized in patients medically unfit to undergo a 2-stage revision and for those with acute-onset PJI. Retrospective Medicare data presented by Boyle et al. showed increasing utilization of DAIR in the treatment of PJI among US total knee arthroplasty (TKA) patients with a high number of comorbidities and of older age [ 1 ]. The reported success rate with DAIR has been highly variable, ranging anywhere from 16% [ 2 ] to 77% [ 3 ]. These studies highlight our inability to predict which PJI patients can be successfully treated with DAIR [ 4 ]. One patient variable given closer attention in recent years is the presence of bacteremia at the time of DAIR. Two previous cohort studies of 22 and 43 patients showed that positive blood cultures decrease treatment success of PJIs [ 5 , 6 ]. To date, no study has utilized Medicare database queries to determine if bacteremia decreases success of DAIR for acute PJIs, and no study has investigated the potentially increased PJI costs in this setting. The current study cohort of 334 DAIR patients with concurrent bacteremia is the largest reported in the literature. 
The purpose of this study was to evaluate whether patients with bacteremia undergoing DAIR for PJI after total hip arthroplasty (THA) or TKA are at an increased risk of treatment failure compared to those without bacteremia. We hypothesized that bacteremia, defined by positive blood cultures at the time of DAIR for THA or TKA PJI, would adversely affect outcomes of DAIR, leading to decreased survivorship in that patient cohort compared to DAIR patients without bacteremia. Similarly, we hypothesized that the presence of bacteremia would significantly increase costs associated with PJI treatment. Material and methods A retrospective case-control study was conducted utilizing a commercially accessible server to query the Medicare Dataset of the Standard Analytical Files. Institutional review board approval from our institution was obtained prior to this study. This dataset contains the entire patient population of Medicare patients during the duration of the patients’ enrollment in Medicare. Briefly, the PearlDiver Server (Boulder, CO) is a commercially available server that houses patient records in a Health Insurance Portability and Accountability Act (HIPAA)–compliant fashion. The server houses data from private payers and Medicare and allows for longitudinal evaluation of patient cohorts. Patients were identified through International Classification of Disease 9th Revision (ICD-9) and ICD-9 procedure codes ( Appendix 1 ). The current study utilized the Medicare dataset housed within the server given that most arthroplasties occurring in the United States are performed in patients over the age of 65. The dataset contains over 55 million patient records from 2005 to 2014, which represents 100% of the Medicare sample. Study cohort identification Patients with a diagnosis of PJIs were identified in the database based on ICD codes. 
From this cohort, patients who underwent DAIR for THA (revision with femoral head and acetabular liner exchange) or TKA (revision with tibial insert exchange) were extracted by ICD-9 procedure codes ( Appendix 1 ). Patients with bacteremia at the time of DAIR for PJIs were then identified. These patients were matched to a cohort of patients who underwent DAIR for the same indication without bacteremia based on age, gender, comorbidities, and Charlson Comorbidity Index (CCI) score. Patients were matched to control for comorbidities believed to increase risk of infection or limit a patient’s ability to clear infection with surgical debridement. For example, it is well known that alcohol abuse has been linked to worse outcomes after TJA [ 7 ] or that patients with HIV have an increased risk of deep vein thrombosis [ 8 ]. We attempted to match patients on these comorbidities to decrease effects of confounding comorbidities. We recognized this would limit our ultimate sample size but felt it was necessary for appropriate cohort comparison in a study already limited by claims-based data. Other comorbidities that have been previously correlated with PJIs were thus also included [ 9–13 ]. Figure 1 demonstrates the study design as suggested by the CONSORT guidelines. Costs were evaluated based on Medicare reimbursements. This is a previously used method of describing costs that allows for external description of expenditure by Medicare [ 14 ]. The 90-day costs were used based on current bundled payment initiatives. Survival was assessed as a new diagnosis of PJIs after the DAIR was performed. Endpoint assessment was performed at 90 days and 6, 12, and 24 months. This study was designed to comply with the recently published guidelines for database studies from the leadership of the Journal of Arthroplasty [ 15 ]. Statistical evaluation The R statistical package available within the PearlDiver server was used to conduct multivariate and univariate analyses. 
Parametric and nonparametric testing on continuous data was performed with SPSS, version 20 (IBM Corp, Armonk, NY) by way of Student's t-tests and Mann-Whitney tests. Chi-square testing was used to compare the percentage of individual comorbidities within the matched cohorts. A multivariate regression was conducted to determine whether bacteremia was associated with reinfection when accounting for age, gender, and CCI. Finally, Kaplan-Meier survival curves were used to assess survival after DAIR. Survival was defined as the time free of reinfection from the date of DAIR until a new PJI diagnosis was identified by a new ICD-9 code. Results Study population Within the Medicare records, 73,435 patients who underwent modular component exchange of a THA or TKA were identified between the years of 2005 and 2014. During that same time period, 1,519,749 patients had a diagnosis of PJIs and 9945 patients underwent DAIR after a diagnosis of PJIs. Ultimately, 707 patients (7.1%) were diagnosed with bacteremia at the time of DAIR. Of these 707 patients, 334 were successfully matched by age, gender, and comorbidities to patients undergoing DAIR without concurrent bacteremia. Study cohort characteristics Each cohort comprised 43% females. The majority of patients were aged 65 to 69 years (28%), and those aged 64 years and younger comprised 25%. Table 1 demonstrates the characteristics of the patients included in the final cohort. Of note, gender, age, and region where the procedure took place were all similar within the 2 compared cohorts ( P > .05 for all). Similarly, when comparing the distribution of 21 comorbidities within the 2 groups, we found no significant differences between both patient cohorts, demonstrating a successful matching process ( Table 2 , P > .05 for all). 
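The survivorship curves described in the statistical methods rest on the Kaplan-Meier product-limit estimator. A minimal pure-Python sketch follows, for illustration only; the study itself used the R package within the PearlDiver server, and the variable names here are not from the study:

```python
def kaplan_meier(times, events):
    # times: follow-up time (e.g., days from DAIR to reinfection or censoring).
    # events: 1 = reinfection observed, 0 = censored at that time.
    # Returns (time, survival probability) pairs at each observed event time.
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)  # events at t
        n = sum(1 for ti in times if ti >= t)                               # still at risk
        if d:
            surv *= 1 - d / n  # product-limit update
            curve.append((t, surv))
    return curve

# Invented toy follow-up data, not the study's cohort.
curve = kaplan_meier([10, 20, 20, 30], [1, 1, 0, 0])
```

Each multiplicative step drops the survival estimate by the fraction of at-risk patients who fail at that time, which is exactly how the 90-day through 24-month survivorship percentages are read off the curve.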
Outcome comparison The mean inpatient length of stay in patients undergoing DAIR with bacteremia (6.47 days, standard deviation [SD]: 2.3) was significantly greater than that in patients without bacteremia (3.83 days, SD: 1.06, P = .004). The multivariate regression analysis demonstrated that age under 65 years and age over 84 years were associated with decreased risk of reinfection following DAIR for PJIs, while increasing CCI, male gender, and bacteremia at the time of DAIR were all significant predictors of reinfection. Table 3 demonstrates the adjusted odds ratios for reinfection after DAIR with the respective 95% confidence intervals. Most notably, bacteremia at the time of DAIR had the highest odds ratio (OR) at 24 (95% CI: 18.37 – 31.45). Similarly, comparative survivorship curves between the 2 patient cohorts ( Fig. 2 ) showed significantly worse survivorship in bacteremic patients at 90 days (51.5% vs 65.9%, P = .001) and 6 (43.1% vs 60.5%, P < .001), 12 (36.5% vs 56.0%, P < .001), and 24 months (32.6% vs 53.3%, P < .001) ( Table 4 ). Reimbursement comparison demonstrated that mean reimbursements for those who were bacteremic at the time of DAIR were significantly greater (mean: $14,722, SD: $4,086 vs mean: $8,052, SD: $4,153, P = .001), with reimbursements in the bacteremia cohort averaging 183% of those in the matched cohort (annual variation: 121% to 406%). Discussion The current study sought to identify whether bacteremia at the time of DAIR for treatment of PJIs was associated with increased reinfection rates and costs within a cohort of Medicare patients treated in the United States. The failure rate of DAIR in patients with bacteremia was 48.5% at 90 days compared to 34.1% in those without bacteremia. The failure rate in the bacteremia cohort continued increasing to as high as 67.4% at 2 years vs 46.7% in the matched cohort. 
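The adjusted odds ratio for bacteremia (OR 24.03, 95% CI 18.37–31.45) came from multivariate regression; for intuition, an unadjusted odds ratio and its 95% Wald confidence interval can be computed from a simple 2x2 table as below. The cell counts are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: a = exposed & failed,   b = exposed & infection-free,
    #            c = unexposed & failed, d = unexposed & infection-free.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Wald approximation
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts only (the paper does not report raw 2x2 cells).
or_, lo, hi = odds_ratio_ci(40, 10, 20, 40)
```

A multivariate model adjusts this crude ratio for age, gender, and CCI, which is why the published OR cannot be reproduced from marginal percentages alone.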
The current study adjusted for 21 confounding variables that have previously been suggested to increase the risk of failure following DAIR and ultimately found that concurrent bacteremia is an independent risk factor for failure after DAIR (odds ratio: 24.03). Treatment of PJIs with DAIR has previously been addressed, with most literature focusing on how treatment success rates correlate with virulence of the isolated pathogen or host comorbidities [ 16 , 17 ]. Only a limited subset of literature has previously considered how bacteremia at the time of DAIR may affect success rates of PJI treatment. Klement et al. retrospectively reviewed the records of 320 patients treated for THA and TKA PJI at 2 academic institutions and found that blood cultures were obtained in 57% of patients [ 6 ]. In the 43 patients with positive blood cultures, blood and synovial culture data matched in 82% of cases. Logistic regression analysis showed that decreased treatment success was associated with increased comorbidity index, 2-stage treatment, and positive blood culture at the time of treatment. Treatment success was only 65.1% in those with positive blood cultures (n = 43) compared to 85% in blood culture–negative patients ( P = .013). Positive blood cultures were associated with what the authors considered a greater disease burden, indicated by higher synovial white blood cell counts, higher serum C-reactive protein levels, and higher mortality rates than the nonbacteremic cohort. A similar study by Kuo et al. [ 5 ] used a similar retrospective design as Klement et al. [ 6 ] but focused their hypothesis on PJI patients treated specifically with DAIR. Preoperative blood cultures were obtained in 49 acute PJI patients treated with DAIR, and 22 of these patients (45%) had positive blood cultures. 
Patients with positive blood cultures again had higher comorbidity indices and elevated WBC counts compared to PJI patients with negative blood cultures, similar to the findings by Klement et al. DAIR treatment success 1 year postoperatively was significantly lower in bacteremic patients (36.3%) than in patients with negative blood cultures (66.7%, P = .047). Their analysis found that positive blood cultures, polymicrobial infections, and elevated comorbidity indices were all significant predictors of failed treatment, but after stepwise multivariate logistic regression analysis, only positive blood cultures were a significant predictor of failed treatment. The current study presents the largest cohort of PJI patients treated with DAIR in the setting of concurrent bacteremia and is the first investigation to pool Medicare data for this study population. The failure rates described in our study are higher than those previously reported. Our study reported 2-year outcomes, included only patients treated with DAIR, and excluded PJI patients treated with 2-stage procedures, which are traditionally more successful in eradicating infections than DAIR. Additionally, the data presented by Kuo et al. focused on acute hematogenous PJI diagnosed within 3 months of the index procedure, while our dataset included all patients treated with DAIR, which could include either acute or chronic PJI. Consistent with the conclusions of Klement et al. and Kuo et al., we found that DAIR in the setting of bacteremia results in significantly worse survivorship. Our 2-year survivorship rates are the longest reported to date. The limited success of DAIR in patients with bacteremia may encourage providers to more routinely obtain blood cultures when evaluating PJI patients to better risk stratify which patients are more appropriate for DAIR vs 2-stage exchange. 
While we previously believed that treatment success was largely dependent on the virulence of the isolated organism, more recent data suggest that the disease burden in the setting of bacteremia may be a more relevant predictor of treatment success. Further study is required to understand the timing of bacteremia and its effects on eradication of the infection. Patients may benefit from clearance of bacteremia prior to PJI debridement and should only undergo definitive DAIR once they have negative blood cultures. Alternatively, bacteremic patients may be indicated for 2-stage revision regardless of whether blood cultures are positive before or after DAIR. The current study found increased costs in the 90-day episode of care for patients with bacteremia. This finding reflects the expected increase in resource utilization required to care for these patients. Multiple consulting services, possibly a higher level of care, long-term intravenous antibiotics, additional surgeries, additional implants, and greater use of resources in both the inpatient and outpatient care settings all significantly increase healthcare expenditure. No previous study has assessed differences in reimbursement between PJI patients undergoing DAIR with or without bacteremia. The findings of the current study should alert policymakers and practice leaders to this resource-intense cohort of patients. Furthermore, the cost evaluation in this study can help future studies establish the cost-effectiveness of DAIR vs single-stage exchange vs 2-stage exchange in certain patients. Limitations The results of this retrospective study should be interpreted recognizing the inherent limitations of a large database analysis. The cohort of patients with bacteremia was identified through claims-based diagnosis and procedural codes that are subject to error in coding, over-coding, and/or under-coding.
Without access to medical records and operative notes, we are limited in what we know about the severity of infection, the extent of debridement, and clinical decision-making. For example, without access to the blood culture data, we are unable to determine whether bacteremia was always diagnosed with positive blood cultures. Additionally, we do not know when blood cultures were taken or what percentage of patients undergoing DAIR had blood cultures. All patients undergoing DAIR were included in our cohort, and we were unable to determine whether this was performed for acute PJI, acute hematogenously seeded PJI, or possibly chronic PJI. Acuity of infection would likely confound the efficacy of DAIR in the setting of bacteremia. Furthermore, the retrospective nature of this study and the relatively limited sample size are also factors to consider when interpreting its results. Our ultimate sample size was significantly reduced after matching, but we felt appropriate matching was necessary in a study already limited by claims-based data. We did not have access to the culture data to accurately assess the effects of different organisms on treatment success. Additionally, there were no standardized criteria for obtaining blood cultures, which is a limitation inherent to all retrospective studies previously published on this topic. Reimbursement data may also fail to capture a large portion of the indirect healthcare-related costs associated with the care of these patients, attributable to the long recovery process and inability to work. Also, the nature of Medicare reimbursement analysis limits the ability to compare costs between payers. Other medical and social determinants of outcomes, such as time to infection, microorganism involved, and host factors such as immune status, were not evaluated and could potentially alter our findings. This is the largest matched study cohort to date, however, and our findings are consistent with previous reports.
Conclusions PJI patients with concurrent bacteremia at the time of DAIR have worse survivorship and incur increased costs during the episode of care compared to a comorbidity-matched cohort of PJI patients undergoing DAIR without bacteremia. Additional study is needed to determine whether bacteremic patients are indicated for 2-stage exchange or whether they would still be appropriate for DAIR after their bacteremia has resolved. Appendix A Supplementary data Conflicts of interest F. Johannes Plate is a paid consultant for Smith & Nephew, holds stock options in Eventum Orthopaedics, receives research support from Biocomposites Inc. and Aerobiotix Inc., and is a member of the editorial/governing board of the Journal of Arthroplasty. Douglas Dennis receives royalties from DePuy, A Johnson & Johnson Company, is a member of the speakers bureau of Corin U.S.A. and DePuy, A Johnson & Johnson Company, is a paid consultant for Corin U.S.A. and DePuy, A Johnson & Johnson Company, holds stock options in Corin U.S.A. and Joint Vue, receives research support as a principal investigator from DePuy, A Johnson & Johnson Company, Corin U.S.A., and Porter Adventist Hospital, receives royalties from Wolters Kluwer Health - Lippincott Williams & Wilkins, and is a member of the editorial/governing boards of Clinical Orthopaedics and Related Research, the Journal of Arthroplasty, the Journal of Bone and Joint Surgery – American, and Orthopedics Today. Jason Jennings is a paid consultant for Total Joint Orthopedics and Xenex, holds stock options in Xenex, and receives research support from DePuy, A Johnson & Johnson Company, Corin U.S.A., and Porter Adventist Hospital. Samuel Rosas is an elite reviewer for the Journal of Arthroplasty and a reviewer for AJSM, JOEI, JSES, and OJSM. All other authors declare no potential conflicts of interest. For full disclosure statements refer to https://doi.org/10.1016/j.artd.2019.12.004 .

Appendix 1. List of ICD-9 diagnosis codes used for this study:
996.66 Infection due to internal joint prosthesis
996.67 Infection due to other internal orthopaedic device, implant and graft
998.51 Infected postoperative seroma
998.59 Other postoperative infection
730.36 Periostitis without mention of osteomyelitis, lower leg
008.4 Other specified bacteria
790.7 Bacteremia
796.4 Other abnormal clinical findings
795.39 Nonspecific positive culture findings

List of ICD-9 procedure codes used for this study:
0073 Revision of total hip replacement, acetabular liner and/or femoral head only
0084 Revision of total knee replacement, tibial insert (liner) only
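Claims-based cohorts like the one above are typically built by matching records against exactly such a code table. A minimal sketch (codes and descriptions transcribed from the appendix; the `describe` helper is hypothetical, not part of the study's pipeline):

```python
# ICD-9 diagnosis codes used to build the study cohort (from Appendix 1).
ICD9_DIAGNOSIS = {
    "996.66": "Infection due to internal joint prosthesis",
    "996.67": "Infection due to other internal orthopaedic device, implant and graft",
    "998.51": "Infected postoperative seroma",
    "998.59": "Other postoperative infection",
    "730.36": "Periostitis without mention of osteomyelitis, lower leg",
    "008.4": "Other specified bacteria",
    "790.7": "Bacteremia",
    "796.4": "Other abnormal clinical findings",
    "795.39": "Nonspecific positive culture findings",
}

# ICD-9 procedure codes identifying DAIR-type liner exchanges.
ICD9_PROCEDURE = {
    "0073": "Revision of total hip replacement, acetabular liner and/or femoral head only",
    "0084": "Revision of total knee replacement, tibial insert (liner) only",
}

def describe(code):
    """Return the description for a diagnosis or procedure code, or None."""
    return ICD9_DIAGNOSIS.get(code) or ICD9_PROCEDURE.get(code)

print(describe("790.7"))  # → Bacteremia
```

In a real claims query, a patient would enter the bacteremic DAIR cohort when a claim carries both a qualifying diagnosis code (e.g., 790.7) and a qualifying procedure code.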
"BOYLE",
"BRADBURY",
"IZA",
"AZZAM",
"KUO",
"KLEMENT",
"BEST",
"OLSON",
"BREDECHE",
"BULLER",
"GOLD",
"LEE",
"NA",
"MOHER",
"CALLAGHAN",
"KONIGSBERG",
"TRIANTAFYLLOPOULOS"
] |
Lysophosphatidylcholine-induced mitochondrial fission contributes to collagen production in human cardiac fibroblasts

Hui-Ching Tseng, Chih-Chung Lin, Li-Der Hsiao, and Chuen-Mao Yang

Abstract: Lysophosphatidylcholine (LPC) may accumulate in the heart to cause fibrotic events, which are mediated through fibroblast activation and collagen accumulation. Here, we evaluated the mechanisms underlying LPC-mediated collagen induction via mitochondrial events in human cardiac fibroblasts (HCFs), coupling application of the pharmacologic cyclooxygenase-2 (COX-2) inhibitor celecoxib with genetic mutations in FOXO1 in the fibrosis pathway. In HCFs, LPC caused prostaglandin E2 (PGE2)/PGE2 receptor 4 (EP4)-dependent collagen induction via activation of the transcriptional activity of forkhead box protein O1 (FoxO1) on COX-2 gene expression. These responses were mediated through LPC-induced generation of mitochondrial reactive oxygen species (mitoROS), as confirmed by ex vivo studies, which indicated that LPC increased COX-2 expression and oxidative stress. LPC-induced mitoROS mediated the activation of protein kinase C (PKC)α, which interacted with and phosphorylated dynamin-related protein 1 (Drp1) at Ser616, thereby increasing Drp1-mediated mitochondrial fission and mitochondrial depolarization. Furthermore, inhibition of PKCα and Drp1 reduced FoxO1 phosphorylation at Ser256 and nuclear accumulation, which suppressed COX-2/PGE2 expression and collagen production. Moreover, pretreatment with celecoxib or COX-2 siRNA suppressed both WT FoxO1- and Ser256-to-Asp256 (S256D) FoxO1-enhanced collagen induction, which was reversed by the addition of PGE2. Our results demonstrate that LPC-induced generation of mitoROS regulates PKCα-mediated, Drp1-dependent mitochondrial fission and COX-2 expression via a PKCα/Drp1/FoxO1 cascade, leading to PGE2/EP4-mediated collagen induction. These findings provide new insights into the role of LPC in the pathway of fibrotic injury in HCFs.

Cardiac fibrosis is characterized by activation of cardiac fibroblasts (CFs), persistence of differentiated myofibroblasts, and synthesis of excessive extracellular matrix (ECM) triggered by various factors ( 1 , 2 ). Lysophosphatidylcholine (LPC) is generated from cell membrane-derived phosphatidylcholine by phospholipase A 2 hydrolysis and accumulates in ischemic and injured myocardium, where it is associated with cardiomyocyte apoptosis in fibrotic hearts ( 3 , 4 ). LPC acts as a pro-inflammatory mediator and induces interleukin-6 (IL-6) expression ( 5 ), whereas IL-6 is involved in fibroblast activation ( 6 ). Therefore, we investigated whether LPC-regulated fibrotic events result from collagen production in human CFs (HCFs). Mitochondrial fission increases mitochondrial fragmentation, which reflects mitochondrial membrane depolarization ( 7 ) and excessive mitochondrial reactive oxygen species (mitoROS) production ( 8–10 ). Dynamin-related protein 1 (Drp1) serves as an initiator of mitochondrial fission when it is translocated from the cytosol to the mitochondrial outer membrane ( 10 , 11 ). Drp1 is regulated by posttranslational modifications, including phosphorylation and dephosphorylation ( 12 ). Phosphorylation of Drp1 at different amino acid residues plays opposite roles in mitochondrial fission: phosphorylation at Ser 616 and Ser 637 is responsible for the initiation and inhibition of fission, respectively ( 10 , 13 ). Moreover, overproduction of ROS also contributes to Drp1-mediated mitochondrial fission ( 13 ). LPC has been reported to facilitate mitoROS generation ( 14 , 15 ) and mitochondrial membrane depolarization ( 14 ). Nonetheless, whether LPC impairs mitochondrial function in HCFs, in particular via Drp1-mediated mitochondrial fission, remains unknown. Protein kinase Cs (PKCs) contain a catalytic domain ( 16 ) and require phospholipids, such as LPC, for PKCα activation ( 17 ).
However, the role of ROS in the interaction between PKCα and Drp1 has not been defined in HCFs. Therefore, we investigated the interaction between PKCα and Drp1 leading to Drp1-mediated mitochondrial fragmentation. Cyclooxygenase-2 (COX-2) is responsible for the synthesis of prostaglandins (PGs), including prostaglandin E 2 (PGE 2 ) ( 18 ). The COX-2/PGE 2 axis is involved in various pathophysiological processes, including inflammation, tumorigenesis, and proliferation ( 18–20 ). Upregulation of COX-2 in the myocardium is associated with heart failure ( 21 ). The levels of PGE 2 at tissue sites are accompanied by collagen deposition ( 22 , 23 ). Although the contribution of the LPC-induced COX-2/PG axis to collagen production is not well established, induced PGE 2 can auto-regulate PGE 2 receptors (EPs), including EP 1 –EP 4 . EP 2 and EP 3 have been shown to inhibit collagen synthesis ( 24–26 ). In contrast, activation of EP 4 can increase collagen synthesis ( 23 ). However, whether LPC-induced COX-2/PGE 2 -dependent IL-6 expression could promote collagen induction has not been completely elucidated in HCFs. Here, we demonstrated that LPC-induced mitoROS generation contributed to PGE 2 /EP 4 -dependent collagen induction. Mechanistically, mitoROS induced by LPC were found to regulate activation of PKCα, which interacted with Drp1, leading to mitochondrial fragmentation and depolarization. In addition, LPC-regulated COX-2 expression was mediated via the mitoROS/PKCα/Drp1 cascade in HCFs. Our study provides new insights into the relationship between mitochondrial events and COX-2-dependent collagen induction in HCFs exposed to LPC.
METHODS Reagents and antibodies Anti-phospho-forkhead box protein O1 (FoxO1) (Ser 256 ) (rabbit polyclonal antibody, Cat# 9461), anti-phospho-JNK1/2 (rabbit monoclonal antibody, Cat# 4668), anti-phospho-Drp1 (Ser 616 ) (rabbit polyclonal antibody, Cat# 3455), anti-phospho-Drp1 (Ser 637 ) (rabbit monoclonal antibody, Cat# 6319), anti-Drp1 (rabbit monoclonal antibody, Cat# 5391), and anti-FoxO1 (rabbit monoclonal antibody, Cat# 2880) antibodies were obtained from Cell Signaling (Danvers, MA). Anti-COX-2 (rabbit monoclonal antibody, Cat# ab62331), anti-phospho-PKCα (rabbit monoclonal antibody, Cat# ab180848), and anti-TOM20 (mouse monoclonal antibody, Cat# ab56783) antibodies were obtained from Abcam (Cambridge, UK). Anti-JNK1/2 (mouse monoclonal antibody, Cat# sc-137020), anti-PKCα (rabbit polyclonal antibody, Cat# sc-208), and anti-lamin A (rabbit polyclonal antibody, Cat# sc-20680) antibodies were obtained from Santa Cruz (Santa Cruz, CA). Anti-GAPDH (mouse monoclonal antibody, Cat# MCA-1D4) antibody was obtained from EnCor Biotechnology (Gainesville, FL). LPC (L-0906) was obtained from Sigma-Aldrich (St. Louis, MO). LPC was dissolved in 50% ethanol and filtered through a 0.22 μm syringe filter. A final concentration of 0.5% ethanol was used for the experiments. NS-398, Gö 6976, SP600125, and celecoxib were obtained from Biomol (Plymouth Meeting, PA). MitoQ and Gö 6983 were obtained from Cayman Chemicals (Ann Arbor, MI). AS1842856 was obtained from EMD Millipore (Billerica, MA). MitoTEMPO, dynasore, and mdivi-1 were obtained from Santa Cruz. The pharmacological inhibitors were dissolved in DMSO; a final concentration of 0.5% DMSO was used for all experiments. SDS-PAGE reagents were obtained from MDBio Inc. (Taipei, Taiwan).
Animal care and experimental procedures All animal care and experimental procedures complied with the UK Animals (Scientific Procedures) Act 1986, Directive 2010/63/EU of the European Parliament, and the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (National Institutes of Health Publication No. 85-23, revised 1996). Animal studies are reported in compliance with the ARRIVE guidelines. Male Institute of Cancer Research mice (25–30 g, 8 weeks old) were purchased from the National Laboratory Animal Centre (Taipei, Taiwan), randomly assigned to standard cages with five animals per cage, and kept in standard housing conditions with food and water ad libitum, according to the guidelines of the Animal Care Committee of Chang Gung University (Approval Document No. CGU 16-046) and the National Institutes of Health Guide for the Care and Use of Laboratory Animals . Institute of Cancer Research mice were anesthetized with one injection of Zoletil (40 mg/kg ip) and xylazine (10 mg/kg ip). After anesthesia was confirmed by the absence of a paw-withdrawal response to pinching with lined forceps, the chests were opened and the hearts were quickly removed for the experiments. The cardiac apexes of the mice were sliced into three segments and assigned randomly to three groups: vehicle [containing 0.5% (v/v) ethanol and 0.5% (v/v) DMSO], LPC treatment [containing 40 μM LPC with 0.5% (v/v) ethanol and 0.5% (v/v) DMSO], and MitoTEMPO plus LPC treatment [containing 10 μM MitoTEMPO plus 40 μM LPC with 0.5% (v/v) ethanol and 0.5% (v/v) DMSO]; five slices were chosen from each group. The slices of cardiac apexes were pretreated with the inhibitors for 1 h and then incubated with LPC for 6 h in Krebs solution (pH 7.4 at 37°C). Homogenates of the cardiac apexes were prepared in a lysis buffer and subjected to Western blot analysis and RT-quantitative (q)PCR, as previously described ( 5 ).
Measurement of GSH/GSSG ratio The ex vivo heart apexes, with or without respective inhibitor treatment for 1 h, were incubated with 40 μM LPC for 6 h. The homogenates were used to measure the ratio of GSH/GSSG as the marker of oxidative stress in the heart tissues, which was determined using a glutathione detection kit according to the manufacturer's instructions (Enzo Life Sciences, Farmingdale, NY). Cell cultures HCFs were purchased from ScienCell Research Laboratories (San Diego, CA) and maintained in DMEM/nutrient mixture F-12 (DMEM/F-12) medium supplemented with 10% FBS, as previously described ( 27 ). Preparation of samples and Western blot analysis Growth-arrested HCFs were incubated without or with different concentrations of LPC at 37°C for the indicated time intervals. When pharmacological inhibitors were used, they were added 1 h prior to the exposure to LPC. After incubation, the cells were rapidly washed with ice-cold PBS and lysed with a sample buffer containing 125 mM Tris-HCl, 1.25% SDS, 6.25% glycerol, 3.2% β-mercaptoethanol, and 7.5 nM bromophenol blue with pH 6.8. Samples were denatured, subjected to SDS-PAGE using a 10% (w/v) running gel, and transferred to nitrocellulose membrane (BioTrace™ NT membrane, Pall Life Sciences, Ann Arbor, MI). The membranes were immunoblotted with one of the primary antibodies (1:1,000 dilution) overnight at 4°C, followed by incubation with a peroxidase-conjugated secondary antibody at room temperature for 2 h. The immunoreactive bands were visualized by enhanced chemiluminescence reagent (Western Lighting Plus; Perkin Elmer, Waltham, MA). The images of the immunoblots were acquired using a UVP BioSpectrum 500 imaging system (Upland, CA), and densitometric analysis was conducted using UN-SCAN-IT gel software (Orem, UT). RT-PCR and qPCR analyses Total RNA was extracted with TRIzol (Sigma-Aldrich) according to the manufacturer's instructions. 
First-strand cDNA synthesis was performed with 5 μg of total RNA using Oligo(dT)15 as primer in a final volume of 20 μl [25 ng/μl Oligo(dT)15, 0.5 mM dNTPs, 10 mM DTT, 2 units/μl RNase inhibitor, and 10 units/μl Superscript II reverse transcriptase (Invitrogen, Carlsbad, CA)]. The synthesized cDNAs were used as templates for PCR using Q-Amp™ 2× Screening Fire Taq Master Mix (Bio-Genesis Technologies, Taipei, Taiwan) and primers for the target genes. qPCR was performed using Luna Universal probe qPCR Master Mix (M3004; New England BioLabs, Beverly, MA) on a StepOnePlus™ real-time PCR system (Applied Biosystems, Foster City, CA). The relative amount of the target gene was calculated as 2^−(Ct target gene − Ct GAPDH), where Ct is the threshold cycle. The sequences of the primers used are shown in supplemental Table S1 . Measurement of soluble collagen secretion HCFs were seeded in 6-well culture plates. After reaching 90% confluence, the cells were shifted into serum-free DMEM/F-12 overnight and treated with 40 μM LPC for the indicated time intervals in either the presence or the absence of pharmacological inhibitors. The media were collected and the levels of soluble collagens (type I–V collagen) were analyzed using a Sircol collagen assay kit (Biocolor, Northern Ireland, UK). Determination of mitoROS HCFs were seeded in 6-well culture plates with coverslips. After the cells reached 90% confluence, they were shifted to serum-free DMEM/F-12 and then incubated with LPC for the indicated time intervals, with or without pharmacological inhibitors added 1 h prior to LPC exposure. Then, cells were incubated in DMEM/F12 medium containing 5 μM MitoSOX Red at 37°C for 10 min. The fluorescence signals were recorded (excitation/emission: 510 nm/570 nm) using a fluorescence plate reader (Synergy HT1; BioTek, Winooski, VT).
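The relative-quantification formula in the qPCR paragraph above (2 raised to the negative ΔCt, with GAPDH as the reference gene) can be sketched as follows; the Ct values are invented for illustration only:

```python
def relative_expression(ct_target, ct_reference):
    """2^-(Ct target - Ct reference): relative amount of target mRNA
    normalized to a reference gene such as GAPDH."""
    return 2 ** -(ct_target - ct_reference)

# Hypothetical threshold cycles: the target crosses threshold 5 cycles
# after GAPDH, i.e., ~32-fold less abundant than the reference.
print(relative_expression(25.0, 20.0))  # → 0.03125

# A treated sample whose target Ct drops by 2 cycles shows a 4-fold increase:
fold_change = relative_expression(23.0, 20.0) / relative_expression(25.0, 20.0)
print(fold_change)  # → 4.0
```

Because each PCR cycle roughly doubles the amplicon, a 1-cycle shift in Ct corresponds to a 2-fold change in starting template, which is what the exponent encodes.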
To visualize mitoROS generation, the cells were washed thrice with media, and their fluorescence intensity was determined by fluorescence microscopy with a rhodamine filter (Axiovert 200M; Carl Zeiss, Thornwood, NY) and quantified using ImageJ software (1.41v; US National Institutes of Health). Determination of mitochondrial morphology HCFs were seeded in 6-well culture plates with coverslips. After the cells reached 90% confluence, they were transferred to serum-free DMEM/F-12 overnight. When pharmacological inhibitors were used, they were added 1 h prior to the exposure to LPC. After LPC treatment, the cells were washed with media, and then incubated in DMEM/F12 medium containing 500 nM MitoTracker Green (Invitrogen) for 10 min. To visualize mitochondrial morphology, the cells were washed thrice with media, and then observed using a fluorescence microscope with a FITC filter (Axiovert 200M; Carl Zeiss). Analysis of mitochondrial membrane potential HCFs were seeded in 6-well culture plates with coverslips or 24-well plates. After the cells reached 90% confluence, they were cultured in serum-free DMEM/F-12, and then stimulated with LPC for the indicated time intervals in either the presence or the absence of pharmacological inhibitors for 1 h prior to LPC exposure. To detect mitochondrial membrane potential (Δψm), cells were subjected to a mitochondria staining kit (CS0390; Sigma-Aldrich). Cells were cultured in 10 μg/ml 5,5′,6,6′-tetrachloro-1,1′,3,3′-tetraethylbenzimidazolocarbocyanine iodide (JC-1) in DMEM/F-12 containing 10% FBS at 37°C for 10 min. In healthy cells, JC-1 monomers accumulated as aggregates in the mitochondria due to existing mitochondrial polarization. These aggregates were visible on the red channel (rhodamine) when viewed with a fluorescence microscope. The JC-1 exists as a monomer and was visible on the green channel (FITC) when mitochondria were depolarized. 
The fluorescent images were captured with a fluorescence microscope (Axiovert 200M; Carl Zeiss) and quantified using ImageJ software (1.41v; US National Institutes of Health). Moreover, JC-1 fluorescence was also measured in a fluorescence plate reader (Synergy™ M H1 Hybrid Reader; BioTek). The fluorescence intensities of JC-1 monomer were measured by excitation at 490 nm and emission at 530 nm, and fluorescence intensities of JC-1 aggregates were measured by excitation at 520 nm and emission at 590 nm. Transient transfection with siRNAs HCFs were plated in 12-well plates, 6-well plates, or 10 cm dishes and after reaching about 90% confluence, were transferred to fresh serum-free DMEM/F-12 medium before transfection. The siRNAs of COX-2 (SASI_Hs01_00152843), EP 2 (SASI_Hs01_00158176), EP 3 (SASI_Hs02_00303570), EP 4 (SASI_Hs02_00105505), FoxO1 (SASI_Hs01_0076732), and scrambled siRNA were obtained from Sigma-Aldrich. The sequences of siRNAs are shown in supplemental Table S2 . Transient transfection of siRNA was conducted using GenMute™ siRNA transfection reagent according to the manufacturer's instructions (SignaGen Laboratories, Gaithersburg, MD). The siRNA (100 nM) was added to each well, and then the cells were incubated at 37°C for 6 h. The cells were transferred to DMEM/F-12 medium containing 10% FBS for an additional 6 h, washed twice with PBS, and then maintained in serum-free DMEM/F-12 medium for 24 h before treatment with LPC. Construction of FoxO1 plasmid DNA Ser 256 -to-Ala 256 (S256A) FoxO1 mutant and Ser 256 -to-Asp 256 (S256D) FoxO1 mutant were cloned into the EcoRV-HindIII site of the pCMV-Tag2B vector, as previously described ( 5 ). 
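JC-1 readings like those described above are commonly summarized as an aggregate-to-monomer (red/green) intensity ratio, which falls when mitochondria depolarize. A minimal sketch with made-up plate-reader intensities (not data from this study):

```python
def jc1_ratio(red_aggregate, green_monomer):
    """Aggregate/monomer fluorescence ratio; lower values indicate
    mitochondrial membrane depolarization."""
    return red_aggregate / green_monomer

# Hypothetical fluorescence intensities (arbitrary units).
control = jc1_ratio(red_aggregate=900.0, green_monomer=300.0)  # polarized mitochondria
treated = jc1_ratio(red_aggregate=400.0, green_monomer=500.0)  # depolarized (e.g., after LPC)
print(control, treated)  # → 3.0 0.8
assert treated < control  # depolarization lowers the red/green ratio
```

Expressing the result as a ratio cancels well-to-well differences in cell number and dye loading, which is why the ratio, rather than either channel alone, is the standard readout.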
Transient transfection of plasmid DNA HCFs were seeded in 6-well plates or 10 cm dishes and, after they reached 90% confluence, were transferred to serum-free DMEM/F-12 medium and transiently transfected with plasmid DNA using X-tremeGENE™ HP DNA transfection reagent (Roche Applied Science, Indianapolis, IN), as previously described ( 5 ). Measurement of COX-2 promoter activity For construction of the COX-2-luc plasmid, a human COX-2 promoter region spanning from −484 to +37 was cloned into the pGL3-basic vector, as previously described ( 5 ). HCFs were cotransfected with pGL3b-cox-2 and pCMV-β-gal plasmid (as an internal control). Promoter activity of COX-2 was determined using a luciferase assay HIT kit (BioThema, Handen, Sweden) and normalized to the β-Gal reporter gene determined using a Galacto-Light Plus™ system (Applied Biosystems, Bedford, MA). Immunofluorescence staining HCFs were seeded on coverslips in 6-well culture plates and, after they reached 90% confluence, were transferred to serum-free DMEM/F-12 medium overnight and then stimulated with 40 μM LPC for the indicated time intervals. After washing twice with ice-cold PBS, the cells were fixed with 4% (w/v) paraformaldehyde in PBS for 30 min and then permeabilized with 0.1% Triton X-100 in PBS for 15 min. Staining was performed by blocking with 5% BSA for 2 h at 37°C, followed by incubation with a primary anti-phospho-Drp1 S616 rabbit polyclonal antibody (1:100 dilution) and an anti-TOM20 mouse monoclonal antibody (1:1,000) overnight in PBS containing 1% BSA. The cells were washed thrice with PBS and incubated for 2 h with a FITC-conjugated goat anti-rabbit antibody and a rhodamine-conjugated goat anti-mouse antibody (1:100 dilution; Jackson ImmunoResearch, West Grove, PA) in PBS containing 1% BSA. Finally, cells were washed thrice with PBS and then mounted with aqueous mounting medium containing DAPI (H1200; Vector Laboratories, Burlingame, CA).
Images were captured with a fluorescence microscope (Axiovert 200M; Carl Zeiss). Chromatin immunoprecipitation assay To detect the association of transcription factors with the human COX-2 promoter, chromatin immunoprecipitation analysis was performed. Protein-DNA complexes were fixed with 1% formaldehyde in DMEM/F-12 medium, and the reaction was terminated with 125 mM glycine. The samples were lysed, immunoprecipitated, washed, and eluted, as previously described ( 5 ). The enriched specific DNA and input DNA (as an internal control) were subjected to PCR amplification. The primer sequences were: FoxO1 forward primer 5′-AAGACATCTGGCGGAAACC-3′ and reverse primer 5′-ACAATTGGTCGCTAACCGAG-3′, which were specifically designed from the COX-2 promoter region (–300 to +2). qPCR was performed using a Luna Universal qPCR master mix kit (M3003; New England BioLabs) on a StepOnePlus™ real-time PCR system (Applied Biosystems, Foster City, CA). Isolation of subcellular fractions HCFs were seeded in 10 cm dishes and, after they reached 90% confluence, were transferred to serum-free DMEM/F-12 medium for 24 h and then incubated with LPC for the indicated time intervals. The subcellular fractions were prepared using a NE-PER nuclear and cytoplasmic extraction kit according to the manufacturer's instructions (Thermo Scientific, Rockford, IL), as previously described ( 5 ). The protein concentrations of the samples were determined by BCA assay and Western blotting. Data and statistical analysis All data were analyzed using the GraphPad Prism 5 program (GraphPad, San Diego, CA). Quantitative data are expressed as the mean ± SEM of at least three individual experiments (n ≥ 3) and were analyzed with a one-way ANOVA followed by Tukey's post hoc test at a * P < 0.05 or # P < 0.01 level of significance. Error bars were omitted when they fell within the dimensions of the symbols.
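The one-way ANOVA described in the statistics paragraph above reduces to a ratio of between-group to within-group variance. A dependency-free sketch with invented triplicate measurements (a real analysis, like the authors', would follow this with Tukey's post hoc pairwise comparisons):

```python
def one_way_anova_f(*groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    k = len(groups)                                   # number of groups
    n = sum(len(g) for g in groups)                   # total observations
    grand = sum(sum(g) for g in groups) / n           # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical normalized collagen levels (n = 3 per group).
vehicle = [1.0, 1.1, 0.9]
lpc = [2.0, 2.2, 1.8]
lpc_plus_inhibitor = [1.2, 1.3, 1.1]
f, df_b, df_w = one_way_anova_f(vehicle, lpc, lpc_plus_inhibitor)
print(round(f, 1), df_b, df_w)  # → 42.0 2 6
```

A large F on (2, 6) degrees of freedom indicates that at least one group mean differs; the post hoc test then identifies which pairs differ while controlling the family-wise error rate.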
RESULTS LPC-induced mitoROS generation mediates COX-2-dependent collagen secretion LPC has been shown to stimulate mitoROS production as a result of proton leakage from the electron respiratory chain in various cell types ( 14 , 15 ). We examined the effect of LPC treatment on mitoROS production in HCFs and found that mitoROS were generated ( Fig. 1A , B ) and were attenuated by mitochondrial antioxidants (MitoTEMPO and MitoQ). mitoROS have been demonstrated to mediate COX-2 expression in various cell types ( 28–30 ). Thus, we determined whether mitoROS regulated the LPC-induced COX-2 expression in HCFs. Pretreatment with MitoTEMPO or MitoQ attenuated COX-2 protein induction in a concentration-dependent manner ( Fig. 1C, D ). In addition, scavenging of mitoROS by MitoTEMPO or MitoQ reduced the LPC-mediated COX-2 mRNA expression and promoter activity ( Fig. 1E ), suggesting that mitoROS are key players in the induction of COX-2 by LPC in HCFs. GSH protects against cellular ROS and forms GSSG when it is oxidized; thus, the GSH:GSSG ratio is used as a marker of oxidative stress ( 31 ). We further confirmed that LPC decreased the GSH:GSSG ratio ( Fig. 1F ) and increased COX-2 protein and mRNA expression ( Fig. 1G ) in ex vivo mouse heart apexes, effects that were reduced by pretreatment with MitoTEMPO ( Fig. 1F, G ). These results suggested that the LPC-induced increase in mitoROS production is associated with COX-2 expression. CFs act as central modulators that maintain heart structure and function. An imbalance between the synthesis and degradation of ECM components leads to cardiac fibrosis ( 32 ). Here, HCFs were used as a model that mimicked cardiac fibrosis in vitro. The levels of soluble collagen were markedly increased in the cultured media of HCFs treated with LPC ( Fig. 1H ). COX-2 metabolites, such as PGs, have been shown to mediate tissue fibrosis, including collagen deposition ( 22 ).
Therefore, we determined whether LPC-induced COX-2 expression contributed to the increase in extracellular collagen content in HCFs. Collagen levels were upregulated after LPC treatment, and this increase was significantly alleviated by pretreatment with the COX-2 inhibitors celecoxib and NS-398, as well as MitoTEMPO ( Fig. 1H ), suggesting that the levels of extracellular collagen are associated with mitoROS-mediated COX-2 expression in HCFs upon LPC exposure. Moreover, we also found that addition of PGE 2 induced collagen production from HCFs ( Fig. 1H ), suggesting that the COX-2/PGE 2 axis could be involved in LPC-induced collagen secretion. Drp1 is involved in LPC-induced mitochondrial fission, depolarization, and COX-2 expression mitoROS have been shown to promote mitochondrial fragmentation and promptly induce mitochondrial depolarization ( 7 , 13 ). LPC has been demonstrated to stimulate mitochondrial membrane depolarization ( 14 ). We therefore examined whether LPC impaired Δψm using JC-1 staining. The green intensity of JC-1 monomers was increased in HCFs stimulated by LPC, while the red intensity of JC-1 aggregates was decreased ( Fig. 2A , supplemental Fig. S1A ), indicating that the Δψm of HCFs was depolarized after exposure to LPC. Furthermore, LPC induced changes in mitochondrial morphology from tube-shaped to fragmented mitochondria ( Fig. 2B ). Treatment of HCFs with 0.5% ethanol (EtOH) alone had no effect on Δψm or mitochondrial morphology. Mitochondrial fragmentation has been shown to be regulated by cytosolic Drp1, which is recruited to the mitochondrial outer membrane via posttranslational modification ( 10 , 12 , 13 ). When we determined the levels of phosphorylated Drp1 upon LPC treatment in HCFs, we found that LPC significantly increased phosphorylation of Drp1 at Ser 616 but decreased its phosphorylation at Ser 637 ( Fig. 2C ).
Fluorescent images further supported that phosphorylation of Drp1 at Ser 616 was increased, which consequently led to its translocation into mitochondria ( Fig. 3A , supplemental Fig. S1B ). Together, these results indicated that LPC-induced mitochondrial fragmentation was mediated via Drp1-dependent mitochondrial fission in HCFs. Drp1 has been found to be involved in the expression of pro-inflammatory genes via ROS-mediated NF-κB activation ( 33 ). Our recent study also demonstrated that LPC-induced NF-κB activation regulates transcriptional activity of COX-2 in HCFs ( 5 ). Here, pretreatment with an inhibitor of Drp1 (mdivi-1) or of its GTPase activity (dynasore) attenuated LPC-induced COX-2 protein ( Fig. 3B, C ) and mRNA ( Fig. 3D ) expression and promoter activity ( Fig. 3D ). Furthermore, we corroborated that Drp1 was involved in COX-2-mediated collagen induction upon LPC treatment in HCFs, which was attenuated by either mdivi-1 or dynasore ( Fig. 3E ). PKCα is involved in LPC-induced COX-2 expression and collagen secretion JNK1/2 have been shown to be involved in LPC-induced COX-2 expression and to be activated by PKCα ( 34 ). Therefore, we determined the role of the interaction between PKCα and JNK1/2 in the LPC-induced COX-2 expression in HCFs. Pretreatment with Gö 6976 (an inhibitor of PKCα) or Gö 6983 (a pan-PKC inhibitor) attenuated COX-2 protein expression in a concentration-dependent manner ( Fig. 4A , B ), suggesting that PKCα is involved in the LPC-induced COX-2 expression in HCFs. Furthermore, LPC-induced COX-2 mRNA expression and promoter activity were attenuated by pretreatment with Gö 6976, Gö 6983, or SP600125 (an inhibitor of JNK1/2) ( Fig. 4C ). Moreover, pretreatment with either Gö 6976 or SP600125 significantly reduced collagen induction by LPC in HCFs ( Fig. 4D ), indicating that both PKCα and JNK1/2 participate in COX-2-mediated collagen induction by LPC in HCFs.
mitoROS initiate PKCα and Drp1 activation. Redox signaling pathways are regarded as activators for JNK1/2 ( 27 ) and PKC ( 35 ) in various cell types. We further investigated the roles of mitoROS, PKCα, and JNK1/2 in the LPC-induced COX-2 expression in HCFs. Pretreatment with mdivi-1, Gö 6976, or SP600125 had no effect on mitoROS production stimulated by LPC for 30 min; however, treatment with either mdivi-1 or Gö 6976 significantly attenuated the mitoROS generation induced by LPC for 60 min ( Fig. 4E ). In addition, Western blot analysis revealed that the levels of PKCα and JNK1/2 phosphorylation were attenuated by MitoTEMPO ( Fig. 4F , supplemental Fig. S2A ). Furthermore, LPC-stimulated PKCα phosphorylation was attenuated by Gö 6976, but not by SP600125 ( Fig. 4F ). Moreover, the levels of PKCα and JNK1/2 phosphorylation were reduced by transfection with siPKCα ( supplemental Fig. S3 ). Taken together, these results indicate that mitoROS signaling is an upstream component of the PKCα-JNK1/2 cascade in HCFs stimulated by LPC. mitoROS have been shown to activate Drp1 and, consequently, contribute to mitochondrial fission ( 13 ). In contrast, inhibition of mitochondrial fission could diminish mitoROS generation ( 8 , 9 ). Therefore, we determined the role of mitoROS in Drp1 activation. mitoROS were not significantly altered by pretreatment with mdivi-1 upon LPC stimulation for 30 min ( Fig. 4E ). Moreover, phosphorylation of Drp1 at Ser 616 was inhibited by pretreatment with MitoTEMPO and mdivi-1 upon LPC treatment for 60 min ( Fig. 4F , supplemental Fig. S2A ), suggesting that mitoROS are an upstream regulator of Drp1 activation. Pretreatment with Gö 6976 attenuated phosphorylation of Drp1 at Ser 616 , which was further confirmed by transfection with siPKCα ( Fig. 4F ; supplemental Figs. S2A, S3 ). Interestingly, mdivi-1 also reduced PKCα phosphorylation ( Fig. 4F , supplemental Fig. S2A ), indicating that PKCα and Drp1 reciprocally regulate each other upon LPC treatment for 60 min. 
In addition, both mdivi-1 and Gö 6976 reduced mitoROS generation upon LPC stimulation for 60 min ( Fig. 4E , supplemental Fig. S2B ). These data suggested that mitoROS may mediate Drp1 and PKCα activation at an early stage in response to LPC treatment, whereas blocking of mitochondrial fission could ameliorate mitoROS generation and, consequently, reduce positive feedback of Drp1 and PKCα activation in HCFs. mitoROS enhance the interaction between PKCα and Drp1 leading to mitochondrial fission stimulated by LPC. Drp1 is recognized as a substrate for PKCs in various cell types ( 11 , 36 ). Therefore, we investigated whether PKCα directly bound to Drp1, a downstream component of PKCα in HCFs. We first found that LPC enhanced the interaction between PKCα and Drp1 in the PKCα- or Drp1-immunoprecipitated complexes in a time-dependent manner ( Fig. 5A , B ). Furthermore, the levels of phosphorylated Drp1 Ser 616 were increased in the PKCα-immunoprecipitated complexes ( Fig. 5A ). We further investigated the involvement of mitoROS, PKCα, Drp1, and JNK1/2 in the interaction between PKCα and Drp1 stimulated by LPC. Pretreatment with MitoTEMPO or Gö 6976 reduced the levels of the interaction between PKCα and Drp1 in the PKCα- or Drp1-immunoprecipitated complexes ( Fig. 5C, D ). On the other hand, the levels of phosphorylated Drp1 Ser 616 in the Drp1-immunoprecipitated complexes were attenuated after pretreatment with MitoTEMPO, Gö 6976, or mdivi-1 ( Fig. 5D ). However, LPC-mediated responses were not changed after pretreatment with mdivi-1 in the PKCα-immunoprecipitated complexes ( Fig. 5C ). These results suggested that mitoROS-dependent PKCα activation enhances the phosphorylation at Drp1 Ser 616 in HCFs stimulated by LPC. Next, we investigated whether Drp1-mediated fission and mitochondrial depolarization were mediated via a mitoROS-PKCα-Drp1 pathway in HCFs challenged with LPC. 
Pretreatment with MitoTEMPO, mdivi-1, or Gö 6976, but not with SP600125, attenuated mitochondrial fragmentation ( Fig. 5E ) and restored the Δψm ( supplemental Fig. S4A–C ). Moreover, the phosphorylation of Drp1 Ser 616 was inhibited after pretreatment with MitoTEMPO, mdivi-1, or Gö 6976, but not with SP600125 ( Fig. 5F, G ). Together, these results suggested that LPC induced mitoROS-dependent PKCα-Drp1 activation and, consequently, contributed to mitochondrial depolarization that was associated with mitochondrial fission in HCFs. LPC-induced COX-2 expression is mediated via a mitoROS-PKCα-Drp1-JNK1/2-dependent FoxO1 pathway. LPC-stimulated FoxO1 Ser 256 phosphorylation results in FoxO1 nuclear accumulation and enhances its binding to the regions of the COX-2 promoter via ROS-mediated JNK1/2 activation ( 5 ). We confirmed that FoxO1 was involved in the LPC-induced COX-2 expression, as pretreatment with AS1842856 (an inhibitor of FoxO1) decreased COX-2 protein expression (in a concentration-dependent manner), mRNA expression, and promoter activity ( Fig. 6A , B ) in HCFs. Furthermore, LPC-stimulated phosphorylation of FoxO1 Ser 256 in the nuclear fraction was reduced in the presence of MitoTEMPO, Gö 6976, SP600125, mdivi-1, or AS1842856 ( Fig. 6C ). In addition, the transcriptional activity of p-FoxO1 Ser 256 was also attenuated by these inhibitors ( Fig. 6D ). These results suggested that LPC-enhanced FoxO1 transcriptional activity on COX-2 expression is mediated via a mitoROS-PKCα-Drp1-JNK1/2-dependent cascade in HCFs. The role of FoxO1 in LPC-induced COX-2-dependent collagen production. FoxO1 plays an important role in the process of fibrosis, including collagen expression ( 37 ). Thus, we determined whether FoxO1 was involved in the LPC-induced COX-2-dependent collagen induction in HCFs. The induction of collagen by LPC was reduced by pretreatment with AS1842856 ( Fig. 7A ). 
The involvement of FoxO1-dependent COX-2 activation in collagen production was further confirmed by transfection with either FoxO1 or COX-2 siRNA, which subsequently attenuated the LPC-induced collagen production ( Fig. 7B ). In addition, our data revealed that the decrease of collagen content was reversed by addition of PGE 2 ( Fig. 7B ). These results suggested that LPC-induced collagen production is mediated through the FoxO1-dependent COX-2/PGE 2 axis in HCFs. Our previous report revealed that overexpression of WT FoxO1 or S256D FoxO1 (a phospho-mimic mutant), but not S256A FoxO1 (a phospho-silencing mutant), enhances the binding activity of FoxO1 with the COX-2 promoter, leading to COX-2 expression ( 5 ). We further investigated whether FoxO1 was involved in COX-2-dependent collagen production through the overexpression of WT FoxO1, S256A FoxO1, or S256D FoxO1 in HCFs. We confirmed that overexpression of WT FoxO1 or S256D FoxO1 increased COX-2 expression in HCFs ( supplemental Fig. S5A ). Moreover, overexpression of either WT FoxO1 or S256D FoxO1 significantly enhanced collagen production ( Fig. 7C ), which was attenuated by transfection with COX-2 siRNA in HCFs ( Fig. 7D ) and also reduced after pretreatment with celecoxib ( supplemental Fig. S5B ). These results suggested that phosphorylation of FoxO1 at the Ser 256 residue contributes to collagen production in a COX-2-dependent manner in HCFs. LPC-induced COX-2-dependent collagen secretion is mediated via EP 4 receptors. The COX-2/PGE 2 axis has been shown to mediate biological events, including ECM synthesis, via prostanoid EP receptors ( 22–26 ). While investigating whether production of PGE 2 was involved in the LPC-triggered responses in HCFs, we found that LPC-induced PGE 2 production was inhibited by pretreatment with either MitoTEMPO, Gö 6976, mdivi-1, SP600125, AS1842856, dynasore, celecoxib, or NS-398 ( Fig. 8A ). Next, we characterized the expression of EP receptors in HCFs. 
The prostanoid EP receptors, including EP 2 –EP 4 , were expressed on HCFs ( supplemental Fig. S6A ). Further, we determined which of the EP receptors mediated the COX-2/PGE 2 axis on LPC-induced collagen production. Transfection with EP 2 –EP 4 siRNA knocked down the level of the respective EP mRNA ( supplemental Fig. S6B ). We found that knockdown of EP 4 , but not EP 2 or EP 3 , receptors significantly attenuated the LPC-induced collagen secretion ( Fig. 8B ). Taken together, our results suggested that LPC-induced collagen secretion via a COX-2/PGE 2 axis is predominantly mediated through EP 4 receptors in HCFs. DISCUSSION The excessive production and deposition of scar tissues are well-known characteristics of cardiac fibrosis. This pathological event is characterized by irreversible injury in cardiomyocytes, while noncardiomyocytes, such as CFs, are less susceptible to injury ( 32 ). Although CFs are quiescent, they are responsible for maintaining heart functions during injury and remodeling. We attempted to explore the mechanisms underlying LPC-induced COX-2/PGE 2 -dependent collagen expression in HCFs. First, we found that the increased COX-2 induction was accompanied by PGE 2 accumulation upon LPC treatment. Subsequently, the COX-2/PGE 2 axis contributed to extracellular collagen induction via EP 4 receptors. Second, LPC-induced COX-2 expression was mediated through mitoROS production. Furthermore, the increase in mitoROS altered the mitochondrial morphology via the translocation of Drp1 from the cytoplasm into the mitochondria, leading to mitochondrial fission and depolarization. Third, PKCα was an upstream regulator of Drp1 and interacted with Drp1 in response to LPC-mediated mitoROS generation in HCFs. Activation of signaling components involved in COX-2 expression by LPC was mediated through a mitoROS-dependent PKCα-Drp1-JNK1/2 cascade. 
Fourth, phosphorylation of FoxO1 at Ser 256 was regulated by a mitoROS-mediated PKCα-Drp1-JNK1/2 pathway and resulted in the accumulation of FoxO1 in the nucleus, which led to COX-2/PGE 2 expression. Fifth, inhibition of COX-2 expression by pharmacological inhibitors or silencing of COX-2 attenuated WT FoxO1 and S256D FoxO1-mediated collagen induction. We concluded that LPC promoted mitoROS generation and, consequently, activated a PKCα-Drp1-JNK1/2 cascade, which contributed to COX-2/PGE 2 /EP 4 -mediated collagen induction by enhancing FoxO1 transcriptional activity in HCFs. Moreover, mitoROS promoted PKCα-Drp1-mediated mitochondrial fission and depolarization in HCFs ( Fig. 7C ). LPC has been shown to accumulate in the ischemic heart ( 4 , 38 ), and the concentration of LPC in the ischemic heart is approximately 100 to 200 μM ( 39–41 ), which is dependent on the species and tissue compartments. Under in vivo conditions, LPC may be nonspecifically bound to plasma proteins in the myocardium, and the concentration of free LPC is about five to ten times lower than that of protein-bound LPC ( 42 , 43 ). In cell culture experiments, LPC affects cell viability at concentrations ranging from 10 to 80 μM in rat cardiomyocytes ( 3 , 44 ). Therefore, we determined cell viability in HCFs exposed to LPC: concentrations of LPC below 50 μM failed to affect cell viability (data not shown). Fibrotic events have been reported to be regulated by various processes that promote fibroblast activation and ECM expression ( 2 ). The LPC content is associated with fibrotic responses in the animal model of cardiac fibrosis ( 3 , 4 , 38 ); however, the role of LPC in the progression of fibrotic responses has not been well defined. Previous studies have demonstrated that LPC is involved in valvular sclerosis in aortic valve interstitial cells via expression and deposition of collagen I ( 45 ). 
Here, we suggested that LPC increases extracellular collagen levels, which potentially enhances collagen deposition in the heart. In fact, LPC is produced from the hydrolysis of phospholipids by PLA 2 . In in vivo studies, myocardial I/R injury increases cPLA 2 activity and cardiomyocyte apoptosis, which are reduced with group V sPLA 2 knockout ( 46 ). In addition, circulating sPLA 2 is associated with LPC generation in the animal model of cardiac fibrosis. Therefore, we speculated that expression of PLA 2 protein may be upregulated and hydrolyze phospholipids to LPC in the setting of injury. There are two possible mechanisms for LPC production: i ) LPC, generated by cPLA 2 , is released from apoptotic myocytes; or ii ) elevation of sPLA 2 hydrolytic activity increases LPC production. The involvement of COX-2/PGEs in different diseases via EP 1–4 receptors has been demonstrated in several cell culture and animal models. Treatment with both celecoxib and the EP 1 receptor antagonist, SC 19220, and knockdown of the EP 1 receptor have been demonstrated to attenuate the deposition of vascular collagen in hypertensive animals ( 22 ). COX-2 expression, associated with PGE 2 secretion, has been shown to promote profibrotic activities via the EP 4 receptor in human pancreatic stellate cells ( 23 ). In contrast, knockout of the EP 3 receptors promotes eccentric cardiac hypertrophy and fibrosis in mice ( 24 ). COX-2-mediated PGE 2 production suppresses the synthesis of collagen via the EP 2 receptors in pancreatic stellate cells ( 25 ) and acts as an autocrine inhibitor of fibrogenesis in lung fibroblasts in vitro ( 47 ). Activation of the EP 2 receptors by PGE 2 inhibits TGF-β1-induced collagen synthesis in dermal fibroblasts ( 26 ). Our previous study has also shown that LPC induces COX-2 and IL-6 expression, which is associated with cardiac fibrosis in HCFs ( 5 ). 
In this study, we further revealed that LPC-induced COX-2/PGE 2 expression was accompanied by secretion of collagen, which was predominantly mediated via the EP 4 receptors in HCFs. Combining an antioxidant with nitrate is a potentially beneficial therapy to reduce myocardial infarct size in heart patients ( 48 ). Pretreatment with MitoTEMPO could reduce mitoROS levels, prevent chronic heart remodeling, and preserve myocyte viability while reducing the fibrotic area in experimental animals ( 49 ). Our data confirmed that MitoTEMPO attenuated LPC-induced collagen production in HCFs. Previous mechanistic evidence demonstrated that pretreatment with MitoTEMPO restores vascular function via suppression of mitoROS-mediated COX-2 expression ( 29 ). Consistently, our data demonstrated that LPC-induced mitoROS initially promoted signaling cascades and, consequently, activated COX-2/EP 4 -dependent collagen induction in HCFs. Further, our ex vivo data suggested that scavenging of mitoROS reduced oxidative stress and COX-2 expression in HCFs. Abnormal activation of PKCα is associated with the occurrence of heart failure via excessive collagen synthesis ( 50 ). Our results indicated that inhibition of PKCα activation attenuated LPC-induced collagen production. In addition, LPC-activated PKCα was necessary for the COX-2 expression in HCFs, similar to other cell types ( 51 ). Our previous data demonstrated that PKCα is an upstream component that activates NADPH oxidase activity leading to ROS generation ( 27 ), whereas ROS are considered an activator of PKC ( 35 ). Our present data revealed that scavenging of mitoROS attenuated PKC activation and suppressed the interaction between PKCα and Drp1. Previous data have also demonstrated that PKCδ enhances its interaction with Drp1 and its translocation into mitochondria, causing mitochondrial fragmentation in neuronal cell lines under oxidative stress or in hypertensive rats ( 11 ). 
On the other hand, sevoflurane-induced cardioprotection depends on PKCα activation via production of ROS ( 52 ). Although we have not yet demonstrated that activation of PKCα may enhance mitochondrial translocation of Drp1, our present data revealed that activation of PKCα enhanced its interaction with Drp1, which, consequently, led to mitochondrial fission and depolarization. Mitochondrial membrane dynamics are regulated by Drp1, which is phosphorylated at different sites and, accordingly, exerts opposite effects in LPC-mediated responses. Phosphorylation of Drp1 at Ser 616 and dephosphorylation of Drp1 at Ser 637 enhance its activity and favor fission, resulting in mitochondrial dysfunction ( 13 , 53 ). Drp1, which is phosphorylated at Ser 616 , is a substrate of CaMKII or PKCδ ( 11 , 54 ). The Ser 637 of Drp1 is phosphorylated by PKA, which suppresses its GTPase activity and mitochondrial translocation ( 55 , 56 ). In addition, dephosphorylation of Drp1 at Ser 637 promotes mitochondrial fragmentation ( 57 ). Our data demonstrated that PKCα interacted with Drp1 and phosphorylated Drp1 at Ser 616 . In addition, pretreatment with an inhibitor of CaMKII failed to inhibit Drp1 phosphorylation at Ser 616 (data not shown). Mitochondrial fission has been reported to cause mitoROS generation, while suppression of Drp1 activation by mdivi-1 attenuates mitochondrial fission and ROS production, which is a promising target for treatment of cardiomyopathy and IR injury ( 9 ). Recent studies have reported that mitoROS initiate signaling cascades to mediate phosphorylation of Drp1 at Ser 616 ( 13 ). Our data revealed that scavenging of mitoROS or inhibition of PKCα attenuated the formation of a PKCα-Drp1 complex and the subsequent mitochondrial fission. Surprisingly, pretreatment with Gö 6976 or mdivi-1 could reduce mitoROS generation upon LPC stimulation for 60 min, whereas pretreatment with these inhibitors had no effect on mitoROS generation upon LPC stimulation for 30 min. 
Therefore, we speculated that LPC stimulated mitoROS generation, which regulated the formation of a PKCα-Drp1 complex and, consequently, contributed to mitochondrial fission. Drp1 mediates the changes in mitochondrial morphology and regulates gene expression. Previously, suppression of Drp1 activation by mdivi-1 reduced NF-κB-dependent gene expression by inhibiting ROS generation ( 33 ). Our previous data demonstrated that NADPH oxidase-derived ROS mediated COX-2 expression by regulating JNK1/2-FoxO1 and -NF-κB transcriptional activity ( 5 ). We further demonstrated that inhibition of Drp1 activation attenuated nuclear levels of FoxO1, phosphorylated at Ser 256 , and FoxO1-mediated COX-2 expression by downregulating ROS-mediated JNK1/2 activation. Furthermore, pretreatment with mdivi-1 reduced collagen expression as observed in ventricular fibroblasts ( 58 ). Our data suggested that Drp1 is associated with COX-2/PGE 2 -dependent collagen production in HCFs. It has been demonstrated that knockout of Drp1 in a mouse model suppressed JNK1/2 phosphorylation in drug-induced liver toxicity ( 59 ). Activation of JNK1/2 is considered to be regulated by mitoROS ( 59 ). Here, we demonstrated that pretreatment with a scavenger of mitoROS or an inhibitor of Drp1 phosphorylation at Ser 616 attenuated JNK1/2 phosphorylation, suggesting that Drp1 could indirectly regulate JNK1/2 activation via ROS signaling of mitochondria. In addition, PKCα signaling has been observed to cross-talk with the JNK 1/2 pathway via activation of the MEKK-SEK-MKK cascade ( 60 , 61 ). Our data demonstrated that both Gö 6976 and mdivi-1 attenuated JNK1/2 phosphorylation. These observations suggested that PKCα and Drp1 are the regulators upstream of JNK1/2 activation. In particular, we have reported the involvement of the ROS-JNK1/2-FoxO1 axis in LPC-induced COX-2 expression ( 5 ). Moreover, LPC-induced collagen production via FoxO regulation was demonstrated in HCFs. 
Therefore, we further confirmed that the increase of collagen production induced by LPC was inhibited by the respective pharmacological inhibitors. The transcriptional activity of FoxO1 is related to its phosphorylation status and nuclear localization. FoxO1 accumulates in the nuclear fraction and contributes to differentiation and pro-collagen expression under TGF-β stimulation in CFs ( 37 ). Our data indicated that inhibition of FoxO1 attenuated extracellular collagen induction. Although TGF-β has been shown to inhibit FoxO1 phosphorylation, leading to its nuclear retention, our previous data demonstrated that accumulation of nuclear FoxO1 phosphorylated at Ser 256 increased COX-2 expression via ROS induced by LPC ( 5 ). Furthermore, we demonstrated that induction of the COX-2/PGE 2 /EP 4 axis by WT FoxO1 and S256D FoxO1 was necessary to induce collagen production, which was attenuated by silencing COX-2 or by celecoxib treatment in HCFs. There are several limitations of this study. First, although the roles of these signaling molecules in the present study were examined using different pharmacological inhibitors, these inhibitors may not be specific for these components. Second, the mechanisms of ROS generation from mitochondria are not clear in the present study. Third, although we investigated cell models of HCFs derived from the human heart in the setting of injury ex vivo, the present study cannot completely dissect the effects of LPC in vivo. Therefore, further studies are necessary to substantiate these findings and investigate the potential contribution of different cells in the heart, including cardiomyocytes and immune cells. 
In conclusion, our findings demonstrated that LPC-induced mitoROS mediate a PKCα-Drp1-FoxO1 cascade and mitochondrial fission, which contribute to COX-2/PGE 2 -dependent collagen production via regulation of the EP 4 receptor in HCFs, suggesting that targeting EP 4 receptors, scavenging mitoROS, and attenuating mitochondrial fission are potential therapeutic strategies for preventing cardiac fibrosis. Supplementary Material | [
"GONZALEZ",
"LEASK",
"HUANG",
"NAM",
"TSENG",
"WANG",
"TWIG",
"HUANG",
"SHARP",
"ZHOU",
"QI",
"FLIPPO",
"TSUSHIMA",
"JIANG",
"LI",
"STEINBERG",
"MOTLEY",
"GREENHOUGH",
"CHIEN",
"HSU",
"WONG",
"AVENDANO",
"CHARO",
"LIU",
"POMIANOWSKA",
"ZHAO",
"LIN",
"KIRITOSHI",... |
a1be826fe41c41cc814a1da386ffef5a_Duration of sustained remission after treatment by induction with exclusive enteral nutrition and az_10.1016_j.anpede.2020.03.017.xml | Duration of sustained remission after treatment by induction with exclusive enteral nutrition and azathioprine in patients with Crohn’s disease | [
"Pascual Pérez, Alicia Isabel",
"Pujol Muncunill, Gemma",
"Domínguez Sánchez, Patricia",
"Feo Ortega, Sara",
"Martín de Carpi, Javier"
] | null | Dear Editor: Numerous studies have demonstrated the efficacy of exclusive enteral nutrition (EEN) in inducing remission in paediatric-onset Crohn’s disease (POCD). The guidelines of the European Crohn's and Colitis Organization and the European Society of Pediatric Gastroenterology, Hepatology and Nutrition (ECCO-ESPGHAN) recommend the use of EEN combined with early initiation of immunosuppression in patients with mild to moderate forms of disease. 1–4 However, there are no data on the long-term efficacy of this strategy in preventing or postponing the use of biological therapy. 5 Thus, to determine the proportion of our patients with POCD who required initiation of anti-tumour necrosis factor (TNF) therapy after achieving clinical remission with the aforementioned approach, we conducted the observational retrospective study presented in this article. We reviewed the medical records of patients with POCD who were diagnosed in our unit between 2003 and 2017 and achieved clinical remission at onset with a combination of EEN and thiopurine drug therapy (azathioprine, mercaptopurine). We collected demographic, clinical and outcome data for these patients until February 2019 or their transition to the adult care unit. We included 91 patients (68.1% male; mean age at onset, 12.29 years; median age, 13 years; age range, 8 months–17 years) ( Fig. 1 ). The mean duration of follow-up of patients in our unit was 60.45 months (range, 8–165 months). During this time, 66 of the 91 patients (72.53%) relapsed. The strategy used to manage the relapse in 17 patients (25.76%) was a second cycle of EEN, which was effective in 7 of them (41.18%). The mean time elapsed from diagnosis to the second cycle of EEN was 13.76 months (maximum, 110 months). 
During the follow-up, 65.6% of the patients required escalation of treatment to anti-TNF therapy due to failure of maintenance with thiopurines, with a mean time elapsed from onset to initiation of anti-TNF therapy of 15.29 months (median, 9 months). Of all patients that required biological therapy, 72.9% started with adalimumab (ADA). After a period of combined treatment (anti-TNF and thiopurines), the immunosuppressive treatment was discontinued in 42.2% of patients once they exhibited sustained clinical and endoscopic remission, thus switching to anti-TNF as monotherapy. Despite the limitations intrinsic to the retrospective design of the study, the results obtained in a large sample of patients ( n = 91 patients) show that although EEN is an effective approach for induction of remission in POCD, a sufficiently successful approach has yet to be established for subsequent maintenance to prevent the need for biological therapy in the medium term in a significant proportion of patients. Such an approach should involve more strict criteria for the definition of remission and a thorough evaluation of the latter so as to enable the establishment of more appropriate maintenance therapy. Funding This study did not receive any funding. Conflicts of interest The authors have no conflicts of interest to declare. | [
"NAVASLOPEZ",
"GROVER",
"SWAMINATH",
"RUEMMELE"
] |
28abca14e5424eec974c9c854e29d07e_Energy and exergy studies of a Sulphur recovery unit in normal and optimized cases A real starting u_10.1016_j.ecmx.2022.100241.xml | Energy and exergy studies of a Sulphur recovery unit in normal and optimized cases: A real starting up plant | [
"Ibrahim, Ahmed Y.",
"Ashour, Fatma H.",
"Gadalla, Mamdouh A."
] | Sulphur recovery units produce Sulphur from H2S gas to prevent acidic gas emissions that would violate environmental regulations. A refinery plant in the Middle East that started its official production in 2020 has an SRU plant to fulfil this role. The plant was simulated and optimized using SULSIM, a special package for Sulphur in HYSYS. While energy is transformed into other forms, exergy is destroyed in an irreversible process. A complete exergy study was conducted on the different plant equipment to calculate the exergy destruction, the exergy efficiency and the percentage share of destruction. The study included the normal case, the optimized case and a comparison between both. The thermal reactor has the highest destruction rate, 39551.79 kW, with a share of 56.481% of total destruction, followed by catalytic reactor 1 with a destruction rate of 14000.22 kW and a 19.993% share. In general, the reactors accounted for 88.88% of total destruction, and together with the waste heat boiler for 97.07%. The destruction distribution was approximately the same in optimized conditions. The total destruction rate of the equipment was 70026.31 kW and 70301.72 kW in normal and optimized conditions, respectively. Although the optimized conditions were better from an energy point of view, their exergy destruction rate exceeded that of the normal conditions by 275.41 kW, and some equipment destruction rates exceeded their values in normal conditions. From the energy point of view, steam consumption was reduced by 1.04 ton/h in the optimized conditions, saving 69247.03 $/year. The destruction value calculated by SULSIM for “Amine & Regenerator” is −115.56 kW, which is incorrect; the correct value, calculated using standard HYSYS V.11, is 2404.85 kW. 
| Nomenclature e Specific exergy E Exergy rate E ∙ k Kinetic energy E ∙ p Potential energy ε Exergy efficiency g Gravitational acceleration constant H Enthalpy H ∙ Enthalpy rate H ̂ Specific enthalpy m ∙ Mass rate Q ∙ Heat duty S Entropy T Temperature R Gas constant v Velocity W ∙ s Shaft work Abbreviations AAG Amine Acid Gas ADA Air Demand Analyzer CFD Computational Fluid Dynamics CR Catalytic Reactor DA Degassed Air DEA Diethanolamine E Exchanger FG Fuel Gas IA Instrument Air Inc Incinerator LA Lean Amine Liq Liquid MDEA Methyl Diethanolamine O Outlet QT Quench Tower RA Rich Amine RR Reduction Reactor S Sulphur SC Sulphur Condenser SRE Sulfur Recovery Efficiency SRU Sulphur Recovery Unit SWS Sour Water Stripper SWSAG Sour Water Stripped Acid Gas TG Tail Gas TGT Tail Gas Treatment Section TGTU Tail Gas Treatment WHB Waste Heat Boiler Wt. Weight Greek letter Δ The difference between inlet and outlet Subscripts che Chemical e Exit i Inlet, specie in a mixture k Kinetic p Potential s Shaft 0 Standard conditions Superscripts ch Chemical ph Physical 0 Standard conditions Introduction Hydrogen Sulfide is a hazardous pollutant due to its toxic, corrosive and acidic nature. Sulphur recovery unit plants are used to produce elemental Sulphur from H 2 S [1-3] and to avoid acidic emissions that violate environmental regulations [4-6] . Sulphur recovery units process The Claus process is an old method of producing Sulphur from H 2 S. The modified Claus process is currently the most widely used; the process divides the H 2 S feed into two parts: one-third of the feed reacts with oxygen to produce SO 2 , and the remaining two-thirds react with the produced SO 2 to form Sulphur [7-11] . The modified Claus process is divided into two parts: the thermal section and the catalytic section. A Claus reaction furnace and a waste heat boiler are the main components of the thermal section. One-third of the H 2 S in the acid gas feed is converted to SO 2 in the reaction furnace (eq. 
(1)), and the flue gas from the furnace, which contains the unwanted byproducts COS and CS 2 , is cooled in the WHB. The recovered heat is used to generate steam. In most cases, the thermal section converts 55–65% of the Sulphur. The catalytic section can comprise two or three stages, each consisting of a reheater, a catalytic reactor and a Sulphur condenser. The reheater is used to raise the temperature of the thermal reactor's gas outlet to that of the appropriate reactions. The Claus reaction, which produces Sulphur, is carried out in a catalytic reactor (eq. (2)) in the first catalytic stage, and the by-products COS and CS 2 are converted to H 2 S via hydrolysis reactions (eq. (3) and eq. (4)). The reactor's exhaust gas is cooled in the Sulphur condenser, where the Sulphur is condensed into a liquid phase. The overhead gas from the condenser is fed into stage 2 of the catalytic section, where Sulphur is produced again (eq. (2)). The maximum Sulphur recovery efficiency obtained from the Claus section with two stages is 93–95 percent, and with three stages 96–98 percent. In recent years, the Sulphur recovery efficiency (SRE) required to meet environmental regulations has been 99.9%, so a tail gas treatment unit (TGTU) has been added to the Claus section to perform this mandatory role. All reactions require the use of combustion air [5,10–13] . If ammonia is one of the components feeding the SRU, it will be destroyed in the reaction furnace (eq. (5)) [14] . (1) H 2 S + 1.5O 2 → SO 2 + H 2 O (2) 2H 2 S + SO 2 → 3/8S 8 + 2 H 2 O (3) CS 2 + 2H 2 O ⇌ CO 2 + 2H 2 S (4) COS + H 2 O ⇌ CO 2 + H 2 S (5) 2NH 3 + 1.5 O 2 → N 2 + 3 H 2 O Energy and exergy concepts Process modelling and simulations are important tools for studying the reliability and performance of SRU plants. These models are broadly classified as equilibrium and kinetic models. 
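The one-third/two-thirds split above fixes the stoichiometric combustion-air demand of the modified Claus process, which can be illustrated with a short sketch. The function name, the 100 kmol/h feed rate, and the 21 mol% O2 air composition are illustrative assumptions, not values from the plant:

```python
# Hedged sketch: stoichiometric O2/air demand for the modified Claus process.
# Assumes dry air at 21 mol% O2 (an assumption, not a value from the source).
# One-third of the H2S feed is oxidized per eq. (1): H2S + 1.5 O2 -> SO2 + H2O,
# so the net demand is (1/3) * 1.5 = 0.5 mol O2 per mol of H2S fed.

def claus_air_demand(h2s_kmol_h: float, o2_mole_frac: float = 0.21):
    """Return (O2 demand, air demand) in kmol/h for a given H2S feed rate."""
    o2 = (h2s_kmol_h / 3.0) * 1.5   # eq. (1) applied to one-third of the feed
    air = o2 / o2_mole_frac         # scale O2 up to total air
    return o2, air

o2, air = claus_air_demand(100.0)   # illustrative 100 kmol/h H2S feed
print(f"O2: {o2:.1f} kmol/h, air: {air:.1f} kmol/h")
```

This back-of-the-envelope figure ignores ammonia destruction (eq. (5)) and hydrocarbon co-combustion, both of which add to the real air demand controlled by the ADA.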
Computational fluid dynamics (CFD) simulations were later used in these studies. Exergy studies of SRU plants have recently piqued the interest of researchers. In some cases, equipment with high energy efficiency also has high exergy destruction [15-18] . Optimal energy consumption is regarded as one of the most important factors in achieving a high level of community development. As a result, optimising energy and preventing losses is critical in a variety of industries. High energy consumption and losses have an impact on system costs. Avoiding energy losses is regarded as the primary design criterion in thermal systems [17] . Ibrahim et al. conducted an energy optimization analysis on the refinery plant to save steam from the units that feed the SRU plant. The study saved $1,537,206.38 per year [19] . While energy can be transformed into various forms, exergy is destroyed in an irreversible process [8] . The exergy of a stream is defined as the maximum amount of work that can be obtained when the stream is brought from its initial state to its dead state using only processes that interact with the environment. Exergy is classified into two types: chemical exergy and physical exergy. The magnitude of chemical exergy is greater than that of physical exergy [20] . One goal of exergy studies is to calculate the destroyed exergies (the lost work potential in the process) in the various process equipment. Ibrahim et al. conducted several exergy studies in various units to discover similar relationships between equipment. Review of the literature on the Sulphur recovery units and their feeding units Ibrahim et al. investigated the exergy of two Amine regeneration units feeding the SRU plant. As a result of system losses during startup, the DEA concentration fell from 25% to 20%. Exergy studies on both concentrations were performed to determine the changes that occurred in the various exergy calculations [21,22] . 
They also carried out an exergy study for an MDEA scrubber unit at the SRU plant. The MDEA concentration dropped from 45 percent to 22 percent as a result of system losses during startup. Exergy studies were performed at both concentrations to determine the changes in the various exergy calculations [23]. Ibrahim et al. investigated the exergy of two Sour Water Stripping (SWS) units feeding the SRU plant and compared the two units [24,25]. With a value greater than 78 percent, the columns (strippers, regenerators, and absorbers) have the highest percentage share of destruction in their units. With a value greater than 7%, air coolers have the second-highest share. The share of destruction of the exchangers in their units ranges from 3.11 to 10.14 percent [21-25]. Only a few studies have focused on the exergy of the SRU itself. Rostami et al. conducted an exergy study on five different SRU configurations, comparing the five selected types in terms of overall exergy without regard for individual equipment [8]. Hashemi et al. conducted a comprehensive exergy study on an SRU plant, taking individual equipment into account, and identified the equipment with high destruction rates [17]. Zarei conducted an exergy study distinguishing between different process sections without considering each piece of equipment [26].

Introduction for the current study

A refinery plant in the Middle East started its official production in 2020. The H2S produced in the different refinery units is recovered as Sulphur product in an SRU plant. The SRU plant faced real problems during the start-up phase that threatened to stop the overall refinery, because the SRU is the last unit receiving acid gas. Fig. 1 represents a block diagram of the plant. In this article, the authors conduct a complete exergy study on all equipment under normal and optimized conditions and compare the two cases.
The SRU plant was simulated with HYSYS V.11 using a special package for Sulphur named SULSIM. The simulated data were validated by comparison with plant data, which yielded accurate results. The simulation outputs (mass flow rates, molar flow rates, and mass exergy for each stream) are then used in the exergy calculations, together with the equipment power calculated by HYSYS. The required HYSYS results feed a series of exergy equations embedded in Excel to perform the exergy calculations. For the normal and optimized cases, the exergy destruction rate, exergy efficiency, and percentage share of destruction were calculated, and the two cases were compared. Some unexpected values appeared in the normal, optimized, and comparison cases; all are explained by the authors. A study is also conducted from an energy standpoint to compute the steam saved in the three exchangers under optimized conditions as well as the total saved cost ($/year).

Materials and methods

The materials and methods describe the HYSYS-SULSIM simulation used in the studies, the equations used for the exergy calculations and energy optimization, and the objective function, manipulated variables, and constraints required in the studies.

Simulation step and process description

The entire plant is simulated using the SULSIM package in HYSYS V.11. For safety reasons, the liquid Sulphur product is degassed by air in a degasser to reduce any traces of H2S to a maximum of 10 ppm by weight. The sweet gas discharged from the TGTU is routed to the incinerator, where any Sulphur compounds are converted to SO2. The incinerator's flue gas is discharged to the atmosphere via a stack. Table 1 displays the plant feed characteristics.
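The post-processing pipeline described above (simulator stream results feeding per-equipment exergy balances, there implemented in Excel) can be sketched in a few lines of Python. The file name and column names ("stream", "exergy_kW") are hypothetical placeholders, not the plant's actual export format.

```python
import csv

def load_stream_exergies(path):
    """Map stream name -> total exergy (kW) from an exported CSV file.
    Assumes hypothetical columns 'stream' and 'exergy_kW'."""
    with open(path, newline="") as f:
        return {row["stream"]: float(row["exergy_kW"])
                for row in csv.DictReader(f)}

def destruction(exergy, inlets, outlets, power_kW=0.0):
    """E_dest = sum(inlet exergies) + shaft power - sum(outlet exergies)."""
    return (sum(exergy[s] for s in inlets) + power_kW
            - sum(exergy[s] for s in outlets))

# Toy numbers only: a unit with 120 kW of exergy entering and 95 kW
# leaving destroys 25 kW.
example = {"feed": 100.0, "air": 20.0, "product": 95.0}
print(destruction(example, ["feed", "air"], ["product"]))
```

The same balance shape applies to every piece of equipment; only the stream lists and the power term change.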
This package incorporates empirical correlations fitted to plant data and provides the ability to use rigorous simulation to accurately model the various SRU equipment and other Sulphur recovery plant operations. It enables meeting stringent environmental regulations and standards for flare gases, particularly the H2S and SO2 emissions limits. Researchers can also optimize the design and operation of acid gas cleaning and Sulphur recovery systems. From an operational point of view, the plant can run reliably, avoiding shutdown by satisfying air permits during operational changes. Moreover, it is easy to select the correct SRU equipment configuration to meet plant demands. The SULSIM package is also used to minimize capital and operating expenses for SRUs to maximize plant profitability. The most important simulation step is a suitable selection for each piece of equipment. Because SRU plants around the world have different arrangements, the constraints of the equipment and reactors depend on the accuracy of the reactor type selected from the SULSIM package; the reactor model solves correctly once the right type is chosen. Because it is designed specifically for SRU plants, the SULSIM package does not operate in the same way as standard HYSYS. The plant is made up of a thermal section that performs 70% of the Sulphur conversion and a two-stage catalytic section that completes the remaining 30%. The first Sulphur condenser condenses the Sulphur produced in the thermal reactor, while the second and third condensers condense the Sulphur produced in the catalytic section. Following the catalytic section, a TGTU converts any Sulphur byproducts (COS and CS2) in the tail gas exiting the catalytic section to H2S, which is recycled as feed to the thermal section.
Simulation of the Claus section

The reaction furnace is chosen as (reaction furnace – two-chamber), the WHB as (waste heat exchanger – single pass), and the suitable choice for catalytic reactors 1 and 2 is (catalytic converter). The SULSIM package's name for the Sulphur condenser is (Sulphur condenser). An ADA (air demand analyzer) is a critical tool for adjusting the (H2S/SO2) ratio in the catalytic section's tail gas outlet to the optimum value of 2. This value ensures maximum Sulphur conversion in catalytic reactors 1 and 2 via reaction (2).

Simulation of the tail gas treatment section

Using hydrogen, the tail gas treatment converts any Sulphur compounds to H2S in the reduction reactor. The tail gas is then cooled in the quench tower before being routed through the Amine scrubber unit. The tail gas is directed to the bottom of an absorber, and an Amine solution is fed from the top to absorb H2S from the tail gas. The sweet gas leaving the top is directed to the incinerator. The rich Amine loaded with H2S is then stripped in a regenerator, and the lean Amine from the bottom of the regenerator is recycled to sweeten the gas in the absorber. Table 2 displays the TGTU equipment chosen in HYSYS. Diethanolamine (DEA) and methyl diethanolamine (MDEA) are commonly used for gas sweetening. MDEA is used in solutions containing both H2S and CO2 because it is more selective toward H2S than CO2; in our case, MDEA absorbs the H2S while the CO2 leaves with the sweet gas directed to the incinerator [28-33].

Simulation of the degassing section and the incinerator

The degassing section is selected as (Sulphur degasser) and the incinerator has the same name (incinerator) in SULSIM. In the degasser, the outlet liquid H2S concentration specification is 10 ppm-wt.
The target exit temperature of the incinerator is 652 °C.

Validation step

The simulation results are compared with plant data, and the two agree closely. The selected validation streams are the liquid Sulphur product and the flue gas from the stack, because the main target of the SRU unit is to prevent any acidic emissions (H2S, SO2, Sx, CS2, and COS) through the flue gas from the stack; acid gases may form acid rain. The produced Sulphur is the second main target; the Sulphur product shall contain a maximum of 10 ppm-wt H2S.

Exergy calculations

The physical and chemical exergy are calculated from the following equations:

(1) Physical exergy = (H - H0) - T0 (S - S0)
(2) Chemical exergy = Σ xi ex0che,i + R T0 Σ xi ln xi
(3) Destruction exergy = Σ mi ei - Σ me ee

where xi is the mole fraction of species i in the mixture and ex0che is the standard chemical exergy, found directly from tables or calculated through standard methods [20,25]. The terms H, S, T, R, and the subscript 0 stand for enthalpy, entropy, temperature, the universal gas constant, and the standard (dead-state) condition, respectively. We did not ignore chemical exergy, because its value is comparable to, and often higher than, the physical exergy; therefore, the sum of physical and chemical exergy is used as the total exergy.

(4) Eph = m · eph
(5) Ech = m · ech

The exergy of a material stream is the sum of its physical and chemical exergy values:

(6) E = Eph + Ech

The physical mass exergy is calculated by the HYSYS simulation, while the chemical exergy is calculated in Excel. The exergy efficiency of a system component is defined as the ratio of the outlet exergy to the inlet exergy of that component, and the exergy efficiency of the whole system represents the percentage of inlet exergy that is converted to outlet exergy [17-27].
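A minimal numeric sketch of Eqs. (1)–(2) follows. The standard chemical exergies used in the example are assumed illustrative values in kJ/mol; real calculations would use tabulated (e.g. Szargut-style) data.

```python
import math

R = 8.314e-3   # universal gas constant, kJ/(mol*K)
T0 = 298.15    # dead-state temperature, K

def physical_exergy(h, h0, s, s0):
    """e_ph = (H - H0) - T0*(S - S0), per Eq. (1)."""
    return (h - h0) - T0 * (s - s0)

def chemical_exergy(x, ex0):
    """e_ch = sum(x_i * ex0_i) + R*T0*sum(x_i * ln x_i), per Eq. (2).
    The mixing term is always <= 0."""
    pure = sum(xi * e for xi, e in zip(x, ex0))
    mixing = R * T0 * sum(xi * math.log(xi) for xi in x if xi > 0.0)
    return pure + mixing

# Equimolar mixture of two components with low standard chemical exergies
# (assumed values, kJ/mol) -- an air-like stream, matching the later
# observation that such streams carry little chemical exergy.
print(round(chemical_exergy([0.5, 0.5], [0.72, 3.97]), 3))
```

Note that the mixing term is negative, so a mixture's chemical exergy is slightly below the mole-weighted average of the pure-component values.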
Exergy destruction calculations of equipment

Table 3 shows the exergy destruction equations used for all equipment calculations: the total inlet exergy, total outlet exergy, and exergy destruction for each piece of equipment.

Material and energy balance concepts

The SRU plant contains many pieces of equipment (reactors, heat exchangers, heaters, coolers, mixers, a WHB, reboilers, compressors, condensers, and a turbine); it covers almost all equipment categories. This study concentrates on the optimization of steam in the three optimized exchangers. The steam required as heating medium is expected to decrease, and any steam reduction translates directly into a steam reduction in the boilers. The authors also outline the material and energy balance method used in the plant for the reader's benefit.

Material balance equations

The general material balance equation is:

(7) Input - Output + Generation - Consumption = Accumulation

Under steady-state conditions there is no accumulation, and the equation becomes:

(8) Input + Generation = Output + Consumption

Material balance assumptions

The plant runs at steady state; in a non-reactive system there is no generation or consumption.

Material balance equations used in the plant

Reactive systems follow Eq. (8). A continuous non-reactive system follows:

(9) Input = Output

or, in terms of mass flow rates:

(10) Σinput ṁi = Σoutput ṁi

Energy balance equations

The overall energy balance equation is:

(11) ΔḢ + ΔĖk + ΔĖp = Q̇ - Ẇs

The enthalpy difference is calculated from the following equation.
(12) ΔḢ = Σoutput ṁi Ĥi - Σinput ṁi Ĥi

The kinetic energy term is:

(13) ΔĖk = Σoutput ṁi vi²/2 - Σinput ṁi vi²/2

The potential energy term is:

(14) ΔĖp = Σoutput ṁi g zi - Σinput ṁi g zi

Energy balance assumptions

The plant contains many pieces of equipment, so assumptions are applied per equipment type:

• If there is no temperature change, phase change, or chemical reaction, and no large pressure change from inlet to outlet, then ΔḢ ≈ 0 (a mechanical energy balance is more useful in this case).
• If a temperature change, phase change, or chemical reaction occurs, then ΔḢ ≠ 0, and ΔĖk and ΔĖp can usually be neglected.
• If there are no great vertical distances between the inlets and the outlets, ΔĖp = 0.
• If the system and its surroundings are at the same temperature, or the system is perfectly insulated, then Q̇ = 0 and the process is adiabatic.
• If no energy is transmitted across the system boundary by a moving part, an electric current, or radiation, then Ẇs = 0.

Plant equipment energy balance equations

For pumps, turbines, and compressors:

(15) ΔḢ = Ẇs

For condensers, heaters, and reactors:

(16) ΔḢ = Q̇

For mixers, waste heat boilers, and adiabatic reactors:

(17) ΔḢ = 0

Felder et al. present material and energy balance equations for different processes and equipment, together with the assumptions of each case [34].

Plant optimized conditions

The optimization step studied the ability to operate at lower process temperatures than the current conditions without affecting environmental compliance or the production rate. The optimization step has an overall objective function, manipulated variables, and constraints. The overall objective function is cost optimization, which is calculated through equation (18).
(18) Cost optimization = cost saved from steam optimization of the combustion air preheater + cost saved from steam optimization of the catalytic reactor 1 preheater + cost saved from steam optimization of the degassing column preheater

Constraints philosophy

1. Any optimization in the SRU units must respect the restrictions required to meet environmental requirements.
2. This optimization cannot be conducted if the feed to the thermal reactor contains a high proportion of CO2, since CO2 is responsible for the formation of the two unwanted byproducts COS and CS2 in the thermal reactor via equations (6) and (7). Large amounts of these byproducts might not be manageable in the process under the optimized conditions, causing environmental damage.

(6) CO2 + H2S → COS + H2O
(7) CO2 + 2 H2S → CS2 + 2 H2O

Combustion air preheater

The combustion air preheater objective function, manipulated variable, and main constraints are described in Table 4.

Catalytic reactor 1 preheater

The catalytic reactor 1 preheater objective function, manipulated variable, and main constraints are described in Table 5.

Degassing air preheater

The degassing air preheater objective function, manipulated variable, and main constraints are described in Table 6.

(H2S/SO2) ratio importance for the SRU

The optimum (H2S/SO2) operating ratio is 2:1, and any variation in this ratio can cause serious problems in the overall process and the TGTU section. During start-up, natural gas is used to heat the process until the reaction furnace reaches the appropriate temperature of 1350 °C. After that, acid gases are admitted to the reaction furnace without the TGTU section being lined up. Once the (H2S/SO2) ratio is adjusted to 2, the reduction reactor can be lined up.
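The role of the air adjustment described above can be sketched as a simple proportional trim on the combustion-air flow: excess H2S (ratio above 2) calls for more air, excess SO2 for less. The gain and the toy inverse ratio-versus-air relation below are assumptions for illustration, not the plant's actual control tuning.

```python
TARGET = 2.0   # optimum H2S/SO2 ratio in the tail gas

def trim_air(air, ratio, gain=0.05):
    """Proportional trim: ratio > 2 (too much H2S) -> add air;
    ratio < 2 (too much SO2) -> cut air."""
    return air * (1.0 + gain * (ratio - TARGET))

# Toy plant model: assume the measured ratio falls as air flow rises.
air = 80.0
for _ in range(100):
    ratio = 200.0 / air
    air = trim_air(air, ratio)
print(round(200.0 / air, 3))   # settles near the optimum ratio of 2
```

In practice the analyzer feedback runs continuously; the sketch only shows why a small, signed correction is enough to hold the ratio at its setpoint.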
The reactions in the TGTU section's reduction reactor are extremely exothermic. At the typical ratio, the reduction reactor's outlet temperature is about 280 °C; catalyst sintering occurs when the temperature reaches 400 °C.

Results and discussion

Validation results and plant simulation

The plant has been validated against the two main target streams of the SRU unit: the Sulphur product stream and the flue gas to the stack. The percentage errors, shown in Table 7, are small; the largest deviation is 6.91 percent, and deviations below 10% are accepted. Many of the components shown previously in the feed are omitted from the table because they are zero in both the plant data and the simulation, but some important zero-valued components (CO, H2S, SO2, COS, CS2, and NH3) are included because they are key components in the stack outlet. The process simulation output, which can also serve as a PFD for the plant, is shown in Fig. 2.

Physical and chemical exergy calculations for streams under normal conditions

The physical and chemical exergies of the streams are calculated from the equations of section 2.3. The HYSYS-calculated molar flow rates, mass flow rates, and mass exergies of the streams are presented in Table 8, and the calculated physical, chemical, and total exergy of each stream in Table 9. In most cases the chemical exergy values are higher than the physical exergy values (Table 10).

Exergy destruction and exergy efficiency of equipment under normal conditions

The exergy destruction and exergy efficiency of the equipment are calculated from the equations in Table 3. The HYSYS-calculated equipment powers used in the destruction calculations are shown in Table 11.
In general, the chemical exergy is higher than the physical exergy and affects the exergy destruction. The destroyed exergy, the percentage share of destruction, and the exergy efficiencies are presented in Table 12. For streams of identical inlet and outlet composition, chemical exergy cancels and is neglected in the equipment destruction calculations. The percentage share of destruction per equipment is presented in Fig. 3. The highest destruction rates are found in the reactors (thermal reactor and catalytic reactor 1), with values of 39551.79 kW and 14000.22 kW, respectively. Exergy destruction arises from irreversibilities caused by chemical reaction, friction, mixing, heat transfer, and expansion; all of these irreversibilities exist in the thermal reactor and catalytic reactor 1. In heat exchangers, heat transfer and friction are the main sources of irreversibility [15].

Destroyed exergy values and exergy efficiency

It is important to check the destruction values of equipment alongside their exergy efficiencies. For example, although the destruction in the thermal reactor is higher than in catalytic reactor 1, its exergy efficiency of 61.67% exceeds catalytic reactor 1's efficiency of 48.73%. Although the incinerator's percentage share of destruction is 6.548%, its efficiency is 53.64%. The highest contribution to the destruction rate comes from the reactors (thermal reactor, catalytic reactor 1, incinerator, and catalytic reactor 2), with 62240.20 kW and a percentage share of 88.88% of total destruction. It is worth stressing again that exergy destruction must be checked together with exergy efficiency: the lowest exergy efficiency in the table is that of E-Air at 32.11%, although its exergy destruction is only 1084.32 kW and its percentage share of destruction is 1.548%. The waste heat boiler is the third-largest contributor to the exergy destruction.
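The percentage shares quoted above follow directly from dividing each reported destruction rate (kW, as in Table 12) by the total destruction for the normal case:

```python
# Reported destruction rates (kW) for the normal case; 70026.31 kW is the
# total destruction figure given in the text.
dest = {"thermal reactor": 39551.79, "catalytic reactor1": 14000.22}
TOTAL = 70026.31   # kW

shares = {name: 100.0 * d / TOTAL for name, d in dest.items()}
for name, s in shares.items():
    print(f"{name}: {s:.2f}%")
```

This reproduces the reported shares of roughly 56.48% and 19.99% for the thermal reactor and catalytic reactor 1, respectively.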
Some reactions also occur inside the waste heat boiler; in general, whenever reactions occur, the outlet composition changes and exergy destruction exists. The destroyed exergy of the reactors plus the waste heat boiler sums to 67977.95 kW, a percentage share of 97.07% of total destruction. The remaining 2048.36 kW, a 2.93% share, comes from the exchangers (E-Air, E-CR1, E-CR2, E-IA) and Sulphur condenser 3.

Amine Absorber & Regenerator: incorrectly calculated destruction value

The SRU package regarded the unit as a single block, disregarding its individual pieces of equipment. Fig. 4 depicts the output of the SRU simulation, showing that the sour gas from the SRU unit was directed to the "Amine Absorber & Regenerator" block, while the sweet gas was directed to the incinerator for the final oxidation of any H2S to SO2. Stream 1B was recycled and mixed with the SRU feed. The destruction value calculated for the "Amine Absorber & Regenerator" in the SULSIM package is −115.56 kW, which is not physically meaningful. Ibrahim et al. used the standard HYSYS program V.11 to perform a full exergy assessment of the Amine scrubber unit with the actual equipment and determined that the unit's destruction value is 2404.85 kW [23]. Fig. 5 shows the actual Amine scrubber unit modelled in standard HYSYS V.11.

Plant optimized conditions

The plant was optimized by lowering the temperature at the outlet of three different exchangers in the process. Table 14 compares the optimized conditions to the design conditions. An exergy study was also performed on the optimized conditions to assess their influence from an exergy standpoint. From an energy standpoint, lowering the outlet temperatures of exchangers that use steam as a heating medium obviously reduces the steam they require; the purpose of this study is to see what happens to the exergy parameters.
Physical and chemical exergy calculations for streams under optimized conditions

The physical and chemical exergies of the streams are calculated from the equations of section 2.3. The HYSYS-calculated molar flow rates, mass flow rates, and mass exergies of the streams are presented in Table 15, and the calculated physical, chemical, and total exergy of each stream in Table 16. As under normal conditions, the chemical exergy is usually higher than the physical exergy.

Exceptions with physical exergy higher than chemical exergy

The percentage share of chemical exergy in the total exergy shows clearly that chemical exergy is usually the larger component. Only a few streams (Air-out, 5, 33, IA) have physical exergy higher than chemical exergy, because they are composed of components with low standard chemical exergies, so their chemical exergy values are low. Table 17 shows the components of these streams with their standard chemical exergy values; they are mainly composed of oxygen and nitrogen, which have very low standard chemical exergies.

Exergy destruction and exergy efficiency of equipment under optimized conditions

The exergy destruction and exergy efficiency of the equipment are calculated from the equations of section 2.3. The HYSYS-calculated equipment powers used in the destruction calculations are shown in Table 18. Table 20 shows the destroyed exergy, the percentage share of destruction, and the exergy efficiencies. The distribution of the destruction parameters is nearly identical to that of the normal conditions, with a few minor differences. Fig. 6 depicts the percentage share of destruction per equipment. The reactors' exergy destruction (thermal reactor, catalytic reactor 1, incinerator, and catalytic reactor 2) is 62801.75 kW, accounting for 89.33 percent of total destruction.
The reactors plus the WHB account for 68516.59 kW of exergy destruction, a percentage share of 97.46 percent of total destruction. The remaining 2.54 percent is allocated to E-Air, E-CR1, E-CR2, and Sulphur condenser 3.

A comparison between normal and optimized conditions

Table 21 compares the destruction values under normal and optimized conditions. Some destruction values increased in the optimized conditions while others decreased. The destruction in the thermal reactor, for example, increased by 657.47 kW, from 39551.79 kW to 40209.26 kW, and in catalytic reactor 1 it increased by 701.65 kW. In the waste heat boiler there is a small decrease in destruction of 22.91 kW; in the incinerator, a small increase of 50.03 kW; and catalytic reactor 2 saves 847.59 kW of destruction. Although steam is saved in energy terms, the optimized conditions destroy 275 kW more exergy than the normal conditions. The study clearly demonstrated that the equipment with the highest destruction values, such as the reactors, should be included in future studies to reduce energy losses and for useful economic purposes. A better equipment arrangement from the design phase can generally reduce some exergy losses. Exergy studies are useful and practically applicable when matched with economic values.

1. Despite the fact that the optimized study saved $69,247.03 in energy costs per year, the destruction in the optimized conditions is 275.41 kW higher than in the normal conditions. Even so, applying the optimized conditions remains practically and economically beneficial.
2. In both normal and optimized conditions, E-Air has the lowest exergy efficiency, at 32.11 percent and 31.00 percent, respectively.
Theoretically, adding a preheater before E-Air to raise the air temperature to a certain value would increase E-Air's efficiency and reduce exergy losses, but in practice it is not an economical solution and would only add an unnecessary preheater to the process.

Explanation of the high destruction values in reactors

Under normal conditions, the destroyed exergy of the reactors plus the waste heat boiler has a percentage share of 97.07 percent, and 97.46 percent under optimized conditions. Note that many reactions occur in the waste heat boiler. Friction, heat transfer across a finite temperature gradient, throttling, diffusion, combustion, and other chemical reactions are the primary causes of exergy losses. Combustion is a complex process involving several irreversible steps: mixing of the fuel with the oxidizer (diffusion of oxygen), chemical reaction, heat transfer from reacting molecules to other molecules, and mixing of the combustion products with the remainder of the combustion gases (diffusion). Typically, combustion occurs concurrently with heat transfer to the combustion chamber's walls [35].

Catalytic reactor 1 comparison example

As mentioned before, the change in composition is the main influence on the destruction values. Catalytic reactor 1 serves as an example of why the destruction under optimized conditions is greater than under normal conditions. As given in Table 3, the destroyed exergy of catalytic reactor 1 is E9(O) - E11. The total exergy of any stream is the sum of its chemical and physical exergy. Ech of stream 11 under normal conditions is greater than under optimized conditions by 1135.88 kW, as shown in Table 22. The reason is the change in the composition of some components under the optimized conditions.
The H2S mole fraction decreased from 0.0166 to 0.0139 and the SO2 from 0.0083 to 0.0069, while the H2O increased from 0.3450 to 0.3481. H2S and SO2 have higher standard chemical exergies (ex0che) than water. As mentioned in section 2.3, Ech = m · ech, and the component composition is the main influence on ech, calculated from Σ xi ex0che,i + R T0 Σ xi ln xi. Table 23 shows the variation in composition between the normal and optimized conditions related to catalytic reactor 1. Exergy values may increase or decrease from stream to stream between the normal and optimized conditions, but here we discuss only the main idea behind the change. Normally the change in Eph is not the main influence on the change in exergy, except for streams with equal inlet and outlet compositions, where chemical exergy is neglected in the equipment destruction calculations.

Cost saving from an energy point of view

The optimized plant conditions require less steam than the normal conditions. Table 24 shows the difference between the two. For example, for the combustion air preheater: the mass rate of steam used as heating medium to bring the combustion air inlet to the thermal reactor to 240 °C under normal conditions is 3218 kg/h. Under the optimized conditions, the combustion air inlet temperature is 220 °C, which requires 2798 kg/h of steam, so the saved steam in this case is 420 kg/h. The total saved steam from the three exchangers is 1040.12 kg/h. The average steam cost is 7.6 $/ton, and the total saved cost from the three exchangers is 69,247.03 $/year. Fig. 7 shows the percentage share of each preheater in the total annual cost reduction.

Authors' main contributions

1. The concept of exergy is only useful when it is economically relevant.
2.
This optimization case can be applied because of the decreased CO2 content in the thermal reactor feed.
3. The SULSIM destruction value for the "Amine Absorber & Regenerator" cannot be used, since the SRU package treats the unit as a single block, ignoring its individual pieces of equipment.

Summary and conclusions

An SRU plant of a refinery that started its official production in 2020 was simulated and optimized using the HYSYS V.11 SULSIM software, and a complete exergy study was conducted for both the normal and the optimized case. The main calculations concern the exergy destruction, exergy efficiency, and percentage share of destruction of each piece of equipment. The total destruction rate under normal conditions was 70026.31 kW. The thermal reactor has the highest destruction rate, 39551.79 kW, a 56.481% share of total destruction, followed by catalytic reactor 1 with 14000.22 kW and a 19.993% share. The reactors (thermal reactor, catalytic reactor 1, incinerator, and catalytic reactor 2) contributed 88.88% of the total destruction; together with the waste heat boiler they account for 97.07% of the destruction rate. The remaining 2.93% comes from the exchangers (E-Air, E-CR1, E-CR2, E-IA) and Sulphur condenser 3. Although the optimized conditions are better from an energy point of view, their total destruction rate, 70301.72 kW, exceeds the normal conditions by 275.41 kW. The destruction distribution was almost the same as under normal conditions, except that some equipment destruction rates were higher under optimized conditions, such as the thermal reactor and catalytic reactor 1; the thermal reactor's destruction rate under optimized conditions is higher than the normal one by 657.47 kW.
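The headline comparison numbers follow from simple arithmetic on the figures reported above. The 8760 h/year operating basis used below is an assumption, but it reproduces the quoted annual saving to within a cent:

```python
# Exergy side: optimized total minus normal total (kW, reported values).
extra_destruction = 70301.72 - 70026.31        # -> 275.41 kW
thermal_increase = 40209.26 - 39551.79         # -> 657.47 kW

# Energy side: saved steam (kg/h) x steam price ($/ton) x hours per year.
saved_steam_kg_h = 1040.12
steam_cost_per_ton = 7.6
saving_per_year = saved_steam_kg_h / 1000.0 * steam_cost_per_ton * 8760

print(round(extra_destruction, 2), round(thermal_increase, 2),
      round(saving_per_year, 2))
```

This makes the trade-off explicit: the optimized case destroys about 275 kW more exergy yet still saves roughly $69,247 per year in steam.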
The total exergy of a stream is the sum of its physical and chemical exergy, and the chemical exergy usually has the larger magnitude. The chemical exergy equation is a function of the component compositions and their standard chemical exergies: if a stream's composition shifts toward components with lower or higher standard chemical exergies, its chemical exergy decreases or increases accordingly, and the total exergy changes with it. This explains why some equipment destruction values were higher under the optimized conditions than under the normal conditions. To date, no clear method has been given for putting destroyed exergy to good use in industry, although in nature plants and forests use some of the sun's exergy in photosynthesis. From the energy point of view, the total saved steam from the three preheaters was 1.04 ton/h and the saved cost was 69,247.03 $/year. The percentage shares in the cost reduction from the combustion air preheater, catalytic reactor 1 preheater, and degassing air preheater were 40.38%, 59.32%, and 0.30%, respectively.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

The authors would like to thank the refinery company team for permission to use plant data in this study, and the process engineering team for sharing ideas.
"KHATAMI",
"MAHMOODI",
"LAVERY",
"ABDOLI",
"SUI",
"IBRAHIM",
"HOSSEINI",
"ROSTAMI",
"KAZEMPOUR",
"IBRAHIM",
"IBRAHIM",
"MEHMOOD",
"GHAHRALOUD",
"MONNERY",
"RAHMAN",
"RAO",
"HASHEMI",
"ZAREI",
"IBRAHIM",
"IBRAHIM",
"IBRAHIM",
"IBRAHIM",
"ZAREI",
"SZARGUT",
"PASHAEI",
... |
9e6d438aa8294b348ac00c63ae7e54c0_Corneal thinning following bevacizumab intrastromal injection for the treatment of idiopathic lipid _10.1016_j.ajoc.2022.101618.xml | Corneal thinning following bevacizumab intrastromal injection for the treatment of idiopathic lipid keratopathy | [
"Sun, Kristie J.",
"Jun, Albert S.",
"Bohm, Kelley",
"Daroszewski, Daniel",
"Jabbour, Samir"
] | Purpose
To describe the occurrence of corneal thinning in a patient following intrastromal injection of bevacizumab to treat lipid keratopathy.
Observations
A 36-year-old female presented with decreased vision in her right eye with central posterior corneal haze and underwent a treatment regimen including artificial tears, cyclosporine 0.05% drops, prednisolone 1%, and oral valacyclovir 1 g, with no improvement. Neovascularization was noted at the 18-month follow-up and treated with intrastromal bevacizumab injections at 24 months. The feeder vessel was attenuated at 3 and 6 months post-injection, but tomography indicated sustained thinning and flattening of the cornea at the injection site, contributing to the development of irregular astigmatism.
Conclusions and Importance
Corneal thinning is an uncommon potential side effect of intrastromal bevacizumab injection that may affect postoperative visual acuity. | 1 Introduction The cornea is characterized by a lack of blood vessels and lymphatics, ensuring its transparency and immune privilege. Its avascular properties are maintained through a fine balance of angiogenic and antiangiogenic cellular mechanisms. In disease conditions, such as inflammation, trauma, or hypoxia, this homeostasis can be disturbed, leading to corneal neovascularization (CNV) with secondary hemorrhage, lipid deposition, and scarring. 1 Multiple treatment modalities have been studied, including topical steroids, argon laser vessel ablation, and fine needle diathermy. 2,3 Vascular endothelial growth factor (VEGF) has been found to be a key factor in the development of CNV and has been a popular target for CNV treatment modalities. 4,5 Bevacizumab (Avastin; Roche, Welwyn Garden City, UK) is a recombinant humanized monoclonal antibody that binds the VEGF-A isomer and is widely regarded as a safe and effective treatment for choroidal neovascularization. Subsequently, limited publications have described treatment of CNV with topical, subconjunctival, and intrastromal bevacizumab, with variable outcomes. 6–11 While generally regarded as a safe treatment, the literature lacks a thorough description of the complications of bevacizumab administration in the cornea. In this report, we describe the occurrence of corneal thinning, a newly described complication, following intrastromal injection of bevacizumab for lipid keratopathy. 2 Case report A 36-year-old female was referred for evaluation of progressively decreased vision in her right eye. On presentation, the patient reported onset of symptoms one year prior. She had a history of migraines controlled with topiramate.
Aside from being a daily contact lens wearer for the past 20 years, she had no other past ocular history of eye infection or previous eye surgery. She had no known family history of corneal diseases or dystrophies. At presentation, her corrected distance visual acuity (CDVA) was 20/30 in the right eye (OD) and 20/20 in the left eye (OS). Her intraocular pressures were normal. On slit lamp examination of the right eye, the eye was noted to be quiet with no signs of inflammation. The corneal epithelium and anterior 2/3 of the stroma were clear. A diffuse posterior haze spanning the central cornea and sparing the periphery was noted ( Figs. 1 and 2 ). The anterior chamber was quiet. The iris did not exhibit synechiae or transillumination defects on retroillumination. The left eye examination was unremarkable. The patient was started on a trial of artificial tears, cyclosporine 0.05% drops twice a day, prednisolone 1% drops three times a day, and oral valacyclovir 1 g twice a day, with little improvement over the following 6 months. At the 18-month follow-up visit, a fine, deep neovessel was noted at 8 o'clock, which was believed to be contributing to the posterior haze. The patient was asked to limit contact lens wear to 8 hours a day, but the haze continued to worsen. At her 2-year follow-up, CDVA decreased to 20/100 in the right eye. Given the persistence of the deep feeder vessel and the worsening presumed lipid keratopathy, it was decided to proceed with intrastromal bevacizumab injections to obliterate the vessel. After patient consent was obtained, the patient was anesthetized with proparacaine hydrochloride and the ocular surface was disinfected with 5% povidone-iodine solution. With a 30G needle, 0.1 cc of bevacizumab 25 mg/mL was injected at the corneal limbus close to the feeder vessel. During the procedure, blanching of the vessel was noted. The patient tolerated the procedure well.
She was placed on a post-operative regimen of ofloxacin 0.3% four times per day for four days, and instructed to continue prednisolone 1% drops two times a day. Three months following the injection, the patient's vision deteriorated further to 20/200. Slit lamp exam revealed attenuation of the feeder vessel but no improvement in the posterior haze. Tomography revealed irregular astigmatism over the intrastromal injection site, with a 60 μm decrease in corneal thickness ( Fig. 3 ). Pentacam Scheimpflug imaging revealed a diffuse hyperreflectivity in the posterior cornea, representing the area of haze ( Fig. 4 ). At 6 months post-operatively, the patient's vision improved to the pre-injection baseline of 20/100 and the haze was found to be less dense. However, tomography still showed irregular astigmatism contributing to decreased vision. She was referred for RGP lens fitting but experienced minimal improvement in vision, presumably due to the remaining stromal haze. At her last visit, the patient was engaged in a discussion about a possible full-thickness or deep anterior lamellar corneal graft to improve her vision. 3 Discussion Lipid keratopathy is a condition characterized by lipid deposits in the cornea that result in corneal opacification. 3 There are two recognized etiologies of lipid keratopathy: idiopathic, which is spontaneous, and secondary, due to neovascularization, inflammation, trauma, or other systemic disease of the eye (e.g., herpes, interstitial keratitis). 12,13 Bevacizumab is known to bind VEGF, preventing VEGF from binding to its endothelial cell surface receptors. This inhibits angiogenesis, and may also reduce existing abnormal vasculature. 14 Due to bevacizumab's ablative properties, we propose a potential mechanism to explain the corneal thinning: after vessel regression, edema may be resorbed.
The expected result is a return to baseline corneal thickness, but if accompanied by subsequent resorption of collagen or other corneal structural tissue, ablation resulting from intrastromal bevacizumab injection may result in secondary local thinning and the development of astigmatism. An inflammatory process or ulceration is less likely, since no epithelial defect was noted after injection. However, this conclusion is limited by the solitary nature of this case report. We recommend caution when considering use of intrastromal bevacizumab injection due to the possibility of corneal thinning. Alternative modalities to treat neovascularization, such as laser ablation and topical steroids, may be more appropriate initial considerations. 14 To conclude, intrastromal and subconjunctival bevacizumab injection has been recommended in the literature as a safe and generally effective way to address corneal neovascularization. Occasionally temporary side effects have been noted, but thus far no long term or systemic side effects have been described. There have been no documented associations between bevacizumab injection and corneal thinning or flattening. Potential underreporting of this phenomenon may be related to eyes with lipid keratopathy often having poor visual potential due to other comorbidities (multiple prior surgeries, retina or glaucoma issues, etc.), thus making visual changes due to corneal thinning and irregular astigmatism less noticeable. In our case, the patient had normal visual potential. This case presents the first known documentation of an intrastromal bevacizumab injection resulting in corneal thinning and flattening. The mechanisms underlying the observed effects are still unclear, and require further examination. In the meantime, caution should be applied when considering treatment modalities to address corneal neovascularization and lipid keratopathy, particularly for eyes with otherwise normal visual potential. 
Patient consent This report does not contain any personal information that could lead to the identification of the patient. Funding No funding or grant support. Authorship All authors attest that they meet the current ICMJE criteria for Authorship. Declaration of competing interest The following authors have no financial disclosures: KS, AJ, KB, DD, SJ. | [
"CURSIEFEN",
"STEVENSON",
"HALL",
"HSU",
"CARMELIET",
"GIANNACCARE",
"SARAH",
"GUPTA",
"FASCIANI",
"HASHEMIAN",
"LICHTINGER",
"SPRAUL",
"LOEFFLER",
"MUKHERJI"
] |
7cd6f24823404c148b97f35f066caec5_What is the role of expanded hemodialysis in renal replacement therapy in 2020_10.1016_j.nefroe.2021.06.001.xml | What is the role of expanded hemodialysis in renal replacement therapy in 2020? | [
"Perez-Garcia, Rafael",
"Alcazar-Arroyo, Roberto",
"de Sequera-Ortiz, Patricia"
] | null | Elimination of uremic toxins in dialysis One of the primary functions of dialysis in the treatment of stage five chronic kidney disease (CKD) is the elimination of uremic toxins (TU). The model to follow is the kidney, capable of purifying all types of TU continuously without discarding albumin. The elements of this renal function are the glomerular filtration, the tubular functions and their urinary elimination to the outside. Dialysis is far from emulating these functions, although its purification capacity has improved over the years. Knowledge of TUs has advanced in the last decades. They have been molecularly typified, the concentrations they reach in CKD have been measured, and their individual toxicity has been described. 1,2 TU retention is related to cardiovascular risk, 1–3 the main cause of mortality in dialysis patients. TUs are usually classified by their molecular weight (MW) and their binding or not to proteins, mainly albumin ( 3–5 Table 1 ). Recently, some authors propose the division of medium TU into two groups: molecules of medium and high MW, 1,2 placing the limit of these at 15,000 Da, according to different authors. This classification is of particular interest, since these high PM TUs have been associated with some of the main comorbidities derived from CKD and dialysis, 6 3 specifically, with inflammation and cardiovascular disease. New techniques and hemodialysis (HD) membranes allow the elimination of a greater number of medium-sized molecules than conventional HD. At the beginning of dialysis, the dialyzers-membranes available only allowed the removal of low-MW TUs. Low-flux (LF) dialyzers only significantly purify low-PM TU and have a coefficient of hydraulic permeability (Kf) of less than 20 mL/h/mmHg. Later, high flux or high flux (HF) dialyzers appeared with a Kf greater than 20 mL/h/mmHg, capable of eliminating the so-called medium molecules, with a MW of up to 20,000 Da, including β 2 -microglobulin (β 2 -mG) and leptin. 
Thus, depending on the permeability of the dialyzer, two types of HD are distinguished: high-flux (HD-HF) and low-flux (HD-LF). Currently, numerous dialyzers with a Kf greater than 50 mL/h/mmHg, termed very high hydraulic permeability dialyzers, are available. The Membrane Permeability Outcome (MPO) study is the clearest demonstration of the better clinical results of HF-HD compared with LF-HD. 7,8 Techniques with high convective transport, especially on-line hemodiafiltration (OL-HDF) with HF dialyzers, were the next step, managing to remove some of the high-MW TU while avoiding the loss of significant amounts of albumin. The Online Hemodiafiltration Study (ESHOL) and other randomized trials have demonstrated a survival advantage of OL-HDF over HF-HD and LF-HD. 9,10 We can conclude that eliminating TU of larger MW, and in greater quantity, is related to a better prognosis for dialysis patients, independently of other morbidity and mortality cofactors. In the last five years, a new type of membrane has been developed with a higher cut-off (CO), called medium cut-off (MCO), able to remove high-MW molecules, as the high-CO (HCO) membranes used in myeloma do, while still retaining albumin. These new membranes have a high retention point or "retention onset". 11,12 This concept is defined by the MW above which more than 10% of solutes are retained, that is, the MW at which the sieving coefficient (Cc) is 0.9. The pore size of the MCO membrane is intermediate between those of the HF and HCO membranes ( Fig. 1 ). In addition, as explained below, these membranes are mounted in a dialyzer design that enhances internal convective transport. Therefore, with these devices it is possible to obtain a clearance of medium and large molecules superior to that of HF dialyzers ( Table 2 ). 13
HD performed with MCO dialyzers has been called "expanded" HD (HDx) because it increases the MW range of the removed TUs. The objective of this review is to describe the characteristics of MCO membrane dialyzers, their performance in TU elimination, and preliminary clinical results, and to position HDx among today's HD techniques. Technical development of dialyzers and dialysis The evolution of dialyzers has been a key element in the development of HD, and a series of markers of effectiveness in eliminating TU have been highlighted along the way. Classically, to assess the elimination of low-MW TU, the Kt/V, or better the Kt corrected by body surface area (Ktsc), is used. 14 The transition from HD-LF to HD-HF was marked by the Kf of the dialyzers, greater than 20 mL/h/mmHg in HF. Convective transport is assessed by the total ultrafiltered volume (VTU) and the Cc for the different molecules, the Cc for β2-mG being the most used. In this sense, the EUDIAL group 15 included, among the requirements that define an effective OL-HDF, the use of a dialyzer with a Cc for β2-mG greater than 0.6 and a VTU per session greater than 23 L. The convective volume administered is important because it has been directly related to the clearance of medium molecules, 16 and, as noted above, to the mortality of HD patients. 17 To characterize dialyzers with MCO membranes, two markers have been used: the retention point or "retention onset" (MWRO) 11 and the cut-off point (MWCO). The first is the MW of the molecule whose Cc is 0.9, that is, the MW at which molecules begin to be retained by 10%; the second is the MW that corresponds to a Cc of 0.1, close to the CO of the evaluated membrane ( Fig. 2 ). These two markers are read off the Cc curve of a membrane/dialyzer, because determining exactly the MW at which a molecule begins to be retained, and the MW of the CO itself, is very difficult.
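In practice the two markers can be read off a measured sieving curve by interpolation; a minimal sketch of that reading, in which the curve points below are hypothetical values for illustration only (not measured data from any dialyzer):

```python
import math

def crossing_mw(curve, target):
    """Interpolate (in log-MW) the molecular weight at which the sieving
    coefficient Cc falls to `target`. `curve` is a list of (MW, Cc) pairs
    sorted by increasing MW, with Cc decreasing."""
    for (mw1, cc1), (mw2, cc2) in zip(curve, curve[1:]):
        if cc1 >= target >= cc2:
            f = (cc1 - target) / (cc1 - cc2)
            return math.exp(math.log(mw1) + f * (math.log(mw2) - math.log(mw1)))
    raise ValueError("target outside measured curve")

# Hypothetical sieving-curve points (MW in Da, Cc), for illustration only
curve = [(1_000, 1.0), (10_000, 0.95), (20_000, 0.85),
         (40_000, 0.4), (60_000, 0.1), (80_000, 0.02)]

mwro = crossing_mw(curve, 0.9)  # retention onset: Cc = 0.9 (10% retained)
mwco = crossing_mw(curve, 0.1)  # cut-off: Cc = 0.1
```

With this synthetic curve, MWRO lands near 14,000 Da and MWCO at 60,000 Da; real curves would place MCO membranes between HF and HCO, as Fig. 2 illustrates.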
MCO dialyzers have a Cc for β2-mG close to 1 and of 0.9 for myoglobin. The Cc of a dialyzer depends on the conditions under which it is measured: the type of plasma, blood flow (Qb), ultrafiltration flow (Quf), and the time at which the sample was obtained are some of the factors that determine the result. Comparing dialyzers by their Cc is only reliable if the measurement conditions are similar, so the technical data sheet must specify the conditions under which the Cc was measured. 15 The MCO membrane has been used in Theranova® (Ther) dialyzers, available in Spain for three years. Their design favors internal filtration/back-filtration as a form of purification by convective transport, which has been called internal hemodiafiltration (HDFi). 18 One way to increase HDFi is to decrease the internal diameter of the capillaries, down to 180 µm in these dialyzers, to increase internal resistance and achieve greater filtration/back-filtration. The internal radius of the capillary enters to the fourth power in the Hagen-Poiseuille equation that describes the resistance of blood to passage through the capillaries. A long, narrow dialyzer with a capillary diameter of 180 µm will cause a very large pressure drop within the capillary, around 170 mmHg at a Qb of 400 mL/min. Another way to increase internal filtration is to increase the hydraulic resistance in the dialysis fluid (LD) compartment by raising the packing density of the capillaries inside the dialyzer, to between 55 and 60%, 19 creating a pressure drop of 30 mmHg at an LD flow (Qd) of 500 mL/min. Both Qb and Qd directly influence internal filtration. 20 Although internal convective transport has been enhanced in the new MCO dialyzers, recent studies suggest that diffusive transport is the main mechanism for the elimination of medium molecules in HDx; the two types of transport always interact on the same membrane.
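The order of magnitude of that axial pressure drop can be checked with the Hagen-Poiseuille relation. In the sketch below, only the 180 µm capillary diameter and the 400 mL/min Qb come from the text; blood viscosity, fiber length, and fiber count are assumed typical values, so the result is an illustration, not the article's calculation:

```python
import math

# Assumed illustrative parameters (NOT from the article):
mu = 3.0e-3        # Pa*s, apparent blood viscosity (assumption)
length = 0.28      # m, effective capillary length (assumption)
n_fibers = 10_000  # number of capillaries in the bundle (assumption)
# From the text:
r = 90e-6          # m, inner radius (180 um diameter)
qb_total = 400 / 60 / 1e6   # 400 mL/min -> m^3/s total blood flow

q = qb_total / n_fibers                          # flow per capillary
dp = 8 * mu * length * q / (math.pi * r**4)      # Hagen-Poiseuille, Pa
dp_mmhg = dp / 133.322                           # convert Pa -> mmHg
# ~160 mmHg with these assumptions, the same order as the ~170 mmHg cited
```

The r**4 term is why shaving the diameter from, say, 200 µm to 180 µm raises the drop so sharply and drives the internal filtration/back-filtration.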
21 A requirement in the development of MCO membrane dialyzers was that they should not lose a significant amount of albumin. The high permeability must be compatible with a minimal elimination of this protein, with a Cc of less than 0.01. Albumin loss depends on the pore size, the membrane's interaction with plasma, and the transmembrane pressure (TMP). In OL-HDF, the prefilter pressure can reach 700 mmHg and is the main determinant of albumin loss. Dialyzers with MCO membranes should not be used at these pressures, so their use is contraindicated in OL-HDF and isolated ultrafiltration. With the HDF technique, MCO membranes can cause significant albumin losses. In any hemodialysis technique, albumin loss should be less than 4 g per session. 6 To assess the effectiveness of MCO dialyzers, we must continue to use Kt/V or Ktsc. In the clinic we cannot measure HDFi, although it has been estimated that, with a Qb of 400 mL/min, it would be 41.6 mL/min for the Ther400 and 53.1 mL/min for the Ther500, which for a four-hour HD session would mean an internal filtration of 9,984 mL and 12,744 mL, respectively, to which the programmed ultrafiltration must be added. Taking into account that the Cc of these dialyzers for β2-mG is >0.9, a β2-mG clearance similar to that of OL-HDF can be calculated. Qb, therefore, is also an important determinant of the purifying efficiency of these dialyzers. The pre-HD concentration of β2-mG could be useful and should be kept between 20 and 30 mg/L, except in situations of β2-mG hyperproduction. With MCO dialyzers, Kf no longer has the importance it has for HF dialyzers; the correlation between Kf and Cc is poor. 6,22 The use of HF-HD and OL-HDF has required other advances in dialysis, notably machines with precise ultrafiltration (UF) control and bicarbonate-based, ultrapure LD.
HDx also requires a dialysis monitor with these advances, although, unlike OL-HDF, it does not require a LD infusion system tied to UF monitoring. HDx can be performed with any modern HD machine with endotoxin filters. Uremic toxin clearance and clinical evidence of HDx The two dialyzers available in Spain with an MCO membrane, composed of polyarylethersulfone and polyvinylpyrrolidone and free of bisphenol A (BPA), are the Ther400 and the Ther500. Both have capillaries with an internal diameter of 180 µm and a wall thickness of 35 µm. The Ther400 has a surface area of 1.7 m2 and a Kf of 48 mL/h/mmHg, and the Ther500 2.0 m2 and a Kf of 59 mL/h/mmHg (in vitro with bovine blood, Ht 32%, total protein 6 g/dL, 37 °C). The Cc is 1 for β2-mG, 0.9 for myoglobin, and 0.008 for albumin (measured with human plasma, Qb 300 mL/min, and UF 60 mL/min). The clearance of TU with these dialyzers has been compared with that of HF dialyzers and with OL-HDF ( Table 2 ). 23–34 For low-MW molecules, HD-HF gives similar or slightly lower results than OL-HDF and HDx, measured by the reduction ratio (RR), i.e., the proportional decrease in concentration before and after HD, or by clearance. For medium molecules, post-dilution OL-HDF and HDx are superior to HF-HD, and OL-HDF performed with high convective transport is superior to HDx. For large molecules, as the MW increases HDx surpasses OL-HDF, for example with λ light chains. The dialysis characteristics that make OL-HDF superior to HDx have been evaluated by means of a «Global Removal Score» (GRS). 34 With a Qb of 350 mL/min, OL-HDF exceeds HDx in the RR of the molecules from a Quf of 80.6 mL/min; with a Qb of 400 mL/min, from 74.1 mL/min. The main problem with this comparison is the composition of the GRS, which includes, among other molecules, urea.
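For reference, the low-MW dose metric Kt/V mentioned above is usually estimated from the pre/post-dialysis urea concentrations with the standard second-generation Daugirdas single-pool formula; the sketch below uses illustrative numbers, not patient data from this review:

```python
import math

def sp_ktv(c_pre, c_post, t_hours, uf_liters, weight_kg):
    """Second-generation Daugirdas single-pool Kt/V estimate:
    spKt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF / W,
    where R = post/pre urea ratio, t in hours, UF in L, W in kg."""
    r = c_post / c_pre
    return -math.log(r - 0.008 * t_hours) + (4 - 3.5 * r) * uf_liters / weight_kg

# Illustrative session: urea 140 -> 56 mg/dL, 4 h, 2 L UF, 70 kg patient
ktv = sp_ktv(c_pre=140, c_post=56, t_hours=4.0, uf_liters=2.0, weight_kg=70.0)
# ktv is roughly 1.07 for these numbers
```

The RR used for medium and large molecules is simply (c_pre - c_post) / c_pre, without the urea-specific corrections.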
If the RR of large molecules, such as λ light chains and interleukin-6 (IL-6), is analyzed, HDx is superior to OL-HDF. 24–26 The influence of the different large TUs on morbidity and mortality remains to be weighed. With MCO dialyzers, albumin loss into the LD is usually greater than in HF-HD and OL-HDF, but in any case less than 3.5 g/session, ranging between 0.03 and 3.15 g/session. 25,29,30,32 Albumin loss depends on the type of membrane and the transmembrane pressures used, and in OL-HDF it can exceed 10 g/4 h with some dialyzers. 35,36 In HDx patients, serum albumin is maintained after an initial drop. 37 At 12 weeks and at 12 months with MCO membranes, no significant changes in albuminemia have been detected. 24,38 In the work of Bunch et al., 39 performed in 638 patients, a decrease of 3.5% was found after one year. One question that arises with MCO membranes is whether, while they remove more medium- and high-MW TU than HD-HF, they also remove more clotting factors, nutrients, drugs, and other molecules that are beneficial to the body. An in vitro study suggests that the change from HD-HF to the Ther500 does not require variations in anticoagulation or in the dosing of a drug such as vancomycin; the changes in insulin and erythropoietin concentrations would be similar with the Ther500, HD-HF with the Polyflux210H™, and HDF with the Fx CorDiax 800™. 40 Although the ability of MCO dialyzers to retain endotoxins and other pyrogenic substances has been reported, 41,42 given the significant back-filtration of these dialyzers it is reasonable to use ultrapure LD. Clinical results with HDx There is little medium- and long-term clinical evidence with HDx ( Table 3 ). 23,28,31,33 The study that includes the most patients is the Colombian COREXH Registry, 38,43–45 which recruited 992 patients dialyzed with HDx, of whom 638 completed one year of follow-up ( Table 3 ). 39 Mortality was 8.54 per 100 patient-years, low compared with similar studies of other HD techniques.
The admission rate was 0.79 admissions/patient-year and 6.91 days/patient-year. 46,47 No adverse effects have been described with HDx, 39 nor hypersensitivity reactions to synthetic membranes/polysulfones; 25,39,43 however, because it is a synthetic membrane, such hypersensitivity reactions may occur. 48 Perceived quality of life was assessed in a prospective study in which 49 patients were randomized: 24 were dialyzed with the Ther400 and 25 with HD-HF with the FX CorDiax 80-60™. At baseline and at 12 weeks, the 36-item Kidney Disease Quality of Life Short Form (KDQOL-36) test was administered. Ther400 patients reported a better quality of life, mainly in the physical components/domains, as well as less itching and fewer sleep disturbances. 33 Cho et al. 31 did not find significant differences in the pre-HD concentration of TUs between patients dialyzed with the Ther400 and those with the FX CorDiax 80 after 12 months. The low serum concentrations of β2-mG, 25.6 mg/L, rising to 28.4 mg/L at one year, are noteworthy; such concentrations are common in OL-HDF or when there is significant residual renal function (RRF). The work does not provide the latter data, which could mask an effect of the Ther400 through the decline of RRF over time. With HDx, an improvement in pro-inflammatory parameters has been reported. 38 One of the factors that explains the loss of RRF in dialysis is inflammation. Some high-MW UT would be harmful to the renal tubules and to RRF 49 ; their greater clearance by MCO dialyzers 2,3,27 could better preserve RRF, which remains to be investigated. 27 The induction of vascular calcification is reduced with plasma dialyzed with MCO and HCO membranes, so a possible beneficial role of HDx in avoiding vascular calcification should be studied. 34 At the close of this review there are 16 ongoing HDx studies registered in ClinicalTrials.gov; six are complete and eight are recruiting.
In six, HDx was compared with HD-HF; in four, with HDF; and in one, with both techniques. Some focus on specific aspects such as anticoagulation, preservation of RRF, calcifications and mineral metabolism, or symptoms. Among them is the Spanish multicenter, open, prospective, randomized MoTHER study, exploring morbidity and mortality in dialysis patients treated with HDx compared with OL-HDF. 50 Key features of the HDx What makes HDx different from other forms of HD? Its ability to remove large molecules, the high-MW TU ( Table 1 ). HDx can be defined as a high-Cc HD or, more precisely, an HD with a MWRO in the range of the high-MW TUs. Therefore, to conceptualize it and differentiate it from conventional HD-HF, the Cc of myoglobin, ≥0.9, should be used; HDx is thus a high-sieving HD. Myoglobin is a 17,000 Da molecule that is easy to measure and belongs to the low range of the high-MW molecules. Where to position HDx as a dialysis technique There is little clinical evidence to determine which types of patients may benefit most from HDx. Based on previous experience with other high-efficiency TU removal techniques, such as OL-HDF, we can propose some aspects to examine in future studies: 1. Patients without significant RRF; 2. Patients with the prospect of staying on hemodialysis for years, for example, those who are not candidates for kidney transplantation; 3. Patients with a sufficient intake of nutrients; 4. Patients with substantial comorbidity; 5. As an alternative to OL-HDF in cases where high convective transport cannot be guaranteed (elevated Hb, suboptimal Qb). The cases with greater comorbidity that could benefit from HDx would be those with a clear relationship to the retention of high-MW TU: chronic inflammation; resistance to erythropoiesis-stimulating agents; restless legs syndrome; secondary immunodeficiency; and cardiovascular disease.
51,52 Regarding competition with OL-HDF, HDx would be useful in patients in whom an adequate convective volume per session (23 L) cannot be achieved, or when OL-HDF is suspended for safety reasons. Good results have been reported in certain conditions: pruritus, 43,53 post-HD asthenia, anorexia, restless legs syndrome, light-chain myeloma, 33 rhabdomyolysis, and severe inflammation. 54 Some of these indications coincide with our own experience. The superiority of HDx over HD-HF in eliminating high-MW TU, its easy implementation, and its safety argue for creating a new section in the classification of HD techniques. HDx is not a "conventional" HD in the sense of being within established standards; with the existing data, HDx should constitute a new category of HD. Morbidity and mortality studies are needed to demonstrate the non-inferiority of HDx versus OL-HDF. 50 Financing This work has not received any type of funding.
"VANHOLDER",
"VANHOLDER",
"WOLLEY",
"COZZOLINO",
"DAI",
"RONCO",
"LOCATELLI",
"PALMER",
"MADUELL",
"PEREZGARCIA",
"BOSCHETTIDEFIERRO",
"BOSCHETTIDEFIERRO",
"PEREZGARCIA",
"PEREZGARCIA",
"HULKO",
"TATTERSALL",
"LORNOY",
"LEE",
"FIORE",
"DONATO",
"MACIAS",
"LORENZIN",
"BELM... |
aa1eaa8bb0d240449e02b698e019e9b2_Effect of multi-pass cooling compression and subsequent heat treatment on microstructural and mechan_10.1016_j.jmrt.2023.01.217.xml | Effect of multi-pass cooling compression and subsequent heat treatment on microstructural and mechanical evolution of TC4 alloys | [
"Yan, Zhaoming",
"Liu, Haijun",
"Dai, Xueyan",
"Li, Luyao",
"Zhang, Zhimin",
"Wang, Qiang",
"Xue, Yong"
] | Three passes of hot compression tests of hot isostatic pressed Ti-6Al-4V alloys are carried out on a Gleeble-1500D thermo-mechanical simulator with a temperature drop of 950–900–850 °C, constant strain rates of 0.01–1 s−1, and height reductions from 20% to 70%. Then, the subsequent solution and aging treatments are conducted in a heat treatment furnace. The corresponding results of microstructures and mechanical properties show: in the process of multi-pass deformation, dynamic recrystallization with spheroidization of lamellar α phases is the main characteristic affecting flow behavior. With increasing strain, lamellar α phases gradually transform from a bimodal distribution to a single-peak state perpendicular to the compression direction. The decreasing length and rising thickness of lamellar α phases in the transformation process facilitate the formation of spheroidized α phases. During the heat treatment, the proportion of spheroidized α phases increases, and the spheroidization phenomenon promotes the reduction of lamellar α phases in length and the increment in thickness. Lamellar α phases mainly dominate the contribution to microhardness in the hot compression process, and the equiaxed α phases and β transformation structures combine to influence the mechanical properties of heat-treated Ti-6Al-4V alloys. Notably, β
transformation structures contribute primarily at a higher solution temperature. | 1 Introduction Dual-phase Ti alloys can develop the required microstructure through specific processing routes according to service conditions. As one of the most commonly used dual-phase Ti alloys, Ti-6Al-4V (TC4) accounts for 75–85% of research and industrial consumption owing to its low density, high-temperature strength retention, excellent corrosion resistance, and good balance of mechanical characteristics [ 1–3 ]. Hot isostatic pressing (HIP), a relatively reliable powder metallurgy (PM) technology, has been used to fabricate complex components in high-end manufacturing [ 4 ]. However, some apparent disadvantages remain of concern, such as low component densification, poor control of the microstructure, poor performance, and so on [ 5 , 6 ]. Thermoplastic processing of PM TC4 alloys is a frequently used method to fabricate Ti products with the microstructures and mechanical properties necessary for critical applications [ 7 ]. Many commercial applications of forged PM Ti alloys have been reported [ 8–10 ]. However, few studies have investigated the phase transformation mechanisms and their contributions to integrated hardness when processed by multi-pass deformation and subsequent heat treatment [ 11–13 ]. Establishing an appropriate forging procedure that yields the desired microstructures for producing large-scale Ti components is a significant challenge. It is known that the outstanding properties of TC4 alloys are primarily determined by the proportion, distribution, size, and morphology of the α phases (hexagonal close-packed lattice structure, hcp) and β phases (body-centered cubic lattice structure, bcc) [ 14 , 15 ]. The plasticity of the β phase is better than that of the α phase, and the strength of the α phase is higher than that of the β phase. Thus, how to regulate and control the α and β phases becomes the key point for improving the comprehensive properties.
Recent studies have investigated the effect of working parameters on the microstructural evolution of Ti alloys [ 3 , 16–18 ]. The hot deformation behavior of wrought TC4 alloys is generally analyzed in the temperature range from 750 °C to 1100 °C [ 19–21 ]. Strain rates of 0.1–10 s −1 are attractive for producing Ti components when efficiency and cost are considered [ 22 ]. Luo et al. [ 23 ] studied the effect of processing parameters on the flow behavior of TC4 alloys, and the results showed that a higher temperature and strong accumulative strain were more influential than the strain rate. Lei et al. [ 24 ] revealed the microstructure characterization and nano-micro hardness of the tri-modal microstructure of titanium alloy under different hot working conditions. Nemat-Nasser et al. [ 25 ] observed dynamic strain aging behavior at a higher processing temperature and dynamic recrystallization at a high strain rate in HIPed TC4 alloys. Isothermal deformation has been the first choice for wrought Ti alloys. Few reports have focused on the effect of multi-pass cooling deformation on the deformation behavior and phase transformation; therefore, studies are needed to understand those phenomena. Heat treatment allows Ti alloys to further improve their ductility and fatigue performance [ 26–28 ]. Recent investigations prove that proper heat treatment and thermochemical treatment of Ti alloys can obtain the desired microstructure, which exhibits a balance of mechanical properties suitable for industrial application [ 29–31 ]. Du et al. [ 32 ] investigated the relationship between microstructures and properties under α/β and β solution treatment and subsequent aging at temperatures ranging from 440 °C to 560 °C for 8 h. Lin et al.
[ 33 ] studied the effects of solution temperature and cooling rate on the microstructure and microhardness of wrought TC4 alloys and found that increasing temperature can accelerate the transformation of grain boundaries from low-angle grain boundaries (LAGBs) to high-angle grain boundaries (HAGBs). Peng et al. [ 34 ] investigated the effects of treatments on the microstructures and mechanical properties of TC4-DT Ti alloys, and Xu et al. [ 35 ] studied the effects of cooling rate following β or α/β heat treatment on microstructure and phase transformation. This study aims to investigate the thermal deformation behavior, microstructure evolution, and mechanical properties of HIPed TC4 alloys processed by multi-pass cooling deformation and subsequent heat treatment. Spheroidization mechanisms and the orientation distribution of lamellar α phases are interpreted systematically. The effects of heating temperature and holding time on the microhardness of wrought TC4 alloys and the phases' contributions are discussed in detail. 2 Experiment procedure In this work, the TC4 powder used for the HIP was produced with the plasma rotating electrode method and provided by Sino-Euro Materials Technologies of Xi'an Co., Ltd. Table 1 shows the chemical composition of the TC4 powder. Fig. 1 illustrates the research route schematic of this work. The powder sizes varied from 50 μm to 150 μm, and the average size was calculated to be 100 μm. The experimental TC4 compact was obtained by hot isostatic pressing at 920 °C and a pressure of 120 MPa, holding for 2 h. It was then furnace-cooled to room temperature. The microstructure and inverse pole figure (IPF) of the HIPed TC4 alloy are shown in Fig. 2 . The initial microstructure mainly consisted of equiaxed α phases and lamellar structures with excellent densification, and the average grain size is about 30 μm.
Three passes of hot compression tests were carried out on a Gleeble-1500D thermal simulator using cylindrical specimens with dimensions of 8 mm in diameter and 12 mm in height. The specimens were heated in a resistance furnace at a heating rate of 10 °C/s and kept at the target temperature for compression. A graphite lubricant was used between the specimen and crosshead to reduce the deformation friction. From our previous study, the β transition temperature is 957 °C [ 11 ]. The compression tests were conducted with the following procedure: (1) heating to 950 °C, holding for 5 min, 20% reduction from 12 mm to 9.6 mm, then water quenching; (2) heating to 850 °C, holding for 5 min, 10% reduction from 9.6 mm to 8.4 mm, then water quenching; (3) heating to 800 °C, holding for 5 min, 40% reduction from 8.4 mm to 3.6 mm, then water quenching (reductions quoted relative to the original 12 mm height). The strain rates ranged from 0.01 s −1 to 1 s −1 . The true stress-strain curves were recorded automatically by the software. The heat treatment was conducted at solution temperatures of 900 °C and 950 °C for 1 h, with an aging temperature of 550 °C for 6 h. We selected the observation surface parallel to the compression axis to investigate the microstructural evolution. Sandpapers of 320#, 600#, 1000#, 1500#, and 3000# grit were used for grinding, and silk was used for polishing. The specimens were etched using Kroll's reagent and examined using an Axio Observer A2m Carl Zeiss optical microscope (OM). A Hitachi SU5000 scanning electron microscope (SEM) was used to observe the phase morphology. The texture and grain orientation were tested by electron back-scattered diffraction (EBSD) on the Hitachi SU5000 SEM with a voltage of 20 kV, a tilt angle of 70°, and a working distance of 15 mm. The EBSD data were analyzed with the TSL OIM™ software. A JEOL JEM-F200 transmission electron microscope (TEM) was used to examine structural details, and bright-field images were recorded.
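As a cross-check on the compression schedule, a short sketch (in Python, with only the pass heights taken from the text) shows that the quoted reductions of 20%, 10%, and 40% each correspond to the height drop measured against the original 12 mm specimen height, and that the cumulative true strain after the last pass is about 1.2, matching the strain value quoted in the orientation analysis:

```python
import math

# Pass heights (mm) from the compression schedule described above
heights = [12.0, 9.6, 8.4, 3.6]
h0 = heights[0]

# Each quoted reduction matches the height drop relative to the
# ORIGINAL 12 mm height, not the height at the start of each pass
reductions = [round((heights[i] - heights[i + 1]) / h0, 2) for i in range(3)]
print(reductions)  # -> [0.2, 0.1, 0.4]

# Cumulative true (logarithmic) strain after the final pass
print(round(math.log(h0 / heights[-1]), 2))  # -> 1.2
```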
The mechanical properties were tested on a UHL VMH-002VD microhardness tester. 3 Results and discussion 3.1 Effect of multi-pass deformation on flow behavior The true stress-strain curves of HIPed TC4 alloys processed by multi-pass compression and the corresponding work-hardening variation (−∂θ/∂σ)–ln(σ) curves are shown in Fig. 3 . It can be found that the flow stress increased significantly with decreasing temperature and increasing strain rate during the multi-pass deformation, which indicates that the stress is sensitive to the deformation conditions of temperature and strain rate. Balasundar et al. [ 36 ] researched the hot deformation behavior of BT3-1 alloys and revealed the strain rate effect. The relationship can be explained by the following equation: (1) ε̇ = bρν, where ε̇, b, ρ, and ν represent the strain rate, Burgers vector, dislocation density, and dislocation glide velocity, respectively. It can be noticed from equation (1) that the glide velocity of dislocations increases with the strain rate. However, the glide velocity will not increase indefinitely due to the existence of lattice friction or the influence of solute atoms. Thus, there exists a maximum glide velocity for dislocation slip. When the glide velocity reaches this limit, an equilibrium state is established between the strain rate effect and the lattice hindrance. Therefore, when the strain rate continues to increase while ν remains unchanged, the dislocation density rises sharply, and the true stress increases accordingly. From the curves shown in Fig. 3 a, b, and c, the following can be observed: (1) The flow stress increased sharply and reached a peak quickly in the first stage, which is mainly attributed to work hardening (WH) [ 37 ]. (2) With increasing strain, the flow stress gradually decreases and then reaches a steady state, which can be attributed to the softening mechanisms of dynamic recovery (DRV) and dynamic recrystallization (DRX).
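The Orowan-type relation in equation (1) can be illustrated numerically. In the sketch below, the Burgers vector and glide velocity values are illustrative assumptions (not data from the paper); the point is only that, once the glide velocity saturates, the dislocation density must scale linearly with the imposed strain rate, which is why the flow stress rises over the 0.01–1 s −1 range studied here:

```python
# Orowan-type relation from equation (1): strain_rate = b * rho * nu.
# b and nu below are illustrative assumptions, not values from the paper.
B_VECTOR = 2.95e-10   # Burgers vector of alpha-Ti, ~0.295 nm (assumed)
GLIDE_V = 1.0e-6      # glide velocity capped by lattice friction, m/s (assumed)

def dislocation_density(strain_rate, b=B_VECTOR, v=GLIDE_V):
    """Dislocation density rho (m^-2) needed to carry a strain rate at fixed b, v."""
    return strain_rate / (b * v)

# Raising the strain rate from 0.01 to 1 s^-1 at fixed glide velocity
# forces a hundred-fold rise in dislocation density (hence higher flow stress).
ratio = dislocation_density(1.0) / dislocation_density(0.01)
print(round(ratio))  # -> 100
```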
(3) The softening effects in the second and third passes are greater than in the first pass, which can be attributed to the first-pass temperature being closer to the phase transition temperature. The α/β phase transition occurs in this temperature region, accompanied by a small amount of recrystallization of β phases. Compared with one pass, the softening mechanism of the second and third passes gradually changes to the spheroidization of lamellar α phases and DRX of α phases. To explore the recrystallization behavior during multi-pass deformation, according to the principle of nonequilibrium thermodynamics, the maximum deformation storage capacity and the minimum external power consumption can be used to determine the onset of DRX during the deformation. Fig. 3 d shows the (−∂θ/∂σ)–ln(σ) curves of the HIPed TC4 alloy, and the minimum value represents the onset of DRX [ 38 ]. The lowest point in each curve thus marks the occurrence of DRX. The DRX process during thermal deformation is related to thermal activation and diffusion; sufficient stored deformation energy is the driving force, and the rearranged dislocation structure is the basis for nucleation [ 39 ]. Therefore, with increasing passes, the accumulation of distortion energy in the microstructure and the rising dislocation density result in increased recrystallization and spheroidization of lamellar α phases. 3.2 Effect of multi-pass deformation on microstructural evolution Fig. 4 shows the optical micrographs of TC4 samples processed by three compression passes at the strain rate of 0.1 s −1 . It can be observed from Fig. 4 a that the equiaxed α and lamellar phases gradually disappear after the first pass, forming a Widmanstätten structure consisting of needle-like or acicular α phases and grain-boundary α. This can be attributed to the first-pass deformation temperature (950 °C) being near the β transus temperature (957 ± 10 °C), resulting in the transformation from α to β phases.
In addition, staggered acicular α′ phases at an angle of about 90° precipitate during the subsequent water quenching, mainly because the precipitated α phases always lie along the habit planes of the matrix β phases. It is worth noting that the α and β phases are isomers and have a strict orientation relationship. Meanwhile, the α phase shows a specific directionality, and its evolution will be discussed systematically in Section 3.3 . After the second pass of deformation, as shown in Fig. 4 b, it can be observed that acicular α phases gradually coarsen and transform into lamellar phases, and the lamellar α phases begin to deflect and spheroidize. Meanwhile, a small number of equiaxed α phases can be observed due to spheroidization and recrystallization. After the last compression pass, as shown in Fig. 4 c, a large number of equiaxed α phases can be seen in the observation area. Spheroidized α phases present a necklace-like shape and distribute along the grain boundaries. Overall, the hot deformation mechanisms of TC4 alloys are associated with DRX, DRV, and globularization of lamellar α phases. The lamellar α phases easily deflect in the favorable direction and accumulate much distortion energy in the earlier deformation passes. The spheroidization of lamellar α phases is apt to happen due to the shear strain in the last pass. Fig. 5 shows the EBSD microstructures of multi-pass hot-compressed TC4 alloys, including IPF, kernel average misorientation (KAM), and misorientation distribution maps. The low-angle grain boundaries (LAGBs, misorientation angles between 2° and 15°) shown by the white lines and high-angle grain boundaries (HAGBs, misorientation angles higher than 15°) marked with black lines are illustrated in Fig. 5 a, d, and g. The statistical results of misorientation angles are shown in Fig. 5 c, f, and i.
Usually, KAMs are used to explain the local residual strain concentration and dislocation density in a processed alloy, with colors transitioning from blue to red representing an increasing degree of strain and dislocation concentration; the results are shown in Fig. 5 b, e, and h. It can be concluded from the above microstructure observations that the equiaxed α phase originates mainly from two mechanisms: spheroidization of lamellar α phases and DRX. To distinguish the proportions of DRX and spheroidization, based on data analysis and experience, the average thickness of lamellar α phases is about 1.5 μm. Thus, we select 1.5 μm as the statistical standard to distinguish DRX from spheroidization (grains smaller than 1.5 μm are counted as DRX grains). As seen from Fig. 5 a, acicular α phases with different orientations are precipitated after the first pass of compression. With increasing strain, the deflection and kink of lamellar α phases can be observed in the subsequent two passes, and a large number of equiaxed α phases can be seen in the observation area, as shown in Fig. 5 d and g. Over the three compression passes, the DRX fraction (f DRX ) is 3.2%, 7.1%, and 35.7%, respectively. A large number of refined recrystallized grains are generated after the last pass, which can be attributed to the high accumulated distortion energy. Meanwhile, due to the high accumulative strain, the increasing dislocation density in the grains can result in dislocation pile-ups, and dislocation cells gather at the grain boundaries and gradually transform into sub-grains. The grain size decreases from 21.2 μm to 4.1 μm, which can be related to the increasing recrystallization fraction. The LAGBs are primarily distributed in the coarse deformed grains, while HAGBs mainly consist of DRX grain boundaries and some coarse deformed grain boundaries. Some DRX grains can be seen located near LAGBs in the deformed grains.
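The 1.5 μm cut-off used above to separate DRX grains from spheroidized lamellae amounts to a simple classification over measured grain sizes. In the sketch below, only the threshold comes from the text; the grain-size measurements are hypothetical:

```python
# Classification sketch for the 1.5 um cut-off described above: equiaxed
# alpha grains finer than 1.5 um are counted as DRX, the rest as
# spheroidized lamellae. The grain sizes below are hypothetical.
THRESHOLD_UM = 1.5

def drx_fraction(grain_sizes_um):
    """Fraction of equiaxed grains attributed to DRX (size < 1.5 um)."""
    drx = sum(1 for d in grain_sizes_um if d < THRESHOLD_UM)
    return drx / len(grain_sizes_um)

sizes = [0.8, 1.2, 1.4, 1.6, 2.5, 3.0, 0.9, 4.1]  # hypothetical data, um
print(drx_fraction(sizes))  # -> 0.5
```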
The CDRX mechanism is characterized by progressive subgrain rotation [ 40 , 41 ], and DDRX grains are usually generated through grain boundary bulging [ 42 ]. The decrease in HAGBs from the second to the third pass may be associated with the high strain accumulated in a single pass, which promotes the formation of high-density dislocations. From the KAM maps shown in Fig. 5 b, e, and h, it can be noticed that yellow and green regions mainly occupy the coarse deformed grains and their grain boundaries after the first pass of compression. The residual strain and dislocations begin to generate and accumulate due to the accumulative strain, gradually expanding from the initial accumulation at the grain boundaries to the interior of the grains. With increasing deformation passes, the volume fraction of DRX increases, and DRX grains consume many dislocations. Nevertheless, an intense strain concentration is observed in Fig. 5 h, which has a high DRX volume fraction. The reason for the decrease of HAGBs mainly comes from two aspects: (1) the relatively low deformation temperature and high accumulative strain in a single pass promote the accumulation of dislocation energy; (2) the acicular α phases precipitated during the phase transformation in the first pass have a specific orientation relationship [ 43 ], mainly distributed at 60° and 90°, resulting in the formation of many HAGBs. However, lamellar α phases deflect with increasing strain, corresponding with the decreasing volume fraction. The reduction of HAGBs at 60° and 90° outweighs the rise of other HAGBs. Fig. 6 shows the TEM structures of HIPed TC4 alloys after three passes of hot compression. It can be seen from Fig. 6 a and b that after the second pass the dislocations are mainly concentrated at the boundaries of lamellar α phases, with a few dislocation walls inside the grains.
A large number of dislocations provide favorable nucleation sites for the DRX process. After the last pass, as shown in Fig. 6 c and d, many dislocation cells are formed due to the accumulation of dislocations, and this is also the reason for the increasing fraction of LAGBs and DRX grains. 3.3 Quantitative characterization of α phases during hot compression Microstructure analysis, including spheroidization and orientation distribution of lamellar α phases, was presented using metallography, EBSD, and TEM in the sections above; quantitative characterization of the α phases can help to better understand the microstructural evolution behavior. Fig. 7 shows the statistics of lamellar α phases in aspect ratio and spheroidization fraction during multi-pass deformation. It can be seen from Fig. 7 a that the significant decrease in the length of lamellar α phases shows two distinct stages: stable decline (the second pass) and sharp decline (the last pass). The stable decline stage is mainly dominated by the low accumulative strain and the rotation of lamellar α phases, accompanied by flow softening [ 44 ]. This process alleviates the fragmentation of kinked lamellar α phases. During the sharp decline stage, the large amount of distortion energy accumulated over the three compression passes facilitates the fracture and spheroidization of lamellar α phases under shear strain. The thickness of lamellar α phases first increases sharply and then decreases slightly. The transformation of acicular martensite into lamellar α phases at 900 °C during the second pass promotes the increase in thickness, while the precipitation of more α colonies and microstructure coarsening controlled by bulk diffusion weaken the thickness change [ 45 ]. Moreover, the fracture and spheroidization of lamellar α phases is a reasonable explanation for their decreasing length. It can be observed from Fig.
7 b that the spheroidization fraction of lamellar α phases increases with the rising accumulative strain. This phenomenon matches the decreasing length of lamellar α phases over the three compression passes. To further explore the characteristics of lamellar α phases during multi-pass deformation, the orientation distribution of lamellar α phases is counted in Fig. 8 . We defined the orientation angle between the lamellar α phase and the compression direction (CD) as θ, which varies from 0° to 180°. During the first pass, as shown in Fig. 8 a, the angular interval between the two peak orientations of the lamellar α phase is 94°, and the two peaks are symmetrically distributed at 45° and 139°, respectively. Recent studies showed that the initial lamellar α phases precipitate along the habit plane from the β matrix and transform from the bcc β phase to the hcp α phase [ 46 , 47 ]. This transformation has a series of non-diffusional characteristics and a strict orientation correspondence, such as {0001} α //{110} β , <11–20> α //<1–11> β . Therefore, the precipitation of lamellar α phases has a specific direction relative to the β matrix, and the angles between lamellar α phase precipitates are about 90° [ 48 ], which is consistent with the results of this study. With increasing strain, as shown in Fig. 8 b and c, the angular interval between the two symmetrical peaks gradually reduces and the peaks move toward the center, which is the metal flow direction. For instance, the orientation angles of the two symmetrical peaks are 55° and 115° at ε = 0.35, and the angular interval reduces to 60°. In the final deformation state, the orientation distribution of the two symmetrical peaks approximately evolves into a single peak at ε = 1.2. The average orientation angle of the single peak is about 95°, which is almost perpendicular to the compression axis.
The orientation distribution of lamellar α phases rotates in a preferred direction along the metal flow. Xu et al. [ 46 ] studied the evolution of lamellar α phases during isothermal forging and found that the orientation distribution changed from two symmetrical peaks to a single one with increasing strain, which is consistent with the experimental results of this research. Overall, the lamellar α phases first rotate and bend, and the spheroidization phenomenon becomes more apparent with rising accumulative strain. The difference in softening mechanisms between the second and third compression passes can be summarized as the rotational kink of lamellar α phases, spheroidization, and DRX. 3.4 Effect of heat treatment on microstructures To explore the evolution of lamellar α phases during heat treatment, the specimens from the different compression passes were subjected to solution and aging treatment. The corresponding results are shown in Figs. 9 and 10 . It can be seen from Fig. 9 that an increasing fraction of spheroidized α phases and coarsening of lamellar α phases can be observed with rising accumulated strain. Compared with the microstructures of the wrought TC4 alloys, the heat-treated alloys show fragmentation of lamellar α phases and a necklace-like structure, and many equiaxed α phases are generated [ 49 ]. Overall, the microstructural evolution of heat-treated TC4 alloys can be described as follows. After heat treatment of the first-pass compressed TC4 alloys, curved and serrated grain-boundary α phases can be observed after solution treatment at 950 °C, while these grain-boundary α phases are broken after aging treatment, showing a necklace-like structure or transforming into equiaxed grains, as shown in Fig. 9 a and d. The lamellar α phases inside the grains are further coarsened.
They do not break, mainly due to the small amount of deformation in the first pass and the greater distortion accumulated by the grain-boundary α phases. When the solution temperature is lowered to 900 °C, as shown in Fig. 10 a and d, only a small part of the grain-boundary α phases are broken, mainly concentrated at the grain intersections, and grain coarsening is not apparent compared with that at the solution temperature of 950 °C. After heat treatment of the second- and last-pass compressed samples, it can be observed that a large number of lamellar α phases are spheroidized, and the proportion rises with increasing temperature and accumulated strain. Lamellar α phases disappear entirely, and equiaxed α phases decrease at the higher solution temperature, as shown by comparing Fig. 9 f and Fig. 10 f, which can be attributed to two aspects: (1) the higher solution temperature makes the lamellar α phases spheroidize completely; (2) the solution temperature near the phase transition point causes part of the α phase to transform to β, resulting in the reduction of equiaxed α phases. The above phenomenon is also supported by the apparent increase in the fraction of transformed β, as shown in Fig. 9 c and f. 3.5 Quantitative characterization of α phases during heat treatment Fig. 11 shows the length, thickness, and spheroidization fraction of lamellar α phases at different heat treatment conditions. It can be seen from Fig. 11 a and b that, compared with the wrought TC4, the length of lamellar α phases shows no dramatic reduction after heat treatment of the first-pass sample, which can be attributed to the relatively low accumulative strain, which can only promote the growth of lamellar α phases. With increasing strain, the high accumulated dislocation density and distortion energy in lamellar α phases facilitate the spheroidization of α phases after heat treatment, decreasing the length. After the last compression pass, almost all lamellar α phases are spheroidized at 950 °C.
In comparison, some lamellar α phases remain at 900 °C, with their length decreasing significantly. The spheroidization of lamellar α phases is sensitive to the solution temperature: a higher temperature is closer to the phase transformation point, resulting in the transformation of a large number of α phases. The acicular α phases precipitated on cooling and the matrix β phases form the β transformation structures. As the accumulated strain distortion energy reaches a certain degree, the lamellar α phases achieve complete globularization, with an apparent reduction in length. From the thickness evolution of lamellar α phases shown in Fig. 11 b, a significant increase in thickness occurs after the first and second passes, which can be attributed to the following aspects: (1) because fracture of the lamellar α phases makes the spheroidization phenomenon more prominent, the remaining lamellar α phases grow during heat treatment; (2) the diffusion of Al and V elements during heat treatment leads to the fusion of α phases, increasing the thickness. Meanwhile, the increasing thickness of lamellar α phases inversely indicates the breaking of these phases, spheroidization, and reduction in length. The spheroidization fraction of lamellar α phases is counted in Fig. 11 c. The spheroidization of lamellar α phases is accompanied by decreasing length and increasing thickness during heat treatment. Moreover, the spheroidization fraction of lamellar α phases at 900 °C is higher than that at 950 °C after heat treatment of the last-pass compression sample, which can be attributed to the transformation of a large number of α phases at the higher solution temperature, which decreases the content of equiaxed α phases. 3.6 Relationship between microstructures and integrated hardness Fig.
12 shows the influence of processing parameters on the Vickers microhardness and integrated hardness of TC4 samples processed by hot compression and subsequent heat treatment; the relationship between integrated hardness ( H i ) and Vickers hardness is shown in equation (2) [ 47 ]. The increasing microhardness is closely related to the rising accumulative strain during multi-pass hot compression, and the maximum value is 324 HV after the last pass. Heat treatment further improves the microhardness, and the sample treated at 950 °C/1 h (solution) + 550 °C/6 h (aging) shows a higher microhardness than the sample treated at 900 °C/1 h + 550 °C/6 h. This can be attributed to the fact that the sample heated to the higher solution temperature precipitates more fine acicular α phases during the cooling process. The microhardness of the heat-treated samples shows a decreasing trend with increasing deformation passes, which can be attributed to the extensive spheroidization of lamellar structures. A large number of β transformation structures generate at 950 °C/1 h + 550 °C/6 h, resulting in a microhardness significantly higher than that of 900 °C/1 h + 550 °C/6 h. A recent study concluded that grain interfaces greatly affect the microhardness, mainly by acting as strong barriers to dislocation transmission [ 50 ]. For TC4 alloys with different microstructure morphologies, including equiaxed α, lamellar α phases, and β transformation structures ( β t ), the grain interfaces mainly include α p / β , α l / β , and α s / β phase interfaces. Different microstructures lead to a variety of properties, and the contributions of these structures to the integrated hardness are shown in equations (3)–(5) .
(2) H i = H V × (9.8/1000) × sin(68°) (3) c αp = H αp V αp / H i (4) c βt = H βt V βt / H i (5) c αl = 1 − c αp − c βt , where c αp , c βt , and c αl are the contributions of α p , β t , and α l to the integrated hardness; H αp and H βt represent the macroscopic hardness of α p and β t ; and V αp and V βt stand for the contents of α p and β t . Fig. 13 summarizes the contributions of the microstructures of hot-compressed and subsequently heat-treated TC4 alloys to the integrated hardness. It can be observed from Fig. 13 a that α l contributes the most to the integrated hardness during hot compression, which can be attributed to the large volume fraction of lamellar α phases; β t takes second place and α p last. During hot compression, the contribution of α p decreases and that of α l increases with the rising accumulative strain, and the integrated hardness increases due to the strengthening of refined grains. After heat treatment, the contribution of α l drops sharply, while those of β t and α p rise, as shown in Fig. 13 b and c. This is mainly due to the increasing fraction of β t , in which many refined needle-like α phases play a strengthening role. At the treatment condition of 950 °C/1 h + 550 °C/6 h, the integrated hardness is mainly dominated by β t , then α p and α l . Increasing the solution temperature (approaching the phase transformation point) produces more β t in the matrix. 4 Conclusion In this study, the effects of multi-pass cooling deformation and different heat treatment processes on microstructural evolution are investigated, and the relationship between microstructures and mechanical properties is established. The following conclusions can be drawn: (1) The flow stress and deformed microstructures strongly depend on the deformation temperature and strain rate. DRV and DRX are the main softening mechanisms.
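Equations (2)–(5) can be checked numerically. In the sketch below, the 324 HV peak microhardness comes from the text; the hardness and volume-fraction inputs for the primary-α and β-transformed constituents are illustrative assumptions, used only to show that the three contributions sum to one by construction:

```python
import math

def integrated_hardness(hv):
    """Equation (2): H_i = H_V * 9.8/1000 * sin(68 deg)."""
    return hv * 9.8 / 1000 * math.sin(math.radians(68))

# Peak microhardness of 324 HV reported after the last compression pass
h_i = integrated_hardness(324)
print(round(h_i, 2))  # -> 2.94

# Equations (3)-(5): phase contributions. The hardness (GPa) and fraction
# values for alpha_p and beta_t below are illustrative assumptions.
H_ap, V_ap = 3.2, 0.30   # assumed hardness and content of alpha_p
H_bt, V_bt = 3.5, 0.25   # assumed hardness and content of beta_t
c_ap = H_ap * V_ap / h_i             # eq. (3)
c_bt = H_bt * V_bt / h_i             # eq. (4)
c_al = 1 - c_ap - c_bt               # eq. (5): lamellar alpha takes the rest
print(round(c_ap + c_bt + c_al, 6))  # -> 1.0
```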
During the hot compression, DRX grains generate at the intersections of grain boundaries after the first pass. The grain-boundary α phases gradually break after the second pass, and the DRX behavior transfers into the grain interiors. After the last pass, the volume fraction of DRX further improves, and the spheroidization of lamellar α phases occurs. (2) With increasing strain, lamellar α phases deflect from a bimodal distribution to a single-peak distribution perpendicular to the compression axis, accompanied by a reduction of lamellar α phases in length and an increment in thickness, facilitating the spheroidization of lamellar α phases. (3) During the heat treatment, the spheroidization fraction of lamellar α phases increases. The equiaxed α phases are mainly generated in the heat-treated samples after three compression passes. Meanwhile, the decreasing length and increasing thickness of lamellar α phases accompany the spheroidization. (4) The integrated hardness of hot-compressed TC4 alloys is mainly dominated by lamellar α phases. In contrast, in the heat treatment process, the alloy's hardness is gradually controlled by equiaxed α phases and β transformation structures. The β transformation structures contribute most to the integrated hardness at a higher solution temperature. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments The present research was supported by the National Natural Science Foundation of China (Grant No. 51675492 ). | [
"ZHAO",
"FAN",
"SUN",
"WANG",
"ZHAO",
"CUI",
"WEI",
"AMHERDHIDALGO",
"QIU",
"LIAO",
"LIU",
"GAO",
"BODUNRIN",
"ZHANG",
"JIANG",
"KAR",
"MANDAVI",
"SHAO",
"KIM",
"SESHACHARYULU",
"SESHACHARYULU",
"SEMIATIN",
"LUO",
"LEI",
"NEMATNASSER",
"LIU",
"ZHAO",
"WANG",
"... |
fb93abe0919648a29698e2947de7d9e8_A systematic review of the effects of housing support on social welfare outcomes in pregnancy and ea_10.1016_j.chipro.2024.100024.xml | A systematic review of the effects of housing support on social welfare outcomes in pregnancy and early childhood | [
"Brew, Kathleen",
"Heerde, Jessica",
"Price, Anna",
"McLean, Karen"
] | Background
Homelessness during pregnancy and early childhood is associated with poorer social welfare outcomes for birth parents and their children. For these clients, contact with the child protection system is common. In some cases, children are removed.
Objective
To systematically review published literature investigating the impact of housing support during pregnancy and early childhood on child protection outcomes.
Participants and setting
Provision of housing support for clients experiencing homelessness while pregnant or parenting young children (0–7 years) in high-income countries.
Methods
A systematic search of the CINAHL and SocIndex databases for peer-reviewed studies, with independent double-screening of retrieved studies and full-text review of eligible studies.
Findings
Of 793 records screened and 37 studies undergoing full-text review, two were eligible. Both were small, without a control group, and one was qualitative. There was no convincing evidence of impact, and meta-analysis was not possible. In post-hoc reviews, five additional studies met all criteria except reporting child protection outcomes. These studies' findings suggested that, compared with controls, housing interventions led to faster initial improvements in housing status and decreased alcohol use, as well as decreases in child internalising and externalising behaviours.
Conclusions
There are promising indications that housing interventions generate benefits for clients experiencing homelessness while pregnant or parenting young children. However, high-quality longitudinal studies with robust intervention designs are lacking, likely due to the challenges inherent in embedding evaluative research within such programs. Given the importance of pregnancy and early childhood for children's development, evaluations of existing housing support and policy implementation should be prioritised.
A wide variety of complex and inter-related individual, social and cultural factors are correlated with homelessness. These include problematic substance abuse, domestic and family violence, mental ill-health and poverty. The intersection and complexity of these factors means that designing interventions to address homelessness is challenging ( McVicar et al., 2015 ; Zhao, 2023 ). Gender, socio-economic status, race, and ethnicity are associated with risks and stigmas linked to homelessness, and can amplify and exacerbate individual experiences ( Belcher & DeForge, 2012 ; Piat et al., 2015 ). Across the homelessness literature from high-income countries, homelessness is associated with a range of poor health and social outcomes. These relate to educational attainment, employment, financial status, housing insecurity, engagement with institutional care, and obtaining health care ( AIHW, 2021 ; Andrade et al., 2020 ; Fazel et al., 2014 ; Hausman & Hammen, 1993 ). These challenges can minimise individuals' autonomy and their access to resources to support physical, social, and psychological health and well-being across the life-course ( Nickasch & Marnocha, 2009 ; Omerov et al., 2020 ). For example, higher rates of morbidity and mortality are evident for people experiencing homelessness compared to those who have stable housing ( Fazel et al., 2014 ). 1.1 Homelessness and inadequate prenatal care The United Nations (UN) Sustainable Development Goals aim to “leave no one behind” and call for gender equality and empowerment, including equal rights to economic resources, property and financial services ( UN General assembly, 2015 ). In UN member countries, the negative outcomes of pregnancy while experiencing homelessness are a compelling contradiction of these fundamental principles.
For example, studies originating from the US and Canada report that clients who are pregnant while experiencing homelessness have lower rates of health service engagement, and higher rates of morbidity and mortality, compared to the general population ( Arangua et al., 2005 ; Cheung & Hwang, 2004 ). In a US sample, Teruya et al. (2010) suggested that experiences of childhood victimisation, low self-esteem, and lack of resources may mediate the association between homelessness and these outcomes. Practical challenges related to research with groups experiencing marginalisation mean it is difficult to measure pregnancy and birth outcomes during periods of homelessness, and this remains under-investigated. In an early study of US women, Bassuk and Weinreb (1993) found that socioeconomic disparities leading to and resulting from homelessness were significantly greater for clients who were pregnant while experiencing homelessness compared to those who were not pregnant. Australian research has highlighted significant challenges in access to support services during pregnancy when experiencing homelessness ( Murray et al., 2018 ). Pregnant people experiencing homelessness in Australia and the US report multiple resource-related and systemic barriers to accessing prenatal care, including adverse prior therapeutic relationships, lack of transport, high costs and organisational and administrative restrictions ( Bloom et al., 2004 ; Gelberg et al., 2004 ; Theobald et al., 2023 ; Wood & Bogoias, 2022 ). As a consequence, those who are pregnant and experiencing homelessness often report poor healthcare engagement, limited continuity and coordination of prenatal care and increased risk of negative neonatal and maternal health outcomes ( Esen, 2017 ; Theobald et al., 2023 ). 1.2 Homelessness and early childhood Homelessness in pregnancy is especially significant when considering its intergenerational impacts ( Flatau et al., 2010 ).
The first thousand days from conception to two years are a crucial period, with multiple biological, environmental, and social factors influencing children's development, physiology, and subsequent health ( Moore et al., 2017 ). The importance of childhood is inherent to the UN Convention on the Rights of the Child, which champions the rights of children to healthy living and family environments ( UN General assembly, 1989 ). Infants born to individuals experiencing homelessness are more likely to be born pre-term, have low birth weight and spend time in intensive care than those born to stably housed clients ( Cutts et al., 2015 ; Little et al., 2005 ; Richards et al., 2011 ). A prior meta-analysis reported that pre-term babies (independent of homeless status) are more likely to experience poorer cognitive and neurodevelopmental outcomes, and higher morbidity and mortality ( Bhutta et al., 2002 ; Platt, 2014 ). Research originating from the US has indicated that children born to parents experiencing homelessness display higher rates of fair or poor physical health, developmental delays and lower levels of school readiness (including academic underachievement) compared to children whose birth parents were stably housed during pregnancy ( Husa et al., 2022 ; Sandel et al., 2018 ). 1.3 Homelessness and child protection involvement As described earlier, there is complexity in the nature of psychosocial and economic adversities that influence housing instability. Consequently, there are limitations to conceptualising an adequate framework for the relationship between homelessness and child protection involvement. Nonetheless, an emerging theoretical basis for this phenomenon is evident when considering prior research. Qualitative studies from Australia and the US describing the perspectives of female birth parents experiencing homelessness highlight the importance of housing in accessing social welfare services and retaining child custody ( Loxton et al., 2007 ; Smid et al., 2010 ).
Precarious housing, perceived health and safety hazards, unmet needs of children and turbulent relationships have been shown to precede child separations in homeless populations ( Barrow & Lawinski, 2009 ; Pelton, 1991 ). Intimate partner violence, mental ill-health and substance abuse, often reported by female birth parents experiencing homelessness, are also associated with child protection involvement ( Bassuk et al., 1998 ; Browne & Bassuk, 1997 ; Delfabbro et al., 2012 ). Homelessness is also associated with feelings of isolation, stigma and disempowerment which influence a person's capacity to provide care, support, and education for their children ( Andrade et al., 2020 ; Hausman & Hammen, 1993 ). Systemic biases and racism reinforce these experiences for all clients experiencing homelessness, but especially those from marginalised demographics ( Drake et al., 2011 ; Wells et al., 2009 ). Some studies have also suggested that surveillance bias plays a role, with higher rates of child protection notifications for clients involved with state services (who are also more likely to be from marginalised demographics), adding an additional layer of complexity ( Baughman et al., 2021 ; Edwards, 2019 ). The nature of child protection services varies across international settings. This can involve, but is not limited to, notification and investigation of suspected cases of maltreatment or abuse, implementation of care and protection orders, and removal of the child from the family/parent/guardian. Any level of involvement with child protection services indicates either possible or confirmed child maltreatment. It is difficult to quantify rates of child protection service involvement with families experiencing homelessness, due to the practical challenges of longitudinal data collection and the complexity of family circumstances. Child protection services are vastly different across international jurisdictions, which makes comparison of statistics difficult. 
Nonetheless, there is a broad range of extant research conducted in the US suggesting a significant association between homelessness and involvement with the child protection system ( Chandler et al., 2022 ; Font & Warren, 2013 ; Marcal, 2018 ). Recent US research has found that more than one-third of a sample of 59 birth parents experiencing homelessness consistently lacked child custody over the 2-year study period ( North et al., 2023 ). Other recent data showed that child protection involvement was 1.58 times more likely for children who had spent time in emergency or transitional housing than for the control group who had not ( Palmer et al., 2023 ). In Australia, 48% of children with a legal Care and Protection Order (i.e., those who were under the care of child protection services) between 2022 and 2023 were at risk of homelessness at the beginning of child protection involvement ( Australian Institute of Health and Welfare, 2023 ). Between 2011 and 2015, approximately 127 per 1000 children involved with homelessness service agencies were also involved with child protection services ( Australian Institute of Health and Welfare, 2016b ). In contrast, government data found that 28.6 per 1000 of all Australian children received child protection services in 2014–2015 ( Australian Institute of Health and Welfare, 2016a ). Research in Australia and the US suggests that children experience a range of adverse outcomes during and after involvement with the child protection system compared to those with no involvement. These include negative behavioural outcomes, criminal justice involvement, homelessness, mental ill-health, substance abuse, and poorer educational and employment outcomes ( Cashmore & Paxman, 1996 ; Courtney & Dworsky, 2006 ; Fernandez, 2014 ). Child abuse is also commonly reported by children involved with child protection.
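For a rough sense of scale, the two Australian rates quoted above can be compared directly. This is a minimal illustrative sketch of the arithmetic only: the two figures come from different AIHW collections and reporting periods, so the resulting ratio is indicative rather than an exact relative risk.

```python
# Crude comparison of the two Australian rates cited above.
# NOTE: illustrative arithmetic only; the rates come from different AIHW
# collections and periods, so this ratio is indicative, not a formal estimate.
rate_homelessness_services = 127 / 1000  # children involved with both homelessness and child protection services, 2011-2015
rate_all_children = 28.6 / 1000          # all Australian children receiving child protection services, 2014-15

rate_ratio = rate_homelessness_services / rate_all_children
print(f"rate ratio: {rate_ratio:.1f}")  # roughly 4.4 times the all-children rate
```

Even read loosely, the comparison suggests children in contact with homelessness services were several times more likely to also be in contact with child protection services than Australian children overall.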
For example, 66% of respondents in the 2017 Australian Royal Commission into Child Sexual Abuse reported being sexually abused in home-based care (which can include residential care by relatives, foster carers or in staffed facilities) ( Commonwealth of Australia, 2017 ). In the US, foster families have higher rates of physical abuse and maltreatment compared to non-foster families ( Benedict et al., 1994 ; Poertner et al., 1999 ). Children in foster care have also been found to have high rates of mental health problems ( Clausen et al., 1998 ). Early intervention to prevent child protection involvement for struggling families has been a focus of governments. A recent Australian report highlighted the importance of early interventions to reduce child protection involvement and improve social welfare outcomes for families ( Social Ventures Australia, 2020 ). This report informed the new “Safe and Supported” National Framework for Protecting Australia's Children 2021–2031, which includes early intervention as the first focus area ( Commonwealth of Australia, 2021b ). 1.4 Housing interventions Given the previously described research on the association between homelessness and child protection involvement, stable housing interventions in pregnancy or early childhood may hold promise in addressing this. In the US, a “Housing First” framework was first proposed in the 1990s for unsheltered adults with psychiatric illnesses. This approach positioned housing as a human right which should be offered immediately, alongside support services, to clients in need ( Tsemberis, 1999 ). The approach provided immediate permanent, independent housing, treatment, and support services. Evaluation results suggested improved rates of housing retention (compared to treatment as usual) when clients began with transitional and community housing ( Tsemberis, 1999 ).
Several subsequent studies have investigated the effectiveness of Housing First in populations experiencing homelessness. Three systematic reviews, one with a meta-analysis, reported strong evidence for the success of Housing First on housing retention among participants in the US, Canada, Sweden and France ( Baxter et al., 2019 ; O'Donnell et al., 2015 ; Woodhall-Melnik & Dunn, 2016 ). In one systematic review of economic evaluations in high-income countries, shelter, emergency department and overall cost-benefits of Housing First approaches were reported for participants when compared to treatment-as-usual controls ( Ly & Latimer, 2015 ). Existing reviews have noted a lack of research on Housing First for sub-populations of people experiencing homelessness, across different housing settings, and on whether initial outcomes are sustained over time. Research relating to early intervention and prevention programs in high-income countries has suggested improvements in housing outcomes can be attained, including those driven by support programs for children at risk of homelessness, where family reunification is supported ( Hoffman & Rosenheck, 2001 ; Toumbourou & Heerde, 2021 ). Other systematic reviews have found moderate or insufficient evidence for housing retention and improvements in health outcomes in individuals with mental health disorders or addiction while experiencing homelessness ( Kertesz et al., 2009 ; Rog et al., 2014 ). Globally, social policy in high-income countries such as Australia, the US and the UK frequently includes elements designed to reduce rates of homelessness, such as increasing the supply of affordable housing to those who are homeless or at high risk of homelessness, often through the adoption of Housing First approaches ( Assistant Secretary for Planning and Evaluation, 2022 ; Commonwealth of Australia, 2021a ; O'Connell, 2003 ; UK Government ).
Reflecting this, it is possible that the provision of housing interventions could improve outcomes for female birth parents and their children experiencing, or at high risk of, homelessness. Indeed, in the US, there is clear precedent arising from numerous prior studies of Supportive Housing for Families programs for families experiencing homelessness with children of any age. Many of these programs report significant decreases in child protection services contact or increased rates of family reunification over the duration of the program for intervention compared to control groups ( Boullion et al., 2021 ; Lenz-Rashid, 2017 ; Nolan et al., 2005 ; Pergamit et al., 2019 ; Tapper, 2010 ). One data linkage study in Minnesota, US found a decrease of 50% in out-of-home placements for the 183 children in the supportive housing group, compared to the control group ( Hong & Piescher, 2012 ). Child protection involvement overall decreased from approximately 9% to 1% in the intervention group, compared with an increase from 2% to 3% in the control. A study of 385 families participating in a high-needs housing program in Washington State, US found significantly higher rates of family reunification for Supportive Housing for Families clients compared to matched families in emergency shelters ( Rog et al., 2015 ). A 2017 study of 1857 families in the US found that child separation was almost halved and foster care placements were more than halved with permanent 30% subsidised housing ( Shinn et al., 2017 ), while rapid and temporary housing assistance was not significantly associated with differences in child protection involvement. In Australia, examples such as Launch Housing, Viv's Place and the Cornelia Program provide comprehensive housing and support services aimed at keeping families unified ( Murray et al., 2018 ; Hutchins, 2022 ). To the knowledge of the authors, these programs are yet to be formally evaluated.
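The Minnesota figures quoted above can be combined into a naive before/after contrast. This sketch uses only the approximate percentages reported in the text and is not how the study authors analysed their data:

```python
# Naive difference-in-differences on the approximate percentages quoted
# above for the Minnesota data-linkage study (illustration only; this is
# NOT the analysis performed by Hong & Piescher (2012)).
intervention_change = 1 - 9  # supportive housing group: ~9% -> ~1% (percentage points)
control_change = 3 - 2       # control group: ~2% -> ~3% (percentage points)

did = intervention_change - control_change
print(did)  # -9: roughly a nine percentage-point decrease relative to control
```

The point of the contrast is simply that the intervention group improved while the control group slightly worsened, so the gap between the groups is larger than the intervention group's own change.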
There has been one systematic review to date considering a population which is pregnant or parenting young children. In this review, Krahn et al. (2018) investigated long-term (at least one year) housing interventions for pregnant/parenting clients using substances and found improvements across domains including parent and child mental and physical health, support service engagement, parental perceived self-efficacy, and intimate partner violence. The review was, however, limited by variability in study design and quality, which constrained comparison. Despite existing literature suggesting considerable intersection between housing instability/homelessness, child development and social welfare outcomes (including child protection involvement), to the knowledge of the authors there has been no previous systematic review of the impact of housing interventions during pregnancy and early childhood on child protection outcomes, or on social welfare outcomes more broadly. 1.5 Aims of this study This study aimed to synthesise and critically evaluate existing scientific evidence on the impact of housing support interventions on child protection outcomes for pregnant clients experiencing homelessness. While the study focus was pregnancy, studies of populations with young children were also included to provide practically relevant results (“pregnant/parenting”). Due to limited literature that included child protection outcomes (see Results), we formed a subsidiary aim (post-hoc) to investigate the impact of housing support on a broader range of pre-specified social welfare outcomes (education, employment, finances, housing, social support and healthcare services). 2 Methods 2.1 Search strategy A systematic review search strategy was specified using a Population, Intervention, Comparison, Outcomes approach ( Richardson et al., 1995 ) in consultation with a specialist research librarian at a major tertiary hospital in Melbourne, Australia.
Boolean operators combined groups of related terms for “pregnancy” AND “homelessness” AND (“intervention” OR health and social outcomes). Related terms within each group were linked with OR, to include papers with either the intervention or an outcome of interest. Terms for multiple socio-economic and socio-political outcomes associated with child protection involvement were included: education, employment, finances, housing, social support and physical and mental healthcare services. This was to ensure that all potentially relevant studies, including those reporting the specific outcome of child protection, were identified. See Appendix A for the full search strategy. Following the librarian's recommendation, searches were conducted in the CINAHL and SocINDEX databases, as these were deemed the most suitable for the social welfare and pregnancy focus. Searches were restricted to articles published in the English language with publication dates from 2000 to June 2023. Retrieved studies were restricted to those conducted in high-income countries, given the existing research informing this review was largely conducted in this setting. This also ensures that results are applicable to an Australian context, given the authors are based there. Conference abstracts, case reports, commentaries, editorials, guidelines, letters, practice guidelines and preprint studies were excluded. Table 1 presents the study inclusion and exclusion criteria. The review protocol is registered with the International Prospective Register of Systematic Reviews (PROSPERO; CRD42022344119). The lead author screened all articles, followed by a second independent, blinded screen shared between two other authors. Conflicts were resolved by consensus with a third author, to reduce risk of bias. 2.2 Data extraction and quality assessment Data extraction was performed independently by the lead author, with second extraction of included studies by the fourth author.
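The AND/OR structure described above can be sketched programmatically. The term lists below are abbreviated, hypothetical examples for illustration only; the actual strategy used in the review is the one given in Appendix A:

```python
# Illustrative sketch of the Boolean structure described above.
# The term lists are abbreviated, hypothetical examples, NOT the review's
# actual search strategy (see Appendix A for that).

def or_block(terms):
    """Link related terms within one concept group with OR."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

pregnancy = ["pregnan*", "prenatal", "perinatal"]
homelessness = ["homeless*", "housing instability", "unsheltered"]
intervention_or_outcome = ["housing support", "child protection", "child welfare"]

# Concept groups are then linked with AND, so a record must match at least
# one term from every group to be retrieved.
query = " AND ".join(
    or_block(group) for group in (pregnancy, homelessness, intervention_or_outcome)
)
print(query)
```

Linking within-group synonyms with OR widens recall for each concept, while joining the groups with AND keeps the retrieved set to records touching all three concepts at once.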
Subsequently, quality assessment was performed by the lead author in consultation with the co-authors, using the Cochrane criteria for Randomised Controlled Trials (RCTs), the Consolidated criteria for reporting qualitative research (COREQ) and the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines for observational studies ( Higgins et al., 2011 ; Tong et al., 2007 ; von Elm et al., 2007 ). Due to insufficient data contained in the reviewed studies, a meta-analysis was not possible. 3 Results Fig. 1 presents the flow diagram of study selection, as per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement ( Page et al., 2021 ). The search yielded 925 studies. An additional two studies were identified from citations of retrieved studies during the review process. All study citations were imported into systematic review management software (Covidence). Duplicates (134) were removed within Covidence, and citations were imported into reference management software (Endnote). Given the broad nature of the search strategy, 756 of the 793 abstracts were excluded during title and abstract screening. Full-text review was conducted for 37 articles against the inclusion and exclusion criteria (see Table 1 ). Tables 2 and 3 present summary details of the reviewed studies. 3.1 Eligible studies (child protection outcomes) Two studies met all inclusion criteria ( Kuskoff et al., 2022 ; Seymour et al., 2020 ). As described in Fig. 1 , 10 papers were not peer reviewed, nine papers did not include a housing intervention, four papers did not include the population of interest, and 11 papers did not include child protection outcomes. Given the lack of eligible published studies, the inclusion criteria were broadened in a subsidiary (post-hoc) analysis to the pre-defined social welfare outcomes of education, employment, finances, housing, social support and mental and physical healthcare service engagement.
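The screening counts reported above are internally consistent, which can be verified with simple arithmetic. A sketch using only the numbers stated in the Results text:

```python
# Cross-check of the PRISMA screening flow described above, using only
# the counts stated in the Results text (illustrative bookkeeping only).
database_hits = 925   # records retrieved from CINAHL and SocINDEX
citation_hits = 2     # additional records identified from citation searching
duplicates = 134      # duplicates removed within Covidence

screened = database_hits + citation_hits - duplicates
assert screened == 793  # abstracts that entered title/abstract screening

excluded_at_screening = 756
full_text = screened - excluded_at_screening
assert full_text == 37  # articles taken through to full-text review

print(screened, full_text)
```

The same bookkeeping is what a PRISMA flow diagram encodes visually: each stage's count equals the previous stage minus the exclusions recorded for it.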
Five of the 11 papers that did not report child protection outcomes met all of these broader inclusion criteria and were eligible for full-text review. As these papers reported on additional outcomes such as child behaviours, mental wellbeing, substance use, intimate partner violence and physical health, we describe these in the synthesis below. The remaining six studies that did not report on any of the pre-defined social welfare outcomes were excluded. 3.2 Child protection outcomes The first included paper reported results from a cohort study of housing and child protection outcomes for 44 female birth parents and their families in the first two years of an intensive, integrated residential and social support centre located in New South Wales, Australia ( Seymour et al., 2020 ). The centre offered up to thirty-six months of housing, parenting and life skills training, psychological support, and health care. Of the 44 female birth parents in the sample, two had children removed to state care while living at the centre (see Table 2 ). A footnote to the main study findings reported that five additional families had children removed in the six weeks following the completion of data analysis. The second paper, by Kuskoff et al. (2022) , evaluated the government-funded Supportive Housing for Families program (SHF; housing and parenting support and additional brokerage funds), located in Brisbane, Australia. It involved one-off semi-structured, qualitative interviews with 34 participants (families, support workers or case support officers) of the program, which provided rental subsidies and intensive wraparound services. The study did not include data to quantify any change in rates of involvement with child protection services. Despite this, results suggested that parents consistently prioritised their children's wellbeing, and that the biggest barrier to meeting their needs was homelessness.
In this study, homelessness was described as a key driver of child protection involvement, unrelated to any risk posed by parents' behaviour. The authors suggested that by providing housing stability and assisting mothers with their parenting practices, SHF reduced risks leading to child protection involvement and facilitated disengagement from child protection services. 3.3 Subsidiary studies Table 3 summarises the five reviewed studies that reported on subsidiary social welfare outcomes (education, employment, finances, housing, social support, and healthcare services). All studies were conducted in the US. The earliest was a non-randomised pilot of Ecologically Based Treatment (EBT) for 15 female birth parents of young children with substance use disorders ( Slesnick & Erdem, 2012 ). The program provided utility and rental assistance, furniture, appliances, case management and substance abuse interventions. A subsequent RCT of the program involved 60 female birth parents, with two papers reporting results ( Guo et al., 2016 ; Slesnick & Erdem, 2013 ). A recent RCT published by Slesnick et al. (2023) involved 240 female birth parents with substance use disorder and their young children. As expected, our search strategy found the prior systematic review (the fifth paper) on Housing First programs for pregnant/parenting female birth parents experiencing homelessness who used substances ( Krahn et al., 2018 ). 3.4 Subsidiary social welfare outcomes In addition to social welfare outcomes, findings relating to outcomes not included within the scope of this review will now be described. The EBT pilot results reported by Slesnick and Erdem (2012) showed apparent decreases in maternal substance use and homelessness. Improvements in mental wellbeing and reduced display of internalising and externalising behaviours in children were also reported.
The subsequent RCT found a more rapid initial improvement in treatment compared to control groups for decreased homelessness and reduced alcohol use ( Slesnick & Erdem, 2013 ). The analysis by Guo et al. (2016) of data from this first RCT reported consistent reductions in children's displays of internalising and externalising behaviours in the intervention compared to control group. Improvements in reduced homelessness, decreased alcohol and drug use, improved mental health and reduced reports of intimate partner violence across treatment and control groups were evidenced at the final 9-month follow-up ( Guo et al., 2016 ; Slesnick & Erdem, 2013 ). The study reported no improvements to parent's self-reported physical health, self-efficacy, parenting stress or unemployment across either the intervention or control groups. The most recent RCT by Slesnick et al. (2023) found participants receiving both housing and supportive services reported decreased use of substances (and lower levels of substance use) and improved self-efficacy, compared to participants in the housing-only or control groups. The systematic review by Krahn et al. (2018) examined pregnant females experiencing homelessness who were using substances. The authors included several of the same papers reviewed in the current study but did not consider child protection outcomes. Krahn and colleagues reported that the reviewed studies were of low quality, making the comparison and generalisability of results difficult, and warranting further research. 4 Discussion This systematic review sought to describe the impact of housing support on child protection outcomes for pregnant/parenting female birth parents experiencing homelessness. While eleven studies met the participant inclusion criteria, only two examined child protection outcomes. There was no convincing evidence for impacts, and meta-analysis was not possible. 
The remaining studies examined broader social welfare outcomes that are also likely to impact family functioning. Across the reviewed studies, findings suggested that the provision of housing support resulted in improvements in housing stability, reduced alcohol and other substance use, and improvements in children's display of internalising and externalising behaviours ( Guo et al., 2016 ; Slesnick & Erdem, 2013 ). When combined with support services, participants were more likely to report improvements in self-efficacy, and maintained lower or reduced substance use, than housing-only or control participants ( Slesnick et al., 2023 ). 4.1 Child protection outcomes Given the ethical and practical challenges of conducting research with people experiencing marginalisation, including homelessness, estimation of rates of child removal by child protection services is challenging and figures vary across studies. One report involving 2282 families in the US showed that 38% experienced child removal before or during a shelter stay ( Walton et al., 2018 ). Another longitudinal US study of 59 female birth parents experiencing homelessness found that more than one-third consistently lacked child custody over the two-year period ( North et al., 2023 ). While this suggests improvements to child protection outcomes may result from direct housing support, variation in study quality and sample representativeness means the results of prior studies cannot be interpreted or generalised with confidence. Furthermore, we were unable to locate any reliable data describing rates of child protection involvement amongst clients experiencing homelessness in Australia. It is therefore difficult to make direct comparisons to the Australian study by Seymour et al. (2020) , which found removal of children from 4.5% of female birth parents. The second reviewed study qualitatively described supportive housing as a protective factor in addressing child protection concerns.
This is because it enabled stability and addressed the biggest barrier (homelessness) to parents meeting their children's needs and preventing their removal ( Kuskoff et al., 2022 ). A commissioned report (non-peer reviewed) outlining findings from an earlier pilot of this program stated that 82% of families had previous or current involvement with child protection services ( Kuskoff et al., 2021 ). While the subsequent peer-reviewed study included in this review does not offer quantitative data, it provides valuable insight into clients’ priorities and abilities when provided with adequate resources ( Kuskoff et al., 2022 ). 4.2 Subsidiary outcomes When considering the impact of pre-birth homelessness on neonatal and child health, developmental and educational outcomes, the practical significance of providing housing support to pregnant clients and parents of young children is clear ( Caldwell et al., 2003 ; Husa et al., 2022 ; Richards et al., 2011 ). The observed improvement in child behaviours in the treatment compared to the control group in one RCT may suggest that the benefits of housing support extend beyond the maternal outcomes investigated in the subsidiary studies ( Guo et al., 2016 ). Importantly, housing instability has been associated with separation of children from families; thus, interventions addressing housing instability sooner rather than later, ideally with a preventative approach, may increase the likelihood of preserving family unity ( Barrow & Lawinski, 2009 ). Given the vulnerable and formative period of the first 1000 days, the speed with which risk factors are addressed is likely to be important. The long-term impacts of prompt housing support for children across multiple domains warrant further investigation. All reviewed studies reported improvements in one or more of the investigated social welfare outcomes. These included homelessness, substance use, mental wellbeing, child behaviour, intimate partner violence and physical health.
Improvements in these outcomes may be partially explained by extensive public welfare services and social policy in the US settings where these studies were conducted ( de Sousa et al., 2022 ; State of Homelessness, 2023 ). This does not, however, explain disparities in outcomes for pregnant/parenting clients experiencing homelessness prior to intervention, nor speak to child removal in this population. Furthermore, while the studies reporting results of RCTs showed treatment and control groups tended towards comparable outcomes at 9 months, the impact of housing support on longer-term outcomes was not analysed ( Slesnick & Erdem, 2013 ; Slesnick et al., 2023 ). 4.3 Limitations The scope of this systematic review was restricted to studies published between 2000 and 2023 in the English language and conducted in high-income countries. Given all subsidiary studies were conducted in the US, results are difficult to generalise to other high-income countries internationally, especially given differences in social welfare infrastructure and public health policy across international contexts. These parameters also limit the generalisability of this review to population groups in non-English speaking, high-income countries. No study required participants to be pregnant; hence, the results may be less applicable to pregnant sub-populations. Furthermore, the review focussed on examining social welfare outcomes. Other important outcomes such as health, substance use, wellbeing and housing were not explored in depth and present opportunities for future study. The scope of this review also did not include substantial consideration of a number of potentially relevant conceptual frameworks and themes, such as the Sustainable Development Goals, ecological treatment, intersectionality and the social determinants of health ( UN, 2024 ; WHO, 2024 ). 
Only one reviewed study controlled for ethnicity, and none examined the potential influence of race in relation to either homelessness or child protection involvement ( Slesnick et al., 2023 ). Consideration of these and other intersectional perspectives warrants further study. 4.3.1 Included study quality The first included study was a low-quality case report of a small sample, with no control group, limiting interpretation of reported rates of child protection involvement ( Seymour et al., 2020 ). The study did not report baseline outcome measurements, and had a short follow-up, making the reported rates of child removal and long-term child protection outcomes difficult to interpret. Furthermore, all study authors were affiliated with the organisation funding the study/centre, raising the risk of bias. The second, qualitative study also comprised a small sample and analysed only a single interview with each participant ( Kuskoff et al., 2022 ). This analysis comprised descriptive and analytical coding and thematic grouping of transcripts. Despite this, the lack of longitudinal or quantitative data means the perspectives depicted are of limited value in informing long-term, evidence-based policies. 4.3.2 Subsidiary study quality Several limitations to the subsidiary studies are noted which, compounded by variability in measurement tools and reported outcomes, limit comparison and generalisability. All studies recruited participants from a single setting (be it shelter, housing service or community-based service) and the longest follow-up period was 9 months. Furthermore, the pilot and earliest RCT of EBT had sample sizes of 15 and 60, respectively ( Slesnick & Erdem, 2012 , 2013 ). The short follow-up periods and small sample sizes limit the generalisability of results to broader populations and understanding of long-term outcomes. 
The pilot and earlier RCT of EBT objectively measured substance use through urine sampling, although the more recent RCT of EBT did not use biological samples ( Guo et al., 2016 ; Slesnick & Erdem, 2012 , 2013 ; Slesnick et al., 2023 ). Despite this, these studies all used validated self-report measurement tools with high validity and test-retest reliability. Statistical analysis across these studies included chi-square (χ²) tests and tests of significance (such as effect sizes and reporting of p values). The most recent RCT by Slesnick et al. (2023) used latent class analysis to interpret the data. To account for missing data from two women in the first RCT, hierarchical linear modelling was used to enable analysis of all usable data, given there were two or more data points per case ( Guo et al., 2016 ; Slesnick & Erdem, 2013 ). Slesnick et al. (2023) accounted for missing data in their analyses using Little's MCAR test and full information maximum likelihood to reduce bias from attrition. χ² tests in all RCTs indicated sufficient randomisation of groups. 4.4 Implications Existing research has suggested there is a high prevalence of child separation from families in populations experiencing homelessness ( AIHW, 2021 ; Chandler et al., 2022 ; Marcal, 2018 ). Despite this, there remains scant investigation of child protection outcomes among pregnant/parenting clients receiving housing support. Given the promising results reported in the subsidiary studies regarding social welfare outcomes, evaluation of a Housing First approach for pregnant/parenting clients presents an opportunity for future research to specifically include child protection outcomes. In Australia for example, the federal and some state governments have committed to substantially increasing the availability of low-cost and social housing ( Commonwealth of Australia, 2023 ; Victorian Government, 2023 ). 
Planning for the evaluation of such large-scale policy implementation from the outset will enable opportunities to develop robust, randomised and ethical evidence regarding the intergenerational benefits of stable housing for pregnant people and their children ( Venkataramani et al., 2020 ). Longer follow-up of health and wellbeing outcomes, for larger sample sizes of children, following homelessness in-utero and during early childhood is also warranted. Rates of returning to homelessness services at 4.5 years following both permanent and non-permanent housing interventions in the US have been reported as 9.5% and 16.9% respectively ( Brown et al., 2017 ). In general, homelessness recurrence after a single episode has been quantified at 23.7% ( McQuistion et al., 2014 ). Despite this, the longest follow-up period of the studies was just two years. Given the ongoing consequences of homelessness in pregnancy and early childhood, longer-term research is warranted to evaluate the longevity of reported benefits. Given no studies required participants to be pregnant or in the perinatal period, further opportunities exist for research focusing on this time. Studies isolating pregnant from early childhood populations could inform evidence-based provision of tailored interventions for these sub-populations. The aforementioned significance of the first 1000 days warrants specific consideration of the unique health, emotional and social needs and consequences of pregnancy. The lack of published studies reflects the substantial challenges of conducting high-quality research, particularly RCTs, with groups experiencing social marginalisation. Considering the adverse maternal and child outcomes associated with homelessness during pregnancy and childhood, randomisation of participants to housing intervention or control groups presents an ethical quandary. 
Furthermore, social and practical difficulties and a lack of research to guide program evaluation for populations experiencing homelessness add to the complexity of conducting research in this field ( Winship, 2001 ). Many existing programs also do not receive funding for sufficient implementation support nor rigorous outcome evaluation ( Winship, 2001 ). To address some of these challenges, the aforementioned Cornelia program has engaged independent academic researchers to ensure rigorous evaluation of qualitative and quantitative outcomes during program implementation ( Lynch, Manning, & Coutts, 2022 ). Linkage of large, multi-sectoral administrative data has been presented as a means of addressing some of the limitations relating to research with marginalised population groups ( Heerde et al., 2023 ). It makes sense, economically, practically, and ethically, to utilise and integrate existing data for research to adequately describe the impact of interventions on child protection outcomes and reduce burden on participants. This would also offer an opportunity to identify potential intervention targets which address broader societal factors leading to and perpetuating homelessness. Future program evaluations should also adjust for the effect of marginalities including but not limited to race, gender identity, and sexual identity. There are already precedents for evaluating social interventions which are challenging to test using traditional randomised controlled trials. In Minnesota, US, Hong and Piescher (2012) performed linkage of Government education and social services data. This enabled 3-year longitudinal analysis of educational and child welfare outcomes for 183 children receiving supportive housing services in comparison to a control group experiencing homelessness. 
Another example is the Target Trial approach, involving data from large observational databases, such as government bodies, followed by identification and correction of potential bias in study design and analysis to emulate an RCT ( Hernán & Robins, 2016 ). 5 Conclusion Literature investigating the impact of housing support on child protection outcomes for pregnant/parenting clients experiencing homelessness is lacking. A handful of studies of mixed quality indicate that housing interventions for this priority population result in faster initial improvements in housing stability and reduced substance use. However, a lack of studies analysing high-quality longitudinal data limits the current research base. Given the challenges of conducting research with people experiencing homelessness, evaluations of existing housing programs, novel trials of large-scale housing policy implementation, and analysis of linked health, homelessness and administrative data should be prioritised. Funding/support Research at the Murdoch Children's Research Institute (MCRI) is supported by the Victorian Government's Operational Infrastructure Support Program . JH is supported by a National Health and Medical Research Council Emerging Leadership Investigator Grant ( GNT2007722 ). JH also holds a Dame Kate Campbell Fellowship awarded by the Faculty of Medicine, Dentistry and Health Sciences at The University of Melbourne . AP is supported by The Erdi Foundation Child Health Equity (COVID-19) scholarship . CRediT authorship contribution statement Kathleen Brew: Data curation, Formal analysis, Methodology, Writing – original draft, Writing – review & editing. Jessica Heerde: Conceptualization, Formal analysis, Methodology, Writing – review & editing. Anna Price: Conceptualization, Formal analysis, Methodology, Supervision, Writing – review & editing. Karen McLean: Conceptualization, Formal analysis, Methodology, Supervision, Writing – review & editing. 
Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Appendix A Supplementary data The following is the Supplementary data to this article: Multimedia component 1. Supplementary data to this article can be found online at https://doi.org/10.1016/j.chipro.2024.100024 . | [
"AMORE",
"ANDRADE",
"ARANGUA",
"BARROW",
"BASSUK",
"BASSUK",
"BAUGHMAN",
"BAXTER",
"BELCHER",
"BENEDICT",
"BHUTTA",
"BLOOM",
"BOULLION",
"BROWN",
"BROWNE",
"CALDWELL",
"CASHMORE",
"CHANDLER",
"CHEUNG",
"CLAUSEN",
"COURTNEY",
"CUTTS",
"DESOUSA",
"DELFABBRO",
"DRAKE",
... |
ca9fb8d99feb4c0c8c594f003d4218c0_Large loop-coupling enhancement of a heavy pseudoscalar from a light dark sector_10.1016_j.nuclphysb.2017.02.001.xml | Large loop-coupling enhancement of a heavy pseudoscalar from a light dark sector | [
"Di Chiara, Stefano",
"Hektor, Andi",
"Kannike, Kristjan",
"Marzola, Luca",
"Raidal, Martti"
] | The small background and the sensitivity to charged particles via a leading order loop coupling make the diphoton channel a privileged experimental test for new physics models. We propose a simple archetypal scenario to generate a sharp di-photon resonance as a result of threshold enhancements in the effective coupling between a heavy pseudoscalar particle and new vector-like leptons. We therefore study three different scenarios consistent with the current experimental limits and deviating from the Standard Model at the 2 σ level. The model also introduces a natural dark matter candidate able to match the observed dark matter abundance and comfortably respect the current direct detection constraints. | 1 Introduction Last year both the ATLAS and CMS experiments of the Large Hadron Collider (LHC) at CERN reported the presence of an excess in the diphoton channel, peaked at a centre-of-mass energy of about 750 GeV [1,2]. The signal appeared with a statistical significance of about 2.6 σ in the data gathered by the CMS detector, while the ATLAS collaboration measured a 3.6 σ excess. The two indications, being compatible within the limits of the resolutions of the detectors, triggered an incredible number of works elaborating on the possible beyond-the-Standard-Model origin of the excess. (Footnote 1: A comprehensive collection of the papers on the topic is presented in [3].) In March 2016, both the ATLAS and CMS collaborations updated their analysis with the new data collected at 13 TeV. The observations, based on an integrated luminosity of 3.3 fb⁻¹, hinted once again at the particle interpretation of the signal, with an excess reaching a local significance of 3.4 σ and 3.9 σ in the data of CMS and ATLAS, respectively [4–6]. Disappointingly, though, the significance of the purported signal was seriously impaired in a later analysis of larger datasets by the ATLAS and CMS groups, respectively relying on 15.4 and 12.9 fb⁻¹ of acquired data [7,8]. 
In spite of the fate of the resonance, the episode still exemplifies the potential of the diphoton final state: such a channel is indeed an important tool for searches of heavy neutral spin-zero resonances owing to the smallness of the involved background and the sensitivity to new physics via a leading order one loop contribution. In this regard, we remark that even the 125 GeV Higgs boson was first signalled by a resonance in the same diphoton channel. Furthermore, the latest LHC data present deviations at the one to two σ level associated to energies larger or equal to about 750 GeV and smaller or equal to about a TeV, which might be confirmed in forthcoming analyses. We thus find it compelling to study how potential signals of diphoton resonances are entwined with otherwise unobserved new physics states which, within particle models that are well defined up to very high energies, provide their effective coupling to the photons. According to the Landau–Yang theorem, a resonance in the diphoton channel can originate only from a particle of either spin zero or two. In the remainder of the paper we assume that the speculated particle has spin zero and investigate a way to produce a signal large enough for the corresponding resonance to be possibly discoverable in a dataset of the order of hundreds of fb⁻¹ (Footnote 2: The integrated luminosity projected to be delivered by LHC at the end of Run II is about 200 fb⁻¹.), but still consistent with the current experimental limits. As it was the case for the 750 GeV excess, we require the absence of indications in complementary channels such as the di-jet, the t t̄, the di-boson and the di-lepton ones. Interestingly, this condition necessarily forces the introduction of additional charged and colored particles characterized by large multiplicities and/or large couplings to the new resonance. 
However, assuming a diphoton cross section of O(fb), the running of these couplings seems to drive the model to a non-perturbative regime already at scales as low as a few TeV [9,10]. While this fact apparently favors frameworks supporting the compositeness of the hypothetical spin-zero particle, we aim to demonstrate that a perturbative scenario based on a fundamental spin-0 field is still attainable. In the following we study three cases of fundamental spin-0 resonance characterized by masses of 330 GeV, 720 GeV, and 1000 GeV, respectively, and show that a large excess in diphoton events can be comfortably reproduced within a simple extension of the Standard Model (SM) which retains perturbativity up to scales at least as large as O(10¹⁰) GeV and up to the Planck scale in the best case of the three scenarios. In the framework we consider in this study, the high mass diphoton resonance originates from a new pseudoscalar particle that couples to a vector-like (VL) charged lepton, characterized by a mass not far from half the mass of the new scalar resonance, and to two VL neutrinos. The pseudoscalar production at the LHC is allowed by the coupling to a heavier VL top quark that mediates the gluon fusion process. Our goal is to demonstrate that, in this setup, the new particle content needs only modest Yukawa couplings to generate the large diphoton cross section initially observed at the LHC. Interestingly, in our analysis we also find that the lighter VL neutrino is a viable dark matter (DM) candidate and that, in the same parameter space that yields a sizeable resonant signal, this particle gives rise to a DM abundance in the ballpark of the measured one. The scenario also complies with the present direct detection constraints. The paper is organized as follows: in Section 2 we review the motivation that led to the choice of our framework, which is detailed in Section 3 . 
The relevant LHC phenomenology is discussed in Section 4 whereas in Section 5 we show that the proposed VL neutrino is a viable DM candidate. We gather our conclusions in Section 6 . 2 Threshold enhancement of the diphoton decay rate As a prototypal case we study here a possible LHC diphoton signal with 720 GeV invariant mass. For the relevant centre-of-mass energies of 8 and 13 TeV, the largest contribution to the production cross section is given by the gluon–gluon fusion process. Hence, we introduce additional colored and charged particles to provide the effective coupling between the SM gauge bosons and the speculated 720 GeV particle via loops. The contributions of the new states to the diphoton decay width are inversely proportional to the square of the masses of the new particles. Since any new charged and colored vector bosons must be much heavier than 720 GeV because of the present experimental bounds, it follows that the contribution of new spin-1 particles to the effective coupling is necessarily suppressed. As for the remaining possibilities, the partial amplitude mediated by a heavy fermion loop is four times as large as that mediated by a heavy scalar, and therefore fermions are more suitable in generating a large diphoton cross section. In the present scenario we then consider VL fermions f (whose mass terms, contrarily to those of chiral fermions, are gauge invariant) characterized by a charge e_f, in units of the positron charge, and N_f colors. The contribution to the diphoton decay width of a scalar H or of a pseudoscalar A of these VL fermions is then quantified by [11,12] (1) Γ_{S→γγ} = (α_e² m_S³)/(256 π³ v_w²) |Σ_f a_f N_f e_f² F_f^S|², S ∈ {A, H}, where (2) a_f = y_f v_w/(2 m_f), τ_f = 4 m_f²/m_S², and (3) F_f^H = −2 τ_f [1 + (1 − τ_f) f(τ_f)], F_f^A = −2 τ_f f(τ_f). The pseudoscalar partial amplitude is affected by a discontinuity which originates from threshold effects. 
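The threshold enhancement encoded in Eq. (3) can be checked numerically. The sketch below assumes the standard loop function f(τ) = arcsin²(1/√τ) for τ ≥ 1, which the text leaves implicit, and evaluates the partial amplitudes at the threshold τ_f = 1:

```python
import math

def f(tau):
    # standard loop function for tau = 4 m_f^2 / m_S^2 >= 1 (fermion above threshold)
    return math.asin(1.0 / math.sqrt(tau)) ** 2

def F_H(tau):
    # scalar partial amplitude of Eq. (3)
    return -2.0 * tau * (1.0 + (1.0 - tau) * f(tau))

def F_A(tau):
    # pseudoscalar partial amplitude of Eq. (3)
    return -2.0 * tau * f(tau)

# At threshold (m_f = m_A/2, tau = 1): |F_A| = pi^2/2 while |F_H| = 2, so the
# pseudoscalar amplitude is enhanced by pi^2/4 ~ 2.47, matching the "about 2.5
# times smaller" Yukawa coupling quoted for Fig. 1.
print(round(abs(F_A(1.0)) / abs(F_H(1.0)), 2))
```

In the heavy-fermion limit the same functions reproduce the familiar asymptotic values F_f^H → −4/3 and F_f^A → −2.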
We avoid this discontinuity in our computation by setting F_f^A = π²/2 at the threshold: (4) lim_{m_f→(m_A/2)^±} F_f^A = π²/2 = F_f^A|_{m_f = m_A/2}. In the left panel of Fig. 1 we show the diphoton decay rates of a 720 GeV pseudoscalar particle A , via a VL fermion loop, normalized to the corresponding quantity for a scalar H . As one can see, the large enhancement of the loop coupling of A to photons for m_f = m_A/2 allows the pseudoscalar decay width to match the scalar one through a Yukawa coupling between A and the VL fermion that is about 2.5 times smaller than that of H . In the right-hand side panel of Fig. 1 , we furthermore show how the partial amplitude for the diphoton decay of H depends on m_f. The normalization here is given by the corresponding quantity computed for m_f = 700 GeV, which approximately matches the current lower bound on the mass of heavy VL quarks [13] . We then conclude that, to achieve a given decay rate, the Yukawa coupling of a 720 GeV pseudoscalar to a 365 GeV charged VL lepton must be about 3.5 times smaller than that required by a scalar which couples to photons via a 700 GeV VL quark loop. Motivated by the observations above, we identify the speculated 720 GeV particle with a CP-odd scalar A that couples only to VL fermions. The relevant diphoton cross section, in the narrow width approximation, is then given by (5) σ(pp → A → γγ) = σ_{pp→H}^{SM} (Γ_gg/Γ_gg^{SM}) (Γ_γγ/Γ_tot), where the SM labeled quantities, respectively the LHC cross section at 13 TeV for the production via digluon fusion of a SM Higgs boson of mass m_A and the decay rate of the same boson to digluon, are tabulated in [14–16] . The decay rate of the pseudoscalar A to digluons is defined by [11,12] (6) Γ_{A→gg} = (α_s² m_A³)/(128 π³ v_w²) |Σ_f a_f F_f^A|², with a_f as in Eq. (1) and f = T, while Γ_tot is the total decay rate of A . The cross section in Eq. 
(5) is maximized for Γ_tot ∼ Γ_gg, which in the present scenario holds as long as the Yukawa couplings of the VL quarks are comparable to those of the charged VL leptons. The lower limit for heavy copies of SM quarks decaying to a top or bottom quark and a W±, or a SM Higgs, ranges from 705 GeV to 846 GeV [13,17] . These can be relaxed to 690 GeV in case of decays to light quarks [18] . Charged heavy leptons, depending on their charge, must instead be heavier than at least 400 GeV [19] . This limit is relaxed to 104 GeV for charged particles of an SU(2)_L doublet which decays to a nearly degenerate neutral component [20] . The latter, being weakly interacting and stable, is a viable DM candidate. Motivated by this intriguing example, in the next section we introduce a model that captures and generalizes the features of the diphoton resonance we discussed above. 3 The model In order to model potential signals of resonances appearing exclusively in the diphoton channel, we extend the SM particle content with a VL lepton EW doublet and singlet and a VL top quark, which by construction do not contribute to the anomaly diagrams of the SU(3)_c × SU(2)_L × U(1)_Y gauge group. To avoid lepton-number violating processes we also impose a Z₂ symmetry at the Lagrangian level. In particular, we assume that all the VL fermions are odd under the Z₂, while the SM leptons and 3rd generation quarks, as well as the Higgs boson and the pseudoscalar A , are even. The first two generation quarks are also taken odd under the discrete symmetry so that the VL quark is allowed to decay into these states via a small Yukawa coupling [21] . Finally, we assume the Lagrangian to respect CP symmetry, which forbids linear and cubic terms in the A potential: (7) L ⊃ [y_NL L̄_L H̃ N′_R + y_NR L̄_R H̃ N′_L + H.c.] − i y_L A L̄ γ₅ L − i y_N A N̄′ γ₅ N′ − i y_T A T̄ γ₅ T + m_L L̄ L + m_N N̄′ N′ + m_T T̄ T − m_A² A² − λ_A A⁴ − λ_AH A² |H|². 
Here we take a positive portal coupling, λ_AH > 0, that prevents A from acquiring a CP-violating vacuum expectation value, and we have omitted the small Yukawa couplings of the VL quark to the light quarks. We remark that VL fermions coupled to heavy spin-zero resonances appear in a class of string-inspired supersymmetric models [22–25] , of which our framework possibly is an effective low energy limit. In Appendix A we present the masses and pseudoscalar couplings of the VL neutral leptons. The mass assignments for our VL fermions allow the charged component E to decay into the lighter VL neutrino via the process E → N₁ + W±* → N₁ + l± ν_l. A small mass splitting between the charged and the neutral VL leptons fulfills the requirement in [20] and respects the constraint from the T parameter [26,27] . As for the VL quark, which in our scheme eventually decays to light quarks, we take the T mass to be 700 GeV. This value guarantees that the experimental constraints from direct searches [18] are satisfied. 4 Phenomenology of the LHC signal In this Section we determine the values of the Yukawa couplings that yield a large cross section consistent with the current diphoton constraints [7,8] and respect upper bounds from the complementary channels obtained with the 8 TeV dataset. We also present a study of the running of the couplings of our model to determine its cutoff scale. We compute the diphoton cross section in Eq. (5) by evaluating the diphoton decay rate as in Eq. (1) , with f = E, T, and the di-gluon decay rate in Eq. (6) . The values of the coupling coefficients a_f used in both expressions are reported in Appendix A . To simplify the phenomenological analysis we set (8) y_L = y_T ≡ y_v, y_NR = y_NL, and then fix the masses of non-SM particles to the sample values given in Table 2 . 
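The narrow-width master formula of Eq. (5) is simple enough to sketch directly. The numbers below are placeholders, not the tabulated SM values of [14–16]; the sketch only illustrates why Γ_tot ∼ Γ_gg maximizes the signal:

```python
def sigma_diphoton(sigma_sm, gamma_gg, gamma_gg_sm, gamma_aa, gamma_tot):
    """Narrow-width diphoton cross section of Eq. (5):
    sigma(pp -> A -> gamma gamma) = sigma_SM * (Gamma_gg / Gamma_gg_SM) * (Gamma_aa / Gamma_tot)."""
    return sigma_sm * (gamma_gg / gamma_gg_sm) * (gamma_aa / gamma_tot)

# Placeholder widths in arbitrary units (NOT the values used in the paper):
s1 = sigma_diphoton(1.0, 1e-3, 1e-3, 1e-5, 1.1e-3)  # total width dominated by gg
s2 = sigma_diphoton(1.0, 1e-3, 1e-3, 1e-5, 5e-3)    # extra decay channels open
print(s1 > s2)  # True: a larger total width dilutes the branching to photons
```

This is the quantitative content of the remark that the cross section is maximized for Γ_tot ∼ Γ_gg: any additional partial width only depletes the diphoton branching ratio.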
The three pseudoscalar masses chosen correspond to energies in the diphoton invariant mass distribution showing a one to two σ excess over the SM prediction [7] . The chosen cross section together with the values of the Lagrangian parameters determined by this choice of masses in the three different scenarios are listed in Table 3 . The values of y_v and y_N are determined in each scenario, as explained in Section 5 , by matching the observed DM abundance while reproducing the reference diphoton cross section value given in Table 3 . We define the mixing angle θ, expressed in Appendix A in terms of the model free parameters, for the VL neutral leptons mass eigenstates by (9) N₁ = N cos θ + N′ sin θ, N₂ = N′ cos θ − N sin θ. The values given in Table 3 are such that the lighter neutral mass eigenstate is mostly made of a VL sterile neutrino in each of the three scenarios. For each data point we also check that the dijet, WW , and ZZ decay cross sections satisfy the experimental bounds from the 8 TeV LHC data. (Footnote 3: In evaluating the WW cross section we make the simplifying assumption that neutral and charged VL leptons be degenerate, which is approximately true in the case at hand, and neglect the contribution of the mostly sterile VL neutrino N₁. Furthermore, for all the diboson decay rates we assume the electroweak (EW) vector bosons to be massless, given that their mass corrections are of O(m_Z²/m_A²) ∼ 1%.) Table 4 shows the values obtained for these quantities and the corresponding upper bounds [28–30] . The prediction for the Zγ channel in each of the three scenarios is about two orders of magnitude smaller than the corresponding experimental upper constraint on the fiducial cross section [31] (Footnote 4: The fiducial cross section is defined as the total cross section multiplied by the signal acceptance for the given cuts.), by definition smaller than the total cross section. The model therefore satisfies the corresponding bound on the 
Zγ total cross section as well. We also calculated the prediction for the invisible decay cross section, respectively equal to 34, 2.6, and 0.66 fb in each of the three scenarios (m_A = 330, 720, and 1000 GeV), and noticed that the result at m_A = 720 GeV is about 300 times smaller than the experimental upper bound at 750 GeV quoted in [9] . This clearly represents a strong indication that the model's predictions for invisible decays in the three scenarios are naturally well within the experimental bounds, though of course only a direct comparison with the relevant experimental bounds can give a definitive answer about the viability of the model's predictions for this channel. Finally, in Fig. 2 we show the running of the quartic couplings for the values of the Lagrangian parameters considered in scenario I, which remains perturbative up to the Planck scale (10¹⁹ GeV). This value is well above the cutoff scales of O(TeV) obtained for SM-like VL fermions [9,10] . In determining the cutoff scale, we required the running couplings to be smaller than 4 π and the scalar potential to be bounded from below. The beta functions of the model are calculated with the help of the PyR@TE package [32,33] and are given in Appendix B . As customary, we neglect the Yukawa couplings for all SM fermions with the exception of the top quark. The beta functions of the SM (see for example [34] ) are used to calculate the renormalization group running from the top quark mass m_t up to 330 GeV. At m_t, we take the values of the SM gauge couplings to be g′ = 0.35940 , g = 0.64754 , g₃ = 1.1666, the top Yukawa y_t = 0.95096 and the Higgs self-coupling λ_H = 0.12879 [34] . In Table 3 we show the values of the cutoff scale for all three scenarios, as well as the corresponding values of the non-SM quartic couplings. The quartic self-coupling of the pseudoscalar and the portal coupling are chosen so as to prevent the Higgs and pseudoscalar self-couplings from running through zero. 
This is required by the fact that the beta functions of the scalar quartic couplings receive additional negative contributions due to the larger fermion content. Because the running of the gauge and Yukawa couplings is much slower, the model is valid up to a scale where either λ_AH or λ_A become non-perturbative. With these values of parameters, the model can stay perturbative up to the Planck scale in the case of relatively small Yukawa couplings (scenario I) or down to 10¹⁰ GeV, where λ_AH becomes non-perturbative, for the larger Yukawa couplings of scenario III. A simple variant of the model can be obtained by assuming flavor universality, which is imposed by assigning negative Z₂ parity to the third quark generation as well. In such a model the mass of the heavy VL quark, T , would be bound to be larger than 705 GeV [35] : while this value is only slightly above 700 GeV, the chosen T mass, and therefore it does not substantially change the results in Table 3 , it is relevant to ask how a larger m_T would affect those results. By using the same assumptions as in Eq. (8) , the Yukawa coupling y_v, and indirectly y_NR = y_NL as well, would need to be comparably larger to compensate for the larger m_T, given that the diphoton cross section scales as the inverse square of m_T. Stability would then require larger pseudoscalar quartic couplings at the m_A scale, which in turn would drive the model to the non-perturbative regime at a scale lower than the one given in Table 3 . 5 An authentic WIMP candidate We investigate now the compatibility of the measured DM abundance with the relic abundance of the VL neutrino N₁. DM provides 26% of the energy density of the present Universe. As mentioned in Section 1 , DM is usually modelled after a WIMP because particles having masses and annihilation cross sections set by the EW scale provide the measured value of DM abundance in a natural way [36,37] . 
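Before turning to the detailed computation, the freeze-out logic can be sketched numerically. The sketch assumes the standard scaling Ω h² ∝ (1/√g⋆)/⟨σv⟩ behind Eq. (10); the value of g⋆ below is an illustrative placeholder, not the paper's fit:

```python
import math

def omega_h2(sigv, g_star=86.25, sigv_fo=3e-27):
    """Freeze-out relic abundance, Omega h^2 ~ 0.1 sqrt(60/g_star) <sigma v>_fo / <sigma v>,
    with <sigma v> in cm^3/s and the standard freeze-out value sigv_fo = 3e-27 cm^3/s."""
    return 0.1 * math.sqrt(60.0 / g_star) * sigv_fo / sigv

# Annihilation cross-section needed to hit the Planck value Omega h^2 = 0.1188:
target = 0.1188
sigv_needed = 0.1 * math.sqrt(60.0 / 86.25) * 3e-27 / target
print(round(omega_h2(sigv_needed), 4))  # 0.1188 by construction
```

Because the abundance scales inversely with ⟨σv⟩, the resonant s-channel enhancement discussed below is exactly what lets a weakly coupled VL neutrino reach the required annihilation rate.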
Our model indeed presents a natural candidate for DM, the VL neutrino that, owing to its SM weak interactions emerges as an authentic WIMP. N 1 5 We calculate the relic abundance of 5 The heavier VL neutrino eventually decays to N 2 by emitting SM light-fermion pairs via virtual N 1 Z bosons. , arising from the standard freeze-out scenario, N 1 where (10) Ω N 1 h 2 ≃ 0.1 60 g ⋆ ( T fo ) 〈 σ v 〉 fo 〈 σ N 1 N ¯ 1 → S M v 〉 ( T fo ) , is the effective number of relativistic degrees of freedom at the freeze-out temperature g ⋆ ( T fo ) , T fo is the standard freeze-out cross-section and 〈 σ v 〉 fo = 3 × 10 − 27 cm 3 s − 1 is the velocity averaged annihilation cross-section of 〈 σ N N ¯ → S M v 〉 ( T fo ) to SM particles at the freeze-out temperature N 1 . For the latter, we assume the dominance of VL neutrino annihilation mediated by an T fo s -channel pseudoscalar A into gluons, enhanced by assuming a VL neutrino mass close to the threshold. Other annihilation channels, like the s -channel annihilation through Z into a pair of SM particles, are highly suppressed, and therefore negligible, given that the VL neutrino is at 99% made of a sterile neutrino gauge eigenstate. For a lightest VL neutrino with mass , whose value for each scenario is given in m N 1 Table 5 , we find the value of the physical coupling to A , , reproducing the DM relic abundance as measured by the Planck collaboration, y N 1 Ω DM h 2 = 0.1188 ± 0.0010 [38] . By using for each scenario the value of listed in y N 1 Table 5 in combination with Eq. (16) , we can express as a function of y N , with the remaining parameters given in y v Tables 2 and 3 , and determine subsequently by matching the observed diphoton cross section. In y v Fig. 3 we plot the value of producing the observed DM relic abundance for a range of lighter VL neutrino masses near the resonance and for y N 1 . 
y_v = 0.36, 0.4, 0.44. The relic abundance of N_1 is strongly constrained by the direct detection experiments, which put an upper bound on the DM elastic scattering off a nucleon. In the case at hand, the only parton that interacts with the pseudoscalar mediator is the gluon, and the only effective operator relevant for such a process can be written as (11) O_{g-N_1} = (1/Λ_F³) i N̄_1 γ₅ N_1 · (α_S/8π) G^a_{μν} G̃^{a μν}, with Λ_F being the effective scale of the interaction. The resulting cross section is both momentum-suppressed and spin-dependent, in which case the current experimental upper bound is still rather large [39] and therefore not yet sensitive enough to constrain the theory [40] . 6 Conclusions While the preliminary evidence for the 750 GeV excess at the LHC has not been confirmed by the latest results that both the ATLAS and CMS experiments presented at ICHEP 2016, the small background of the diphoton channel and its sensitivity to new physics via its leading-order loop coupling make it a privileged experimental test for new physics. In this spirit, we took the initial evidence in favor of a 750 GeV resonance as a template to study possible weakly-interacting new-physics contributions in conjunction with the appearance of high-mass diphoton resonances. The main result of this paper is to demonstrate that a diphoton excess can be enhanced by threshold effects due to new light particles that evade current direct-search constraints. By studying three different scenarios we show that a large excess in the diphoton spectrum can be explained within weakly coupled theories that retain perturbativity up to energy scales as large as the Planck scale. Our results rely on a prototypical model containing vector-like EW doublet and singlet leptons and a vector-like top quark, as summarized in Table 1 .
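The perturbativity claim can be checked at one loop with the gauge beta functions of Appendix B (Eqs. (18)–(20)), for which the running has a closed form. In the sketch below, the EW-scale inputs are approximate SM values assumed purely for illustration:

```python
import math

# One-loop gauge-coupling running: 1/g^2(mu) = 1/g0^2 - (2 b / 16 pi^2) ln(mu/mu0),
# with coefficients b read off Eqs. (18)-(20) of Appendix B.
# The EW-scale inputs G0 are approximate SM values, an assumption for illustration.
B = {"gp": 167.0 / 18.0, "g": -5.0 / 2.0, "g3": -19.0 / 3.0}
G0 = {"gp": 0.358, "g": 0.652, "g3": 1.22}   # at mu0 = m_Z
MZ, MPL = 91.19, 1.2e19                      # GeV

def run(g0, b, mu0, mu):
    inv = 1.0 / g0**2 - (2.0 * b / (16.0 * math.pi**2)) * math.log(mu / mu0)
    return math.sqrt(1.0 / inv) if inv > 0 else float("inf")  # inf marks a Landau pole

g_planck = {name: run(G0[name], B[name], MZ, MPL) for name in B}
# a coupling g is conventionally perturbative while g^2/(4 pi) < 1
perturbative = all(g < math.sqrt(4 * math.pi) for g in g_planck.values())
```

At one loop the hypercharge coupling grows but stays well below the perturbative bound at the Planck scale, while g and g₃ decrease; as the text explains, it is instead the scalar quartics, whose beta functions (22)–(24) receive large Yukawa contributions, that actually limit the validity scale.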
In addition to satisfying the LHC upper bounds on possible large diphoton excesses in the invariant mass spectrum and the bounds from complementary channels, our framework proposes a natural dark matter candidate. This is the lighter vector-like neutrino, whose relic abundance accounts for the entire measured dark matter abundance in the same part of the parameter space selected by the speculated LHC signals. The direct and indirect DM detection constraints are also satisfied in the same parameter ranges. Note added While the present paper was being finalized and several of the authors were attending the EW session of the 2016 Moriond conference, a paper using a threshold enhancement of the diphoton cross section similar to that studied in our paper appeared on the arXiv [41] . We remark that our model differs in particle content and interactions from that of [41] ; we furthermore show that, while satisfying the observed 8 TeV constraints on the diphoton and other diboson cross sections, our model can remain perturbative up to the Planck scale in the best case. Acknowledgements This work was supported by the Estonian Research Council grants PUTJD110 , PUT808 and PUT799 , the grant IUT23-6 of the Estonian Ministry of Education and Research , and by the EU through the ERDF CoE program grant TK133 . AH thanks the Horizon 2020 programme, as this project has received funding from the EU Horizon 2020 programme under the Marie Sklodowska-Curie grant agreement No. 661103 . Appendix A Vector-like fermion masses and couplings The masses of the VL neutral leptons N_{1,2}, defined by Eq. (9) , are (12) m_{N_{1,2}} = (1/2) √[ l² + L² + m² + M² ∓ 2√((l² + m²)(L² + M²)) ], with the mixing angle θ determined by (13) tan 2θ = (lM − Lm)/(lL + mM), where (14) M = m_L + m_E, m = m_E − m_L, L = (y_{N_R} + y_{N_L}) v_w/2, l = (y_{N_R} − y_{N_L}) v_w/2. Finally, the coupling coefficients of the mass eigenstates E and T appearing in Eqs.
(1) and (6) are simply (15) a_E = −y_L v_w/(2 m_L), a_T = −y_T v_w/(2 m_T), while the Yukawa couplings of the physical VL neutrinos N_{1,2} to A are (16) y_{N_{1,2}} = [ 2 y_N ( m² M² ∓ m M √((l² + m²)(L² + M²)) ) + l² M² (y_N + y_L) + L² m² (y_N − y_L) ] / [ 4 (l² + m²)(L² + M²) ] × N_{1,2}², with (17) N_{1,2} = [ ( 1 + | (lL − mM ∓ √((l² + m²)(L² + M²))) / (Lm + lM) |² ) ( 1 + | (lL + mM ± √((l² + m²)(L² + M²))) / (Lm − lM) |² ) ]^{−1/2}. Appendix B Beta functions The one-loop beta functions for our model, calculated with the PyR@TE package [32,33] , are given by: (18) 16π² β_{g'} = (167/18) g'³, (19) 16π² β_g = −(5/2) g³, (20) 16π² β_{g_3} = −(19/3) g_3³, (21) 16π² β_{y_t} = y_t [ (9/2) y_t² + 12 ( |y_{L_L}|² + 3 |y_{Q_L}|² + 3 |y_{Q_R}|² ) − (17/12) g'² − (9/4) g² − 8 g_3² ], (22) 16π² β_{λ_H} = (3/8)(g'⁴ + 2 g'² g² + 3 g⁴) − 6 y_t⁴ + 24 λ_H² + 2 λ_{AH}² − 2 |y_{N_L}|⁴ − 2 |y_{N_R}|⁴ + λ_H ( −3 g'² − 9 g² + 12 y_t² + 4 |y_{N_L}|² + 4 |y_{N_R}|² ), (23) 16π² β_{λ_A} = 2 [ 36 λ_A² + λ_{AH}² − 2 |y_L|⁴ − |y_N|⁴ − 3 |y_T|⁴ + 4 λ_A ( 2 |y_L|² + |y_N|² + 3 |y_T|² ) ], (24) 16π² β_{λ_{AH}} = 8 λ_{AH}² + λ_{AH} [ −(3/2)(3 g² + g'²) + 6 y_t² + 24 λ_A + 12 λ_H + 8 |y_L|² + 4 |y_N|² + 2 |y_{N_L}|² + 2 |y_{N_R}|² + 12 |y_T|² ] − 4 ( |y_L|² |y_{N_L}|² + |y_N|² |y_{N_L}|² + |y_L|² |y_{N_R}|² + |y_N|² |y_{N_R}|² + y_{N_L} y_{N_R} y_L^⁎ y_N^⁎ + y_L y_N y_{N_L}^⁎ y_{N_R}^⁎ ), (25) 16π² β_{y_{N_L}} = (1/4) y_{N_L} [ −3 (3 g² + g'² − 4 y_t²) + 2 ( |y_L|² + |y_N|² + 5 |y_{N_L}|² + 2 |y_{N_R}|² ) ] + 2 y_L y_N y_{N_R}^⁎, (26) 16π² β_{y_{N_R}} = (1/4) y_{N_R} [ −3 (3 g² + g'² − 4 y_t²) + 2 ( |y_L|² + |y_N|² + 2 |y_{N_L}|² + 5 |y_{N_R}|² ) ] + 2 y_L y_N y_{N_L}^⁎, (27) 16π² β_{y_L} = (1/2) y_L ( −9 g² − 3 g'² + 14 |y_L|² + 4 |y_N|² + |y_{N_L}|² + |y_{N_R}|² + 12 |y_T|² ) + 2 y_{N_L} y_{N_R} y_N^⁎, (28) 16π² β_{y_N} = y_N ( 4 |y_L|² + 5 |y_N|² + |y_{N_L}|² + |y_{N_R}|² + 6 |y_T|² ) + 4 y_{N_L} y_{N_R} y_L^⁎, (29) 16π² β_{y_T} = y_T ( −8 g_3² − (8/3) g'² + 4 |y_L|² + 2 |y_N|² + 9 |y_T|² ). | [
"STAUB",
"FRANCESCHINI",
"BERTUZZO",
"DJOUADI",
"GUNION",
"DITTMAIER",
"DITTMAIER",
"ANDERSEN",
"AAD",
"CHATRCHYAN",
"ABBIENDI",
"GE",
"CVETIC",
"BHUPALDEV",
"KING",
"KAROZAS",
"PESKIN",
"BAAK",
"AAD",
"AAD",
"AAD",
"LYONNET",
"LYONNET",
"BUTTAZZO",
"JUNGMAN",
"BERT... |
20226cf185bd45928210bdaee86c0b1a_Which factors affect the magnitude of fractional lumbosacral curve after posterior cobb to cobb fusi_10.1016_j.bas.2021.100145.xml | Which factors affect the magnitude of fractional lumbosacral curve after posterior cobb to cobb fusion for Lenke type 5 curves in AIS? | [
"Ates, Ahmet",
"Dincer, Recep",
"Coskun, Sina",
"Eltayep, Mustafa",
"Gur, Seray",
"Tasci, Ugur",
"Mutlu, Ayhan",
"Sanli, Tunay",
"Kahraman, Sinan",
"Karadereler, Selhan",
"Enercan, Meric",
"Hamzaoglu, Azmi"
] | null | Introduction: The aim of this study is to evaluate the clinical outcomes and the radiologic parameters affecting the magnitude of the fractional lumbosacral curve (LSC) and the spontaneous correction of the unfused thoracic curve (UTC) in Lenke type 5 adolescent idiopathic scoliosis (AIS) patients treated by posterior Cobb-to-Cobb fusion. Methods: 51 (47F, 4M) Lenke type 5 AIS patients treated with posterior Cobb-to-Cobb fusion using segmental pedicle screws and allograft were included. Preoperative and follow-up (f/up) coronal and sagittal parameters were analysed. Preoperative Ferguson x-rays were used to measure the sacral oblique angle (SOA). Clinical outcomes were evaluated with the SRS-22r. Spearman's correlation test was used for statistical analysis. Results: Average age was 15 (12-17) years and f/up was 7 (2-13) years. The average TL/L Cobb angle improved from 42.8° to 6.3°, an 85% correction rate. The spontaneous correction rate of the UTC was 57%. The average number of instrumented levels was 5.5 (4-7); the lower instrumented vertebra was L2 in 2 patients, L3 in 40 patients and L4 in 9 patients. SOA was > 5° in 32 patients (63%). Mean SOA was 8° (0-16). Lower instrumented vertebra tilt improved from 24.9° to 3.5° (86%). Postoperative fractional LSC was > 10° in 12 patients (24%) and disc wedging below the lower instrumented vertebra was > 5° in 21 patients (41%). There were significant correlations between fractional LSC magnitude, disc wedging below the lower instrumented vertebra and SOA (r=0.381, p=0.04; r=0.614, p<0.01, respectively). Total SRS-22r score improved from 3.7 to 4.3. Pseudarthrosis was found in 1 patient (1.9%) (loosening of 1 screw on the convex side). There was no infection, neurological deficit or UTC progression. Conclusion: Posterior Cobb-to-Cobb fusion provided significant correction of the TL/L curve, spontaneous correction of the thoracic curve and clinical improvement in Lenke type 5 curves.
According to this study, if the sacral oblique angle is > 5°, the likelihood of a postoperative fractional lumbosacral curve and of disc wedging below the lower instrumented vertebra is higher. For this reason the sacral oblique angle should be evaluated preoperatively with a Ferguson x-ray and taken into consideration in preoperative planning. Disclosures: author 1: none; author 2: no indication; author 3: none; author 4: none; author 5: grants/research support=; author 6: none; author 7: none; author 8: none; author 9: none; author 10: none; author 11: none; author 12: consultant=Medtronic | [] |
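The correlations in the abstract above were obtained with Spearman's rank test. A minimal stdlib sketch of that computation follows; the sample values in the last line are made up for illustration and are not the study's data:

```python
# Minimal Spearman rank-correlation sketch (stdlib only).
# Any sample values used below are hypothetical illustrations, not study data.

def _ranks(values):
    """1-based ranks; tied values share their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# hypothetical SOA vs. disc-wedging values (degrees), for illustration only
rho_example = spearman_rho([2, 4, 6, 9, 12, 16], [1, 3, 2, 6, 7, 9])
```

In practice a library routine (e.g. `scipy.stats.spearmanr`) also returns the p-value; the hand-rolled version above only computes the coefficient.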
0fa0d12f0fac4398b682bc82493bf1ed_Effect of ECAP die angle on the strain homogeneity microstructural evolution crystallographic textur_10.1016_j.jmrt.2022.01.088.xml | Effect of ECAP die angle on the strain homogeneity, microstructural evolution, crystallographic texture and mechanical properties of pure magnesium: numerical simulation and experimental approach | [
"Alateyah, A.I.",
"Ahmed, Mohamed M.Z.",
"Alawad, Majed O.",
"Elkatatny, Sally",
"Zedan, Yasser",
"Nassef, Ahmed",
"El-Garaihy, W.H."
] | Billets of pure Mg were processed using two ECAP dies with internal channel angles of 90° and 120° for 4 passes of route Bc at 225 °C. Finite element analysis was used to investigate the deformation behavior of the Mg billets. Electron back-scatter diffraction was utilized to analyze the microstructural evolution and the crystallographic texture of the ECAPed billets. Vickers microhardness and the tensile properties were studied. The finite element simulations showed that the 90°-die revealed a relatively more homogeneous distribution of the plastic strain compared with the 120°-die. From the EBSD analysis, the 1-pass condition of the 90°-die showed a bimodal structure that consisted of newly formed fine grains and heavily distorted large ones, whereas the 120°-die counterpart revealed fewer areas with fine-grained structure. Accumulating the plastic strain up to 4 passes in the 90°-die and 120°-die resulted in significant refinement to 0.88 μm and 1.89 μm, respectively, compared to the as-annealed counterpart of 6.34 μm. The texture after 1-Pass and 2-Passes showed weakening in its intensity, resembling the B fiber texture of ideal orientation {0 0 0 1} <uvtw>. Increasing the number of ECAP passes to 4 resulted in a significantly stronger texture, more than 26 times random, with intense {0001} poles. This was attributed to the grain refinement that occurred after 1-Pass and 2-Passes, which allowed the activation of more slip systems upon the 4-Passes. On the other hand, ECAP processing resulted in a significant increase in tensile strength, hardness, and ductility. | 1 Introduction Magnesium alloys (Mg alloys) are ultra-light alloys due to their strength-to-weight ratio; for instance, their density is two thirds that of aluminum and one quarter that of steel [ 1–3 ].
In addition, Mg alloys have unique properties, such as high specific strength, specific stiffness, and good recyclability [ 4–6 ], which make them desirable materials for different applications, such as the aerospace and automotive industries [ 7–9 ]. Furthermore, Mg alloys are considered a vital means of reducing vehicle carbon dioxide emissions, since their weight savings of about 10% reduce fuel consumption by 5 to 10% [ 10 ]. Nevertheless, the poor formability of Mg alloys is the major disadvantage that hinders their application in many industries, such as aerospace and shipbuilding. The poor formability originates from the hexagonal close-packed (hcp) crystal structure of Mg, which limits the number of available deformation modes [ 11 ]. In other words, excessive differences between the critical resolved shear stresses of different slip systems make the deformation of Mg very difficult [ 5 , 12–14 ]. Consequently, traditional methods such as extrusion and rolling at room temperature fail to form Mg and its alloys [ 15 ]. On the other hand, Mg alloys can be deformed at higher temperatures; however, dynamic recrystallization and recovery processes reduce the hardening effects of deformation [ 16–18 ]. In the last decade, numerous efforts have led to the development of various types of Mg alloys having a combination of ductility and strength [ 19 , 20 ], in addition to superior corrosion resistance [ 20 , 21 ]. Consequently, using plastic processing techniques to control texture is a desirable approach to enhance the formability of Mg alloys.
Severe plastic deformation (SPD) methods present promising techniques for processing Mg alloys at room temperature [ 22 , 23 ]; examples applied to a wide range of materials include high-pressure torsion [ 24–26 ], equal channel angular pressing (ECAP) [ 12 , 27–29 ], twist extrusion (TE) [ 30 ], multi-channel spiral twist extrusion (MCSTE) [ 31–33 ], accumulative roll bonding [ 34 ], and rolling in a three-high skew rolling mill [ 35 ]. Among the several SPD techniques, ECAP is not only efficient in producing nanostructured and UFG structures in alloys but is also applicable to industry [ 36 , 37 ]. In the last few years, several researchers have studied different characteristics of the ECAP process to investigate the impact of the process parameters on material behavior, due to the correlation between the mechanical and microstructural properties of the deformed materials and the degree of plastic deformation. Therefore, understanding the phenomena of strain development is crucial in ECAP process design. The theoretical equivalent strain (ε_eq) imposed by the die geometry is given in Eq. (1) [ 38 , 39 ]. It is clear from Eq. (1) that the major factors affecting the strain in the ECAPed sample are the number of passes (N), the internal channel angle (φ), and the outer corner angle (Ψ). The ECAP die angles φ and Ψ are shown in Fig. 1 . (1) ε_eq = (N/√3) [ 2 cot(φ/2 + Ψ/2) + Ψ cosec(φ/2 + Ψ/2) ] Over the last few years, the finite element method (FEM) has become the most reliable computer-aided analysis tool used for metal forming simulations. Numerous studies related to the ECAP processing parameters have been conducted using FEM, where most of the numerical analyses focused on the influence of the φ value on deformation patterns and homogeneity [ 40–42 ]. Seung and Seop Kim [ 40 ] studied the effect of changing the φ value in the range from 0 to 90.
They found out, using finite element simulations, that the round corner angle induced inhomogeneous deformation, with φ up to 9° for reproducing the sharp angle. Nagasekhar et al. [ 41 ] carried out a finite element simulation of ECAP for φ of 60°, 75° and 90° with Ψ of 10°. They showed that the decline in φ resulted in an increase in the punch pressure, due to the higher strain generated lengthwise on the outer corner of the sample. Furthermore, the ECAP route directly affects the strain; consequently, enormous changes in textures, microstructures, and mechanical properties occur [ 43 ]. Abhishek and Manojit [ 44 ] reported that after the third pass at the cross-sectional region, route BC revealed the most uniform distribution of severe strain, in contrast with route A, where the strain was concentrated at the top-left zone, and route C, where it was concentrated around the corner. It is well known that ECAP elongates and strengthens Mg alloys because of textural development and ultra-grain refinement. Furthermore, it was reported that different ECAP processing routes with multiple passes had the strongest influence on grain refinement and were the reason for the frequent change in direction and shear plane during the entire process [ 45 ]. In addition, it was found that the compressive mechanical properties of extruded pure Mg processed by ECAP, at room temperature with φ = 90° and route BC, declined after two passes because of the activation of non-basal slip systems and the newly formed texture. However, the mechanical properties increased after the fourth pass due to grain refinement [ 46 ]. Unfortunately, such studies on pure Mg are limited due to its poor deformability, and mostly focused on Mg alloys at φ ≤ 90°. Consequently, ECAP processing of pure Mg was carried out to investigate the fundamental mechanism of recrystallization, hardening effects, and the mechanisms that limit the grain size reduction.
In this study, the effect of ECAP die angle and number of processing passes on the deformation behavior and strain homogeneity of pure Mg was studied using FE simulation. Moreover, a comprehensive analysis of the influence of ECAP process on crystallographic texture and microstructural evolution was implemented, using scanning electron microscopy (SEM) equipped with EBSD technique. The tensile properties and Vicker's microhardness values were investigated and correlated with the microstructural evolution and FE simulation. 2 Materials and methods 2.1 Finite element analysis FE analysis was executed to investigate the stress distribution and the strain homogeneity along the sample's transverse section and longitudinal section (TS and LS) of pure Mg, during ECAP processing. The FE was carried out using Simufact-forming software version 13.3.1 by MSC software corporation, at the same condition of experimental work (processing at 225 °C via route Bc). Worm extrusion module was used to simulate the ECAP process. The model involved a die consisted of two halves, a plunger, and ECAPed billets, as shown in Fig. 2 . All the parts were invisible during the simulation, except the Mg samples to improve and clarify visualization. In addition, the modeled plunger and the 2-half die were discrete rigid elements, which were made of an imaginary non-formable material. Mesh type was hexahedral mesh with a varying number of nodes from 9500 to 15,000 elements, based on the degree of distortion and the mesh sensitivity analysis. The same dimensions of the ECAP die and sample were used in the experimental part with extra precaution, due to the deformation of the ECAPed billets. Bc route was modeled by the positioning option of the software, where the last step of each pass was considered as the first for the next pass. During ECAP simulation, the components became too skewed; consequently, the mesh system had to be revised. 
Re-meshing criteria were used based on a 1 mm element size and a 0.1 mm strain change. Both isotropic linear elastic and strain-hardenable rigid-plastic modes were used for modeling the ECAPed material. Tracked elements were located at the mid-plane of the specimen, on the edge where the maximum strain occurred and at the center, where ECAP has the lowest effect. The ram speed was equal to the experimental ram speed (0.05 mm/s). The coefficient of friction showed good results at m = 0.05–0.1 [ 42 , 48 ]. The Coulomb friction model was used with the die friction factor set at 0.05. The software's built-in tabulated flow-curve model was used for the Mg strain-hardening exponent and yield constant, which are both dependent on temperature and strain rate. 2.2 Experimental procedure In this study, commercial pure Mg (0.06% Al, 0.008% Ca, 0.005% Cr, 0.002% Cu, 0.015% Fe, 0.012% Mn, 0.005% Ni, 0.06% Zn, balance Mg) was utilized, received in the form of 20 mm diameter rolled billets of 500 mm length. The pure Mg billets were sectioned into 60 mm lengths, using a precision cutting machine, to form the ECAP samples. Before ECAP processing, the samples were annealed at 250 °C for 1 h, followed by furnace cooling. The pure Mg billets were processed via ECAP through 1, 2 and 4 passes of route Bc (1-P, 2Bc and 4Bc), rotating the billets by 90° about the longitudinal axis, in the same direction, after each processing pass. The ECAP process was implemented at 225 °C at a ram speed of 0.05 mm/s. A graphite-based lubricant was applied before each pass to decrease the friction between the die's inner walls and the ECAPed samples. A split ECAP die with an outer corner angle (Ψ) of 20° was used. The two ECAP dies were manufactured with internal channel angles (φ) of 90° and 120°, as shown in Fig. 1 . The equivalent strain of the ECAP dies was 1.054 and 0.634 per pass, respectively, based on Eq. (1) .
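As a quick check, Eq. (1) can be evaluated directly to reproduce the per-pass strains quoted for the two dies; a short sketch (the function name is ours):

```python
import math

def ecap_strain(n_passes, phi_deg, psi_deg):
    """Equivalent strain of Eq. (1); die angles phi and psi given in degrees."""
    phi, psi = math.radians(phi_deg), math.radians(psi_deg)
    half = phi / 2 + psi / 2
    # note: the additive psi term uses psi expressed in radians
    return (n_passes / math.sqrt(3)) * (2 / math.tan(half) + psi / math.sin(half))

# per-pass strain for the two dies used here (psi = 20 degrees)
e_90 = ecap_strain(1, 90, 20)    # about 1.054
e_120 = ecap_strain(1, 120, 20)  # about 0.634
```

The strain accumulates linearly with the number of passes, so the 4Bc conditions correspond to about 4.2 and 2.5 for the 90° and 120° dies, respectively.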
Microstructural evolution of the commercial pure magnesium billets was characterized using SEM. In addition, EBSD was used to study the structural evolution and crystallographic texture of Mg billets before and after ECAP processing. The materials used for microstructural characterization included the as-annealed sample (AA) and ECAP processed billets, which were cut along their longitudinal cross-section (LS) on the plane parallel to the pressing direction (flow plane) and perpendicular to the entry channel of the die, where the axes of the reference system coincide with the extrusion ECAP direction (ED). The specimen was cold mounted in a conductive epoxy. The grinding and polishing were accomplished using spinning wheels at a 150-rpm rotational speed. Grinding was performed with 600, 800,1000, and 1200 silicon carbide papers, where the specimen was rinsed with water between each grinding step. Polishing was performed with diamond suspensions of 3 and 1 μm particle size with yellow DP-lubricant, whereas final polishing step was performed with a 0.05 μm particle size colloidal silica suspension until a scratch-free surface was observed using an optical microscope. Distilled water was used as a lubricant. The specimen was ultrasonically cleaned in ethanol for 10 min and blow dried completely between each polishing step. After polishing, the specimen was etched with a hydrochloric–nitric acid solution (12 ml HCl + 8 ml HNO 3 + 100 ml ethanol) for a few seconds and rinsed with ethanol, immediately. In order to remove the etching stains or oxide layers from the surface, the specimen was ion milled with a 2 keV ion beam energy, 0.425 s −1 specimen rotational speed, and an 85° specimen tilt angle (sample surface at 5° relative to the ion beam axis) for 30 min, using a flat ion milling system. The EBSD measurements were performed on the top surface of the ED plane using a SU-70 SEM operating at 15 kV and at a typical current of 1.5 nA and a 70° tilting angle. 
The crystallographic data acquisition was performed with a 100-nm step size, using the HKL Channel5 Flamenco software. Once the crystallographic data were collected, an inverse pole figure (IPF) map was constructed, using the post processing HKL Channel5 software. Vicker's microhardness tests (HV) were conducted on the sample before and after ECAP processing, starting at the billets' peripheries and moving towards the center. The hardness test was carried out under an applied load of 0.5 kg for 15 s. Furthermore, the tensile properties of ECAPed samples were measured at room temperature using 100 kN universal testing machine at a strain rate of 10 −3 s −1 . The tensile specimens were selected to be from the center of the ECAPed samples. The dimensions of tensile samples were set regarding the E8M/ASTM standard. Three tensile specimens per processing condition were tested to ensure the accurate display of the results. 3 Results and discussion 3.1 Finite element analysis The distribution of equivalent stress along the sample's longitudinal section (LS) and transverse section (TS) at different ECAP dies with internal channel angles of (φ) = 90° and 120° of the ECAPed Mg billets, which are processed through 1-P, 2Bc and 4Bc at 225 °C, are presented in Figs. 3 and 4 , respectively. The TS was selected close to the top surface of Mg billets, where the simulated ECAP die and plunger were removed to enhance the visualization. Furthermore, the stress distributions, as a function of plunger stroke at the points laying at the central and peripheral regions of similar conditions, are presented in Fig. 5 . The strain distribution of the Mg billets processed through the 90°-die and 120°-die was illustrated in Figs. 6 and 7 , respectively. In addition, the strain distributions as a function of plunger stroke at the points laying at the central and peripheral regions of similar conditions are presented in Fig. 8 . It can be revealed from Figs. 
3 and 4 that the internal channel angle had a significant effect on the homogeneity of the stress distribution. The die angle φ = 90° ( Fig. 3 ) led to relatively more homogeneous stress distribution compared with the die angle φ = 120° ( Fig. 4 ), which caused an inhomogeneous deformation. This behavior can be attributed to the increase in compressive stress and decline in shear deformation components. That is to say, deformation heterogeneity developed in the sharp die (φ = 90°), unlike the round one (φ = 120°). On the other hand, it was clear from Figs. 3 and 4 that the central deformed regions exhibited lower stresses compared to the peripheral regions, which can be attributed to the direct friction between the external surface of the sample and the die walls. Additionally, the upper part of the sample revealed higher stress compared to the lowest part ( Figs. 3 and 4 ), which can be attributed to the contact between the Mg sample and the die's corner angle, agreeing with earlier findings in literature [ 49 ]. Similar behavior was noticed in the strain distribution along the LS and TS of billets after processing, through die angles of 90° ( Fig. 6 ) and 120° ( Fig. 7 ). It was clear from Figs. 3a and 5 a that the processing through 1-P using the 90°-die experienced a maximum stress of 17.18 MPa, which was recorded at the upper peripheral part of the billet. Increasing the number of processing passes up to 2Bc revealed a decay of areas which had been subjected to the maximum stress ( Fig. 3 b), compared to 1-P counterpart, which can be attributed to the strain softening. Processing through 3Bc revealed a notable increase of areas which had been subjected to the maximum stress compared to 2Bc counterpart, which indicated that strain hardening dominated the strain softening, as shown in Fig. 3 c. 
It is worth mentioning that increasing the processing passes up to 4Bc showed an increase in the maximum stress up to 20.3 MPa, which was recorded in the upper part of the ECAPed billet, as shown in Fig. 3 d. Similar trend was reached in Figs. 4 and 5 b for the Mg billets processed through 120°-die. Moreover, the bottom parts of the ECAPed samples have lower values of effective stresses which could be attributed to the formation of corner gaps between the die channel and the bottom of the sample during the ECAP processing, where the sample was no longer in contact with the die in this region [ 50 ]. Alternatively, increasing the friction among the sample and the die walls results in reducing the gap angle as mentioned in [ 51 ]. It is worth mentioning here that the middle region of the ECAPed sample showed a steady stress and strain distribution along the LS ( Figs. 3 and 4 ) compared to the lower and upper parts, which displayed inhomogeneous distributions. In addition, it was noticed that the stress values experienced a drastic increase when the ECAPed samples passed in the deformation zone until they reached a steady value, for all the processing conditions shown in Fig. 5 a and b, subsequently a notable decrease was observed after finalizing the deformation process. Regarding the first stage, the dramatic increase in stress could be attributed to the effect of friction between the Mg sample and the inner wall of ECAP die, where the Mg sample in the deformation zone was gradually pushed down under a steady stress along the inner wall of the channel. The imposed strain along the LS showed inhomogeneity for all processing conditions, as shown in Figs. 6 and 7 , similar to the stress distribution along the LS and TS of the Mg billets processed through the 90°-die and 120°-die. Furthermore, the effective strain was greatly increased with increasing the ECAP passes number, which is in a good agreement with Eq. (1) . 
Moreover, the effective strain distribution along the TS showed more uniformity compared to the LS counterpart. 1-P processing using the 90°-die resulted in lower values of equivalent strain at the central regions compared to the peripheral region, as shown in Fig. 6 a. The central regions experienced an equivalent strain of 0.6, whereas the peripheral ones recorded equivalent strain of 1.1, as shown in Fig. 8 a, which agrees with the strain value (1.05 per pass) calculated from Eq. (1) . In addition, the decrease in strain at the bottom compared to the top part of ECAPed samples could be attributed to the corner gap formation; where the lower part of the sample was not yet in-contact with the die; therefore, a lower degree of deformation has occurred at the lower part of the ECAPed sample [ 51 ]. Increasing the processing passes up to 4Bc showed an increase in the equivalent strain to 2.5 and 4.5 at the central and peripheral regions, respectively, as shown in Figs. 6d and 8 a. Similar trend was observed during processing using the 120°-die, where 1-P processing experienced an equivalent strain of 0.5 and 0.8 at the central and peripheral regions, respectively, as shown in Figs. 7a and 8 b. Increasing the processing passes resulted in increasing the imposed strain in both the central and peripheral regions, as shown in Figs. 7 and 8 b. 4Bc condition displayed an equivalent strain of 1.8 and 3.7, respectively. It is worth mentioning that the plastic strain distribution showed more homogeneity in the TS section compared to the LS for all the processing conditions, as shown in Figs. 6 and 7 . Accordingly, the inconsistency in the values of plastic strain, recorded along both the TS and LS of the Mg ECAPed billets, will significantly affect the homogeneity of the microstructural features and mechanical properties throughout the Mg billets. 
3.2 Microstructural evolution The grain structure and crystallographic texture of the AA pure Mg and the ECAP processed samples were investigated using EBSD. Fig. 9a shows the inverse pole figure (IPF) coloring map relative to the ND for the AA billets and its corresponding band contrast (BC) map ( Fig. 9 b), with the high angle boundaries (HABs) of misorientation angle >15° in black lines and the low angle boundaries (LABs) of misorientation between 3° and 15° in white lines. The microstructure was dominated by equiaxed coarse grains, with a few extra-large grains and some limited areas of extremely fine grains. Moreover, substructures could be seen, especially inside the extra-large grains. The grain size ranged from 1.1 μm to 34 μm with an average grain size of 6.34 μm. Table 1 shows the grain size data of the AA and ECAP processed Mg billets, using the two different die angles. Fig. 10 shows the IPF maps relative to the ND and their corresponding BC maps, with HABs >15° in black lines and 3° < LABs < 15° in white lines superimposed, for the ECAP processed pure Mg using the 90° die for different passes: (a) 1-P, (b) 2Bc and (c) 4Bc. Moreover, Fig. 11 shows the grain size distribution histograms for the same conditions, and Fig. 12 shows the misorientation angle distribution histograms at the same conditions. Fig. 15 shows the misorientation angle histogram for the AA pure Mg and the ECAP processed samples. After ECAP processing of pure Mg for 1-P, it can be observed that the microstructure is of a bimodal nature that consists of newly formed fine grains and heavily distorted large ones. The amount of strain imposed after 1-P resulted in the occurrence of dynamic recrystallization in local areas, which are most likely the areas of the BM where a high density of HABs was observed, and which were subdivided into extremely fine grains. It has been found that the amount of strain experienced after 1-P was 1.05, as determined from Eq.
(1) ; due to the heterogeneity of the BM microstructure, the different microstructural features were affected differently by the imposed strain. Thus, this amount of strain was enough to activate the dynamic recrystallization process in some areas and form fine grains, while it was not enough to do so in other areas. This scenario is supported by the obtained bimodal microstructure after 1-P, where the grain size ranged from 0.64 μm to 25.45 μm with an average of 1.96 μm. This reduction in the average grain size, from 6.3 μm for the BM to 1.96 μm after 1-P, is associated with an increase in the fraction of low angle boundaries (LABs <15°), as can be seen from the misorientation angle distribution ( Fig. 12 a and b). In addition, a more random distribution of the high angle boundaries from 15° up to 80° could be observed, and a reduction in the twin boundaries can be noted at a misorientation angle of around 90°. Similar microstructural findings were reported in the literature. Lei and Zhang [ 46 ] reported that the microstructure of pure Mg is significantly inhomogeneous after 1-P ECAP processing at room temperature, where some grains were coarser than others, with an approximate size of more than 50 μm and an average grain size of 4.15 μm. The area of grain refining increased after ECAP processing for 2Bc ( Fig. 10 b), and the areas of coarse grains were reduced. This is mainly attributed to the increase in the imposed strain after 2 passes, where the grain size ranges from 0.414 μm to 9.3 μm with an average of 1.43 μm. Lei and Zhang [ 46 ] also reported that after 2 passes the microstructure became more homogeneous and the average grain size was reduced to 2.33 μm. On the other hand, Wang et al. [ 52 ] investigated the ECAP processing of as-cast and as-homogenized Mg–Al–Ca–Mn alloys with different Mg 2 Ca morphologies for up to 12 passes. 
They obtained a mixed grain structure that consisted of coarse and fine grains after 4 passes; by increasing the number of passes the grain structure became more homogenous, especially after 12 passes. Further significant grain refining occurred after ECAP processing for 4Bc ( Fig. 10 c), with the grain size ranging from 0.21 μm to 7.14 μm and an average of 0.88 μm. The imposed strain after 4Bc was around 4.5, as obtained from the FE, which is certainly enough to activate the dynamic recrystallization mechanism upon ECAP processing. This grain refining occurred through progressive lattice rotations at grain boundaries as a result of local shearing near grain boundaries, due to the lack of the 5 independent slip systems required for homogeneous plasticity [ 53 ]. Lei and Zhang [ 46 ] reported similar behavior after 4 processing passes of pure Mg, with an average grain size of 1.75 μm. Xu et al. [ 54 ] investigated the ECAP processing of AZ91 Mg alloy for 4 and 12 passes. They reported that 4 ECAP passes were enough to achieve a homogenous microstructure with an average grain size of 7 μm, characterized by uniform and fine equiaxed grains. In terms of misorientation angle distribution, 2-P gave an outcome similar to that after 1-P: the distribution shifted towards misorientation angles below 40° and the twin boundary fraction was significantly reduced ( Fig. 12 c–d). To investigate the effect of altering the ECAP die angle on the resulting microstructure and texture of Mg, pure samples were ECAP processed using the 120° and 90° angle dies for the same sequence of passes. Fig. 13 shows the IPF coloring maps relative to the ND and their corresponding band contrast maps, with high angle boundaries >15° in black lines and low angle boundaries of 3–15° in white lines superimposed, for the Mg ECAP processed at 225 °C using the 120° angle die for (a) 1-P, (b) 2Bc, and (c) 4Bc. 
Similarly, Fig. 14 shows the grain size distribution histograms for the same conditions, and Fig. 15 shows the misorientation angle distribution histogram for the AA pure Mg and the ECAP processed samples. The microstructure after 1-P ( Fig. 13 a) shows fewer areas of fine-grained structure compared to that obtained using the 90° angle die ( Fig. 10 a), where the majority of the areas retained the coarse-grained structure of the BM. This can be attributed to the lower strain experienced during the first pass using the 120° angle die relative to the 90° angle die, as can be seen from the FE results presented in Fig. 8 . After 1-P, the grain size ranges from 1.1 μm to 24.8 μm with an average of 2.62 μm. By increasing the number of passes, the areas of fine-grained structure increased significantly, to the extent that after 4-Bc the microstructure consisted almost entirely of fine recrystallized equiaxed grains. After 2-Bc, the grain size ranges from 0.87 μm to 13 μm with an average of 2.14 μm ( Fig. 13 b), and after 4-Bc it ranges from 0.81 μm to 6.5 μm with an average of 1.89 μm ( Fig. 13 c). Generally, a significant reduction in grain size is observed due to the ECAP processing of pure Mg for different passes. Continuous dynamic recovery and recrystallization (CDRR) are reported to occur upon ECAP processing of Mg [ 55 ]. Gautam and Biswas [ 55 ] investigated ECAP processed pure Mg for up to 8 passes and at different temperatures. They reported that CDRR predominantly occurred, resulting in the formation of fine grains with a necklace structure for all the investigated conditions. They also reported that reducing the ECAP processing temperature from 200 °C to 27 °C at the 8th pass resulted in an average grain size reduction from 2.6 μm to 0.75 μm. 
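The grain-size reductions described above can be expressed as refinement percentages relative to the as-annealed average of 6.34 μm; a minimal check of this arithmetic (which also reproduces the 86% and 70% figures given later in the conclusions):

```python
def refinement_pct(d_initial_um, d_final_um):
    """Percentage grain refinement relative to the initial average grain size."""
    return (d_initial_um - d_final_um) / d_initial_um * 100

d_aa = 6.34  # as-annealed average grain size, in micrometres

# 4Bc averages: 0.88 um (90°-die) and 1.89 um (120°-die)
print(round(refinement_pct(d_aa, 0.88)))  # 86 (% refinement, 90°-die)
print(round(refinement_pct(d_aa, 1.89)))  # 70 (% refinement, 120°-die)
```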
Lei and Zhang [ 46 ] also investigated the microstructural evolution of ECAP-processed pure Mg and reported that with increasing ECAP passes, grains were further refined and the microstructure became more homogeneous due to the accumulated strains. In terms of misorientation angle distribution after 1-P, a significant increase in the fraction of low angle boundaries and a slight reduction in the twin boundaries are observed, which indicates the moderate strain imposed during the first pass. After further ECAP processing for 2-P and 4-P, the low angle boundary fraction increases and the twin boundary fraction decreases, which indicates an increase in the operation of slip systems with reducing grain size ( Fig. 15 b–d). 3.3 Crystallographic texture Fig. 16 shows the (0001), (11–20), and (10-10) pole figures of the AA condition (a), and the ECAP processed pure Mg, using the 90° die angle, for 1-P (b), 2Bc (c), and 4-Bc (d). The AA texture ( Fig. 16 a) is a strong {0001} <uvtw> fiber texture with the intense {0001} poles aligned at an angle relative to the extrusion direction (ED). The {11–20} and {10-10} poles can also be observed in their corresponding pole figures. The crystallographic texture evolved during ECAP processing was mainly the simple shear texture with the shear plane aligned at 45° relative to the extrusion direction. The ideal simple shear texture in hcp metals depends mainly on the active slip systems upon plastic deformation. The number of slip systems in hcp crystal structures is small compared to fcc and bcc, mainly limited to basal {0001}<11–20>, prismatic {10-10}<11–20>, or pyramidal {10–11}<11–20> slip [ 56 , 57 ]. Table 2 shows the ideal crystallographic orientations for the simple shear deformation texture and their Euler angles at φ2 = 0° and φ2 = 45°. In addition, Fig. 
17 shows the schematic representations of the ideal simple shear texture positions in the 0001, 11–20, and 10-10 pole figures. After one pass of ECAP processing, it can be observed from Fig. 16 b that the texture resembles the B fiber texture of ideal orientation {0001} <uvtw>, rotated about 45° around the normal direction so as to be aligned with the shear plane normal (SPN), as indicated on the 0001 pole figure. This signifies basal slip activation during the first ECAP pass. In addition, the texture intensity decreased from a maximum of 23 times random for the BM to a maximum of about 9 times random after 1-P ( Fig. 16 b). This texture weakening can be attributed to the limited active slip systems upon shear deformation during the first pass [ 54 ]. The texture after 2-Bc ECAP resembled the typical B fiber texture with its components almost at their ideal positions, as seen in Fig. 16 c. This could be attributed to the change in the position of the shear plane in the second pass, which resulted in the alignment of the SPN with the TD and the SD with the ED. The texture intensity is almost similar to that after 1-P, at about 10 times random. Increasing the number of ECAP passes to 4Bc ( Fig. 16 d) resulted in a significantly strong texture of more than 26 times random, with the intense {0001} poles aligned midway between the TD and ED. This strong fiber texture observed after 4-Bc could be attributed to the grain refining that occurred after 1-P and 2-Bc, allowing the activation of more slip systems upon the 4-Bc ECAP [ 53 ]. Gautam and Biswas [ 58 ] investigated the crystallographic texture of ECAP processed pure Mg for up to 8 passes at different temperatures, using route A with a 90° die angle. 
They reported that after 1-P the strong basal poles rotated by 130° around the TD axis to reach the ideal B fiber position; subsequently, increasing the temperature and the number of passes up to 8 produced no significant changes in the texture except for a slight variation in its intensity [ 58 ]. They ascribed this to the dominance of the slip deformation mechanism. Fig. 18 shows the (0001), (11–20), and (10-10) pole figures of the ECAP processed pure Mg using the 120° die angle for 1-P (a), 2Bc (b), and 4Bc (c). By comparing the pole figures in Fig. 18 a for 1-P using the 120° die angle with those for the 90° die angle, it can be observed that the 120° die angle resulted in a very strong texture after 1-P, at about 23 times random, while the texture components were severely rotated due to the rotation of the SPN and SD relative to the die angle. Thus, the intense 0001 poles in the 0001 pole figure appeared at an angle relative to the ED. The texture is most likely a B fiber, with its components severely rotated from their ideal positions due to the 120° die angle. After 2-Bc the texture became weaker, with some splitting of its components, which can be clearly seen in the 0001 pole figure ( Fig. 18 b). This can be attributed to the activation of more slip systems upon the second ECAP pass. On the other hand, ECAP processing for 4-Bc resulted in the alignment of the intense 0001 poles with the ND, perpendicular to the ED ( Fig. 18 c). This texture resembled the P1 fiber texture {-1100} <11–20>, where all the texture components existed in the three pole figures with some spread around their ideal positions. This could be attributed to the activation of the prismatic {10-10} <11–20> and pyramidal {10–11} <11–20> slip systems upon the fourth pass. 3.4 Mechanical properties The Vickers microhardness test was used to evaluate the homogeneity of the hardness distribution across the peripheral and central regions of the ECAPed billets. 
The hardness variations of the Mg billets, processed through ECAP dies with internal channel angles of φ = 90° and 120°, as a function of the number of passes via route Bc are illustrated in Fig. 19 . The average hardness of the as-annealed Mg was 26 HV. Fig. 19 clearly shows that the hardness of the AA Mg increased significantly as the number of ECAP passes increased, in both the central and peripheral areas. In addition, the 90°-die produced higher HV values than the 120°-die in both the central and peripheral regions. Processing through 1-P using the 90°-die increased the HV significantly, by 50% and 84% at the central and peripheral regions, respectively, compared to the AA counterpart. The higher peripheral values could be attributed to the direct friction between the sample and the die walls, which resulted in greater grain refining and more strain hardening, and consequently a higher hardness compared to the central regions, in good agreement with the FE results presented in Fig. 8 . On the other hand, the 1-P condition of the 90°-die revealed an increase in the HV of 8% and 14% at the central and peripheral regions, respectively, compared to the same condition of the 120°-die, as shown in Fig. 19 . The increase in HV values with decreasing internal die angle can be attributed to the increase in plastic strain during ECAP processing, which also agrees with the FE findings. Moreover, processing through 2Bc using the 90°-die increased the HV at the central and peripheral regions by 12.8% and 6.25%, respectively, compared to the 1-P counterpart. A similar trend was noted for the 2Bc condition of the 120°-die. Accumulating the plastic strain up to 4Bc using the 90°-die resulted in a significant increase in the HV at the central and peripheral areas of 77% and 111%, respectively, compared to the AA counterpart. Fig. 
19 shows that the 4Bc condition of the 90°-die displayed 8.7% and 7.8% higher HV values at the central and peripheral regions, respectively, compared to the 120°-die counterparts. This can be attributed to the grain refinement which occurred, where the average grain size at 4Bc was 0.88 μm for the 90°-die and 1.89 μm for the 120°-die, as shown in Table 1 . In addition, the strain hardening during ECAP processing contributed to the increase in HV values with increasing number of passes. Tensile tests were carried out for Mg billets before and after ECAP processing through 1-P, 2Bc, and 4Bc at 225 °C. The stress–strain curves of the ECAPed samples are plotted in Fig. 20 . Fig. 20 a and b shows that the processing of Mg billets using both the 90°-die and the 120°-die had an insignificant effect on the yield strength, which contradicts the classic Hall-Petch relationship, which states that decreasing the grain size increases the yield strength. As seen in Figs. 9, 10 and 13 , increasing the number of ECAP passes led to a significant decrease in the grain size. Accordingly, since the yield strength did not rise with the grain size reduction, other factors must be affecting it, such as the crystallographic texture, which plays a principal role in strengthening Mg billets due to the strong texture anisotropy (as shown in Figs. 16 and 18 ) of Mg slip systems with the hcp crystal structure. In addition, the activation of non-basal slip systems may also affect the properties of Mg billets [ 46 ]. Similar findings were reported earlier in [ 59 ]. Fig. 20 a (φ = 90°) clearly indicates that 1-P recorded a considerable improvement in both ultimate tensile strength (UTS) and elongation (EL), by 25% and 85%, respectively. Increasing the strain up to 2Bc resulted in an insignificant increase in the UTS of 7%; however, the EL decreased by ∼3% compared to 1-P. 
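The hardness percentages reported above are all relative to the 26 HV as-annealed baseline; converting them to approximate absolute values makes the trends easier to compare. A minimal sketch of that arithmetic:

```python
hv_aa = 26.0  # as-annealed average hardness, HV

def hv_after(pct_increase):
    """Hardness implied by a quoted percentage increase over the AA baseline."""
    return hv_aa * (1 + pct_increase / 100)

# 90°-die, 1-P: +50% central, +84% peripheral
print(round(hv_after(50), 1), round(hv_after(84), 1))   # 39.0 47.8
# 90°-die, 4Bc: +77% central, +111% peripheral
print(round(hv_after(77), 1), round(hv_after(111), 1))  # 46.0 54.9
```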
Further increasing the number of ECAP passes up to 4Bc revealed an additional increase in the UTS of 10% and an additional decrease in EL of 20%, compared to the 2Bc counterpart. On the other hand, 1-P at φ = 120° showed a considerable increase in UTS and EL of 14.5% and 65%, respectively. Further increasing the number of ECAP passes up to 2Bc revealed an additional increase in the UTS of 5%, with a modest increase of 12% in EL, compared to the 1-P counterpart. The 4Bc condition revealed further improvements in both UTS and EL of 10% and 5%, respectively. A summary of the ultimate strength results and the grain size as a function of the number of passes is illustrated in Fig. 21 . The tensile results were consistent with the relative increase in HABs and the grain refinement ( Figs. 12 and 15 ) with increasing ECAP deformation passes. The increase in UTS can be attributed to the notable increase in texture intensity with the increase in the number of ECAP passes, which resulted from the strain accumulation. Moreover, the increase in HABs during ECAP processing has a clear effect in producing ultra-fine grained materials. This can be described in terms of Hughes' theoretical model [ 60 ]: the strain results in the movement of dislocations, and as the strain rises, the dislocations are permanently absorbed by the LABs, which gradually transforms the LABs into stable HABs; therefore, the grains are refined through the formation of HABs. The higher strength could thus be attributed to the finer grain size. In addition, ECAP processing results in a high dislocation density that hinders dislocation mobility [ 61 ], consequently enhancing the tensile strength and hardness of the ECAPed Mg billets. Moreover, the considerable grain refinement (as shown in Fig. 21 ) provides a grain boundary strengthening mechanism and therefore improves the mechanical properties of the Mg billets, in good agreement with [ 62 ]. 
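The grain boundary strengthening mechanism invoked above is classically quantified by the Hall-Petch relation. As a contrast to the flat yield strength reported for these billets, the sketch below shows how much strengthening Hall-Petch alone would predict from the measured grain refinement; the constants sigma_0 and k are hypothetical placeholder values for pure Mg (assumptions, not values from this work):

```python
import math

def hall_petch(d_um, sigma0_mpa=11.0, k_mpa_um05=220.0):
    """Hall-Petch estimate: sigma_y = sigma_0 + k * d^(-1/2), d in micrometres.

    sigma0_mpa and k_mpa_um05 are illustrative assumed constants.
    """
    return sigma0_mpa + k_mpa_um05 / math.sqrt(d_um)

# Average grain sizes for the 90°-die: AA, 1-P, 4Bc (from the EBSD results)
for d in (6.34, 1.96, 0.88):
    print(f"d = {d} um -> Hall-Petch sigma_y ~ {hall_petch(d):.0f} MPa")
```

The predicted rise from ~6.34 μm to ~0.88 μm would be large, which is why the paper attributes the observed insensitivity of the yield strength to the competing effect of texture evolution and non-basal slip activation.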
Therefore, using φ = 90° resulted in a finer grain size of the ECAPed Mg billets, which led to a higher UTS compared to the 120° counterparts. The reduction in ductility obtained at φ = 90° as the number of passes increased could be assigned to the smaller grain size: a smaller grain size increases the grain boundary area per unit volume, so material strengthening increases, leading to a drop in ductility. Similar findings were reported in a previous work [ 59 ]. On the other hand, the 120°-die revealed higher ductility compared to the 90°-die, which could be attributed to the lower strain experienced when using the 120° angle die relative to the 90° die, in good agreement with the FE findings presented in Fig. 8 . Accordingly, the 120°-die showed the presence of bimodal and dynamically recrystallized (DRX) grains in the bulk material, which contributed towards improving ductility [ 63 ]. In addition, the grain size at 4Bc ranged from 0.21 μm up to 7.14 μm using the 90°-die, and from 0.81 μm to 20.15 μm using the 120°-die, as shown in Table 1 . It was reported that fine grains of less than 5 μm maintain high strength, whereas larger grains of around 20 μm provide strain hardening to support deformation to large strains [ 64 , 65 ]. Therefore, the special textures, which favor grain sizes in the range of 10–20 μm and basal slip, improved the ductility of the ECAPed Mg billets, which agrees with the findings in [ 59 ]. In addition, the improvement in ductility of the ECAPed billets compared to the AA condition could be attributed to the ease of movement of dislocations due to the bimodal grain structure [ 63 ]. 4 Conclusions Billets of commercially pure Mg were subjected to ECAP processing to investigate the effect of the ECAP die angle on the microstructural evolution, crystallographic texture, mechanical properties, and plastic strain homogeneity. 
The processing was done using two dies with different channel angles of 90° and 120°, via a maximum of 4 passes of route Bc at 225 °C. In addition, FE simulations were used to study the deformation behavior of the Mg billets during processing. The following conclusions were drawn: 1. FE showed that the 90°-die had a relatively more homogenous distribution of the plastic strain compared to the 120°-die. 2. EBSD analysis revealed that the accumulation of the plastic strain up to 4Bc in the 90°-die and 120°-die resulted in significant grain refinement of 86% and 70%, respectively, compared to the as-annealed counterpart. 3. 4Bc in the 90°-die resulted in a significantly strong texture of more than 26 times random, with the intense {0001} poles aligned midway between the TD and ED, whereas the 4Bc condition using the 120°-die revealed a weaker texture of 13 times random. 4. 1-P processing using the 120°-die experienced a very strong texture of about 23 times random, where the texture components were severely rotated due to the rotation of the SPN and SD relative to the die angle. 5. Accumulating the plastic strain up to 4Bc using the 90°-die resulted in a significant increase in the HV at the central and peripheral areas of 77% and 111%, respectively, compared to the AA counterpart. 6. The 4Bc condition of the 90°-die displayed 8.7% and 7.8% higher HV values at the central and peripheral regions, respectively, compared to the 120°-die counterpart. 7. ECAP processing through 4Bc revealed a significant increase in the UTS of Mg billets processed in the 90°-die and 120°-die, by 42% and 39%, respectively, coupled with a significant improvement in ductility compared to the AA counterpart. 8. ECAP processing revealed an insignificant change in the yield strength of the Mg billets. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. 
Acknowledgment The researchers would like to thank the Deanship of Scientific Research, Qassim University, for funding the publication of this project. Appendix A Supplementary data The following is the Supplementary data to this article: Multimedia component 1. Supplementary data to this article can be found online at https://doi.org/10.1016/j.jmrt.2022.01.088 . 
"WITTE",
"AMANI",
"MOSTAED",
"MORDIKE",
"ING",
"SHAPIRO",
"KUWAHARA",
"KOJIMA",
"KIM",
"AGNEW",
"ABDELAZIEM",
"AGNEW",
"CHULIST",
"SULKOWSKI",
"HADADZADEH",
"RAO",
"ALSAMMAN",
"WANG",
"XU",
"ALATEYAH",
"WANG",
"CHEN",
"ELGARAIHY",
"ELGARAIHY",
"SALEM",
"ALATEYAH",
... |
03dbcc699e904f46b3de5b2180ae0fda_Dental prostheses mimic the natural enamel behavior under functional loading A review article_10.1016_j.jdsr.2015.07.001.xml | Dental prostheses mimic the natural enamel behavior under functional loading: A review article | [
"Madfa, Ahmed A.",
"Yue, Xiao-Guang"
] | Alumina- and zirconia-based ceramic dental restorations are designed to repair functionality as well as esthetics of the failed teeth. However, these materials exhibited several performance deficiencies such as fracture, poor esthetic properties of ceramic cores (particularly zirconia cores), and difficulty in accomplishing a strong ceramic–resin-based cement bond. Therefore, improving the mechanical properties of these ceramic materials is of great interest in a wide range of disciplines. Consequently, spatial gradients in surface composition and structure can improve the mechanical integrity of ceramic dental restorations. Thus, this article reviews the current status of the functionally graded dental prostheses inspired by the dentino-enamel junction (DEJ) structures and the linear gradation in Young's modulus of the DEJ, as a new material design approach, to improve the performance compared to traditional dental prostheses. This is a remarkable example of nature's ability to engineer functionally graded dental prostheses. The current article opens a new avenue for recent researches aimed at the further development of new ceramic dental restorations for improving their clinical durability. | 1 Introduction Teeth play a critically important role in our lives. Loss of function diminishes our capability to eat a stable diet, which has undesirable consequences for general health. Loss of esthetics can negatively influence social function. Both function and esthetics can be reconstructed with dental prostheses. Material selection for dental prostheses has turned out to be a sizable field for researchers. Ceramics are frequently used in load-bearing biomedical applications due to their excellent biocompatibility, wear resistance and esthetics [1–3] . Ceramics are utilized as total hip and knee replacements [4–8] and adopted for dental restorations [9–11] . Ceramic dental restorations are designed to repair functionality as well as esthetics of the failed teeth. 
However, these materials showed somewhat poor flexural strength, particularly when exposed to fatigue loading in wet environments [1–3] . Consequently, flexural fracture can cause extensive discomfort to patients and reduce the durability of ceramic prostheses [12–15] . The failures of dental restorative systems are due to incorrect selection of materials, incorrect design of the component, incorrect processing of materials, and the presence of defects (e.g. cracks and pores) in the prostheses [16–19] . Additionally, in metal–ceramic restorations there are mismatches in the mechanical properties between the veneering porcelain and the metal core. The Young's modulus of the veneering porcelain is 60–80 GPa, while that of the metal core is in the range of 80–230 GPa [20] . Furthermore, there are mismatches in the thermal properties between the veneering porcelain and the metal core, where the coefficient of thermal expansion of the metal core is usually higher than that of the veneering porcelain. The significant mismatch between the two materials' properties concentrates stresses at the interfaces, which may cause cracks at the metal–ceramic interface and consequently lead to the failure of the restoration [21,22] . Lastly, the metal core is more susceptible to corrosion, whose effects range from degradation of appearance to loss of mechanical strength [23,24] . The corrosion products can produce a bluish-gray pigmentation of the gingiva and oral mucosa. Furthermore, these products, particularly in immunologically susceptible individuals, can cause local and systemic hypersensitivities [25–28] . Despite continuous improvements in dental prostheses, such as using a strong zirconia or alumina core to support the esthetic porcelain veneer, ceramic prostheses are still vulnerable to failure at a rate of approximately 1–3% each year [9] . 
Additionally, ceramic prostheses have a dense, high purity crystalline structure at the cementation surface that cannot be adhesively bonded to the tooth dentin support [29,30] . Even though some authors recommended particle abrasion as a surface roughening treatment to enhance the ceramic–resin-based-cement bond through mechanical retention, particle abrasion also introduces surface flaws or microcracks that can deteriorate the long-term flexural strength of ceramic prostheses [31–37] . Further, zirconia cores have a white opaque appearance, which requires a thick porcelain veneer with a gradual change in translucency to mask the zirconia and achieve a better esthetic outcome [38] . Dental crowns generate over $2 billion in revenues each year, with 20% of crowns being all-ceramic units, and aging populations will drive the demand for all types of dental restorations even higher [39] . Moreover, occlusal contact induces the deformation and cracking of dental crowns, which can lead to the failure of the structure [40] . Therefore, developing ceramic prostheses that are more resistant to cracking under occlusal contact has been highly desirable in the recent decade [17,18] . Composite ceramics have been designed in an effort to improve strength and toughness while expanding functionality. Simple laminate materials have been developed for many years, in which a number of materials with different properties are bonded into a layered structure [41] . Though these composites do combine varying properties, the abrupt interfaces between the two materials often harbor residual stresses [42,43] and may delaminate under load [44] . Recently, bioinspired functionally graded enamel structures have been proposed in the design of dental multi-layers, as an alternative technique aimed at enhancing the overall performance of metal–ceramic and all-ceramic dental restorative systems. 
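The linear modulus gradation that this review attributes to the DEJ can be sketched as a simple depth profile across a graded crown layer. The endpoint moduli and layer thickness below are illustrative assumptions (roughly dentin-like and enamel-like stiffness values), not figures from this review:

```python
# Illustrative linear gradation in Young's modulus across a graded layer,
# inspired by the DEJ. e_inner ~ dentin-like, e_outer ~ enamel-like (assumed).
def graded_modulus(z, thickness, e_inner=18.0, e_outer=84.0):
    """Young's modulus (GPa) at depth z (z = 0 at the inner/cementation surface)."""
    return e_inner + (e_outer - e_inner) * (z / thickness)

t = 1.5  # assumed graded-layer thickness, mm
for z in (0.0, 0.75, 1.5):
    print(f"z = {z} mm -> E ~ {graded_modulus(z, t):.1f} GPa")
```

A profile like this replaces the abrupt core-veneer interface with a continuous transition, which is the mechanism by which graded designs reduce interfacial stress concentrations.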
This technique allows the production of very different characteristics within the same material at various interfaces. The bioinspired functionally graded approach is an innovative material technology, which has rapidly progressed in terms of both materials processing and computational modeling in recent years. A bioinspired functionally graded structure allows the integration of dissimilar materials without the formation of severe internal stress and combines diverse properties into a single material system [45–47] . This innovative technology has been applied in the medical and dental fields [48–56] . The graded structure eliminates the sharp interface resulting from traditional core-veneer fabrication, eliminating the potential for delamination between the layers [57] . Graded transitions can also reduce stress concentrations at the intersection between an interface and a free surface [58,59] . Similarly, the local driving force for crack growth across an interface can be increased or reduced by altering the gradients in elastic and plastic properties across the interface [60,61] . The bioinspired functionally graded structure can be seen as the precursor to recent studies. Thus, this article reviews the current status of functionally graded dental prostheses inspired by the dentino-enamel junction (DEJ) structure and the linear gradation in Young's modulus of the DEJ. 2 Natural human enamel 2.1 Microstructure and function of enamel A human tooth consists of pulp, enamel and dentin. The natural tooth has overall properties superior to artificial crowns [47] . Therefore, knowledge of the structure of the human tooth is very important for the design of artificial dental crowns. Human enamel contains on average 95% inorganic substance, 4% water and 1% organic substance by weight, or 87% inorganic, 11% water and 2% organic component by volume [63] . 
Hydroxyapatite substituted with carbonate and hydrogen phosphate ions is the largest mineral constituent, 90–92% by volume. The remaining constituents are organic matter and water. Both enamel protein [64] and water [65] are more abundant in inner enamel close to the dentino-enamel junction. Water in permanent enamel is in the form of free and bound water [66] . Free water refers to the components located in small spaces of enamel, while bound water means those combined with peptide chains or crystal lattices. A study with hydroxyapatite suggested that some of the water in enamel is more firmly bound to the mineral [67] . Although it is only a minor part of enamel, water plays an important role in enamel's function, because dehydration changes the mechanical properties of enamel significantly [68] . Water forms hydrogen-bond bridges across adjacent peptide chains and maintains the functional conformation of protein remnants and collagen fibers in mature enamel [69] . Fox [70] proposed that water is essential in explaining the load-bearing behavior of enamel, as in, for instance, the “stiff sponge” model, in which enamel is considered a stiff sponge from which liquid is expelled in compression and drawn in again when the load is released. Most of the organic substance is protein, whose content changes dramatically during normal development, ranging from about 20% protein by weight during the secretory stage to 7% at the beginning of the maturation stages. Ultimately, the ameloblasts remove almost the entire original matrix as mineralization progresses. As a result, fully developed normal human enamel contains only ∼1% protein by weight, which is the remnant of the developmental matrix proteins [71] . 
The organic matrix in mature enamel is a multi-component protein/peptide mix, which lies between crystallites and has the function of gluing hydroxyapatite crystallites together, thereby maintaining the hierarchical structure of enamel. Human enamel consists of ∼5 μm diameter rods encapsulated by ∼1 μm thick protein-rich sheaths, arranged parallel to one another in a direction perpendicular to the dentino-enamel junction, from dentin to the outer enamel surface. Crystallite plates in the central part of the rod are parallel to the rod axis, while those near the edge of the rod usually lie at an angle of about 15–45° to the longitudinal axis of the rods [72] . The rod unit is the most important level for understanding the microstructure and function of enamel. 2.2 Mechanical behavior of natural enamel As the outer cover of teeth, enamel must retain its shape as well as resist fracture and wear during load-bearing function for the life of the individual. Understanding the fracture properties and crack propagation process of enamel is important for both clinicians and material scientists. The anisotropic microstructure of the enamel, such as rod orientation, and the organic components control the anti-fracture ability of enamel. The dominant rods are primarily oriented so as to approach the outer tooth surface in an approximately perpendicular orientation, in order to increase hardness and reduce wear. Interconnections between rod and interrod, and complex cleavage planes, limit the critical crack size and the uncontrolled crack propagation that would otherwise lead to premature fracture [73] . The amount of anisotropy may not only reflect the balance between wear and fracture resistance, but may also reflect a balance between differing vectors of functional stress as well as the transfer of occlusal loads to the resilient supporting dentin [74] . 
Connections between adjacent rods via the interrod region, and the presence of interrod crystallites oriented in a plane different from the main rod direction, have been discerned in cross-sectional and longitudinal-section scanning electron micrographs [75,76] . The variation of crystal directions results from the bio-fabrication process during the maturation of enamel and is essential in shielding cracks. Rasmussen et al. [77] showed that fracture in enamel is anisotropic with respect to the orientation of the enamel rods, with a work of fracture of 13 J/m² for fracture parallel to the rods but of the order of 200 J/m² for fracture perpendicular to them; fractographs of enamel showed that the enamel rods behaved as integral units during controlled fracture. Xu et al. [78] showed that cracks in the enamel axial section were significantly longer in the direction perpendicular to the occlusal surface than parallel to it. Cracks propagating toward the dentino-enamel junction were always arrested and unable to penetrate dentin. The fracture toughness of enamel was not single-valued but varied by a factor of three as a function of enamel rod orientation. White et al. [76] found that enamel was approximately three times tougher than geologic hydroxyapatite, demonstrating the critical importance of biological manufacturing. Moreover, they suggested that enamel is a composite ceramic with the crystallites oriented in a complex three-dimensional continuum. Zhou and Hsiung [79] found that enamel demonstrates good resistance to penetration, and they indicated that the minor organic matrix significantly regulates the mechanical behavior of enamel. Although most of the enamel organic matrix is removed during mineralization and maturation, some protein, notably ameloblastin, is retained, primarily at the incisal edges and proximal sides of rod boundaries, defining a rod sheath [75] . 
This prevents cracks from advancing straight through enamel to cause catastrophic macro-mechanical failure; instead, the damage spreads laterally and the energy is absorbed over a larger volume. In addition, the presence of minute quantities of protein remnants could allow limited differential movement between adjacent rods; limited slippage could reduce stresses without crack growth. The minor components of enamel, protein remnants and water, have a profound plasticizing effect. As mentioned previously, the protein matrix behaves like a soft wrap around the mineral platelets, protecting them from the peak stresses caused by external loads and homogenizing the stress distribution within the composite structure. At the most elementary structural level, natural biocomposites exhibit a generic microstructure consisting of staggered mineral bricks. It has been proposed that, under an applied tensile stress, the mineral platelets carry the tensile load while the protein matrix transfers the load between mineral crystals via shear [80] . The strength of the protein phase in a biological material is amplified by the large aspect ratio of the mineral platelets. Moreover, the larger volume concentration of protein significantly reduces impact damage to the protein–mineral interface ( Fig. 1 ). The inorganic content has been reported to vary from the outer enamel surface to the dentino-enamel junction: many investigators have reported that the mineral content [64,81] and the density [66] decrease toward the dentino-enamel junction. Some studies on the mechanical properties of human enamel are presented in Table 1 . 
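The staggered brick-and-mortar load sharing described above is commonly quantified with the tension-shear chain model (Jäger–Fratzl/Gao type), in which the stiff platelets carry tension and the thin protein layers transfer load in shear, so the composite compliance is the sum of a protein-shear term and a mineral-tension term. The sketch below uses that textbook formula with illustrative parameter values; the mineral fraction, moduli and aspect ratios are assumptions for demonstration, not measurements from the cited studies:

```python
def tension_shear_modulus(phi, rho, E_m, G_p):
    """Effective Young's modulus (GPa) of a staggered platelet composite
    under the tension-shear chain model: stiff mineral platelets carry
    the tensile load, and the soft protein layers between them transfer
    load in shear. Composite compliance = shear term + tension term.

    phi : mineral volume fraction (0..1)
    rho : platelet aspect ratio (length / thickness)
    E_m : Young's modulus of the mineral phase, GPa
    G_p : shear modulus of the protein phase, GPa
    """
    shear_term = 4.0 * (1.0 - phi) / (G_p * phi ** 2 * rho ** 2)
    tension_term = 1.0 / (phi * E_m)
    return 1.0 / (shear_term + tension_term)

# Illustrative inputs (assumptions, not measured values): ~90% mineral
# by volume as in enamel, hydroxyapatite stiffness ~100 GPa, and a soft
# protein phase with ~1 GPa shear modulus.
E_slender = tension_shear_modulus(phi=0.90, rho=30.0, E_m=100.0, G_p=1.0)
E_stubby = tension_shear_modulus(phi=0.90, rho=10.0, E_m=100.0, G_p=1.0)
```

Raising the aspect ratio from 10 to 30 pushes the predicted modulus toward the rule-of-mixtures bound (phi × E_m = 90 GPa here), which is the sense in which a large platelet aspect ratio compensates for the softness of the protein phase.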
3 Microstructure and behavior of dentino-enamel junction Natural teeth are composed of layered structures, dentin and enamel, bonded by a functionally graded dentino-enamel junction (DEJ) layer [97–99] . Marshall et al. [92] stated that the DEJ acts as a bridge between the hard, brittle enamel ( E ∼ 70 GPa) and the softer, durable dentin layer ( E ∼ 20 GPa), allowing a smooth Young's modulus transition between the two structures ( Fig. 2 ). Huang et al. [51] studied the microstructure of the DEJ and reported that collagen fibrils from the dentin gather into coarse bundles and penetrate across the junction, anchoring into the enamel. The hydroxyapatite is continuous across the junction. The interface is not smooth; instead, it is a series of linked semi-circles, or scallops, that increase the contact area, and thus the adhesion, when the DEJ serves as the bond between dentin and enamel. It also prevents cracks that originate in enamel from penetrating into the dentin. Lin and Douglas [99] noticed extensive plastic deformation, about 8%, accompanying the fracture process in the DEJ. Correspondingly, microscopic analysis revealed clear evidence of crack-tip blunting and crack deflection. The parallel-oriented coarse collagen bundles at the DEJ may play a significant role in resisting cracks. Likewise, White et al. [100] investigated DEJ failure mechanisms by performing micro-indentation tests across the DEJ. Their results showed that the DEJ does not undergo catastrophic interfacial delamination; instead, the damage is distributed over a broad zone. Marshall et al. [92] and Fong et al. [93] used nanoindentation tests to measure the Young's modulus of the natural DEJ area. Their results showed that, within the DEJ region, the Young's modulus varies from ∼70 GPa for enamel to ∼20 GPa for dentin. 
The fracture results [85] once again demonstrated that it is extremely difficult to initiate cracks in dentin at the DEJ, or to propagate cracks from enamel to dentin across the DEJ. Featherstone et al. [102] and Meredith et al. [103] reported that the hardness and modulus of elasticity were highest at the outer surface of the enamel and decreased toward the DEJ. He and Swain [104] reported that inner enamel has lower stiffness and hardness but higher creep and stress-redistribution abilities than its outer counterpart. They attributed this observation to the gradual compositional change throughout the enamel, from the outer region near the occlusal surface to the inner region near the DEJ. The gradients in the elastic modulus of the tooth have been attributed to the distribution of the mineral phase, while the different toughening mechanisms in the natural tooth have been attributed to the collagen microstructure and water content. They suggested that enamel can be regarded as a functionally graded natural biocomposite. The natural tooth is a remarkable example of nature's ability to design a complex and functional composite. To replace the mechanical function of the tooth from a restorative perspective, it is important to study not only its localized tissue properties but also its bulk structural behavior. Nonetheless, more research is necessary to comprehend the mechanisms by which tooth structures resist functional forces in the mouth. Thus, the mechanical properties and microstructural features of dental enamel are important for understanding stress dissipation in the tooth, for developing biomimetic restorative materials, and for the execution of clinical dental preparations. 4 Bioinspired functionally graded approach Learning from nature, materials scientists increasingly aim to engineer graded materials that are more damage-resistant than their conventional homogeneous counterparts. 
This is particularly important at surfaces or at interfaces between dissimilar materials, where contact failure commonly occurs. Many engineered materials are graded in some manner, but functionally graded materials (FGMs) are characterized by a gradient purposefully formed through compositional or microstructural design. An FGM is a material with engineered gradients of composition, structure and/or specific properties, designed to be superior to a homogeneous material composed of the same or similar constituents [105–108] . The aim of producing an FGM is to obtain a material with different characteristics on its two opposite faces ( Fig. 3 ). The properties of such an FGM can mimic gradients that occur naturally, including the graded elastic modulus in hard tissues such as human enamel and the dentin–enamel junction [109] . This technology is designed to improve the performance of materials [110–113] . Although the efficacy of FGMs has been recognized since the early 1970s [114] , the field did not take off until the mid-1980s, probably owing to a lack of suitable fabrication methods until that time. The concept was later extended to different applications such as coatings, packaging, optics and biomedicine. In the biomedical field, several approaches have been used to develop functionally graded biomaterials for implants [115–120] . With the methods currently available to synthesize and process materials, gradations in composition, structure, and properties can be engineered over length scales ranging from nanometers to meters. A wide range of processing technologies is now available for fabricating FGMs, such as powder metallurgy [121] , layer stacking [56] , glass infiltration [122] , centrifugation [123] , electrophoretic deposition [124] , plasma spraying [125] , direct-write assembly [126] and rapid-prototype color ink-jet printing [127] . 
5 Dental prostheses mimicking the dentino-enamel junction behavior Among the processing methods mentioned above, glass infiltration is particularly suitable for the fabrication of all-ceramic restorations [128] . It combines an esthetic, low-modulus, low-hardness glass “veneer” with a high-strength ceramic “core”, without a sharp interface between the materials ( Fig. 4 ). The lack of an interface due to grading improves interfacial bond strengths, reduces residual stresses, and eliminates delamination. The processing of these structures is simple and straightforward, and can be readily adapted to CAD/CAM technology [128,130,131] . 5.1 Graded glass-zirconia structures Glass-zirconia structures with a gradual elastic modulus may be created using the infiltration method [128] . Zirconia templates are heat-treated at a temperature somewhat lower than the sintering temperature of zirconia, combining glass infiltration and zirconia densification into a single process [128,132] . In this way, the glass infiltration depth can be tailored by manipulating the porosity of the zirconia templates, and the grain growth and/or destabilization of the tetragonal zirconia phase [133] associated with post-sintering heat treatment can be prevented. Because the coefficients of thermal expansion and Poisson's ratios of the infiltrating glass and the zirconia (3Y-TZP) are closely matched, no significant long-range thermal stresses develop in the graded structure [134] . The resultant structure consists of a thin, outer residual glass layer followed by a graded glass-zirconia layer at both the top and bottom surfaces ( Fig. 5 ). 5.2 Graded glass-alumina structures Glass-alumina graded structures may be produced by infiltrating dense alumina surfaces with silica-based glasses [130,134,135] . The transition of the elastic modulus from the graded glass-alumina surface to the alumina core is continuous, following a power-law relationship [136,137] . 
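The continuous power-law transition just described can be sketched as a one-dimensional modulus profile through the graded layer. The endpoint moduli, layer thickness and exponent below are assumed round numbers for illustration, not values reported in the cited studies:

```python
def graded_modulus(z, d, E_surface, E_core, n):
    """Power-law elastic-modulus profile through a graded layer.

    z         : depth measured from the outer, glass-rich surface
    d         : thickness of the graded layer (same units as z)
    E_surface : modulus at the glass-rich surface, GPa
    E_core    : modulus of the dense ceramic core, GPa
    n         : power-law exponent setting the shape of the gradient
    """
    frac = min(max(z / d, 0.0), 1.0)  # clamp depths outside the layer
    return E_surface + (E_core - E_surface) * frac ** n

# Assumed round numbers: a silicate glass near 70 GPa at the surface
# grading to dense alumina near 390 GPa in the core, over a 100 um layer.
profile = [graded_modulus(z, d=100.0, E_surface=70.0, E_core=390.0, n=2.0)
           for z in (0.0, 50.0, 100.0)]
```

The profile rises monotonically from the glass surface value to the core value with no jump, which is precisely the feature that removes the sharp interface and its stress concentration.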
The resultant structure consists of a thin, outer residual glass layer followed by a graded glass-alumina layer, sandwiching a dense alumina core ( Fig. 5 ). Inspired by the microstructure and mechanical properties of natural teeth, synthetic functionally graded materials have been proposed to mimic the DEJ. Francis et al. [62] described a procedure to produce a DEJ-like interface and enamel coating that involved depositing slurries of oxide or glass powder by a draw-down blade method, drying, and then heating at higher temperature. They used an alumina-glass or alumina-polymer composite to mimic the dentin and a calcium phosphate-based coating to mimic the enamel. Bonding between the two materials was accomplished by a eutectic melt in the CaO–Al2O3–SiO2 system. The interpenetration in this DEJ-like interface originates from a solidified melt phase penetrating into the dentin-like material. Huang et al. [51] added a bioinspired FGM layer between the dental ceramic and the dental cement and investigated the effects of the functionally graded layer on the stress in the crown and its surrounding structures. The functionally graded layer was shown to promote significant stress reduction and improvements in the critical crack size, and they concluded that the low stress concentrations were associated with the graded distributions in the DEJ. This provided new insights into the design of functionally graded crown architectures that can increase the durability of future dental restorations. Rahbar and Soboyejo [54] used a combined computational and experimental effort to develop crack-resistant multilayered crowns inspired by the functionally graded DEJ structure. The computed stress distributions showed that the highest stress was concentrated at the ceramic outer layer of the crown and was reduced significantly toward the DEJ when a bioinspired functionally graded architecture was used. 
They reported that the bioinspired functionally graded layers also promoted improvements in the critical crack size. Suresh [122] established that controlled gradients in mechanical properties offer unprecedented opportunities for the design of surfaces with resistance to contact deformation and damage that cannot be realized in conventional homogeneous materials. Graded dental restorations have been shown to display improved features relative to conventional ones, namely higher resistance to contact and sliding [122,138,139] ; higher adhesion of porcelain to the substructure (metal or ceramic) [140–142] ; improved esthetic properties; and improved behavior under fatigue conditions [142] . Another important issue that FGM design can address is the reduction of the thermal residual stresses that remain at the metal–ceramic interface after the cooling cycles that follow porcelain firing. These stresses are further magnified when there is a significant difference between the thermal expansion behavior of the metal and that of the porcelain. Depending on the residual stress level remaining in the crown, together with the stresses arising from occlusal loads, catastrophic failure of the restoration can occur. FGMs have been shown to significantly decrease the thermal residual stresses formed at metal–ceramic interfaces in other fields of application [143] . Some studies demonstrated that when the contact surface of alumina or silicon nitride was infiltrated with aluminosilicate or oxynitride glass, respectively, the graded glass/ceramic surfaces produced in this manner offered much better resistance to contact damage, with and without a sliding action, than either the constituent ceramic or the glass [136,144,145] . A number of studies have investigated the effect of an elastic modulus that increases with depth from the surface on the resistance to contact damage. 
They demonstrated that veneer failure and bulk fracture may be substantially mitigated by controlled gradients of elastic modulus within the restoration layer. Such graded structures exhibit significantly higher resistance to fatigue sliding-contact and flexural damage relative to veneered and monolithic core ceramics. This is because the gradient diminishes the intensity of tensile stresses and simultaneously transfers these stresses from the layer surface into the interior, away from the source of failure-inducing surface flaws [128,130–141] . 6 Clinical implications In clinical applications, these graded alumina materials can be used as monolithic crowns and bridges. Although graded alumina has limited translucency, the external glass layer and the graded glass-alumina layer provide the necessary shade options. In addition, color stains can be applied to the surface of the external glass layer using a powdered glass slurry of similar composition to the infiltrated glass. This staining technique has been used in the Empress system to improve the esthetic outcome of a single-color pressed block of glass ceramic and is well established in esthetic dentistry [146–148] . Also, the cementation surface of graded restorations can be etched with hydrofluoric acid and silanized to facilitate a resin–cement bond. The use of zirconia in crowns and bridges has increased over recent years, owing to esthetic and biocompatibility demands. However, porcelain-veneered zirconia restorations suffer unexpectedly high chipping rates, regardless of the manufacturer [149–153] . Additionally, dental crowns generate over $2 billion in revenue each year, with 20% of crowns being all-ceramic units [39] , and aging populations will drive the demand for all types of dental restorations even higher [34] . If these chipping rates could be reduced, zirconia-based all-ceramic prostheses would become more widely used, addressing a quality-of-life issue [154] . 
A great demand for the development of improved dental crowns has been stimulated by the large and ever-growing dental crown market [155] . Graded glass-zirconia structures offer a simple remedy. Zirconia cores are, however, only a portion of the all-ceramic restoration. Alternative monolithic graded glass-zirconia restorations, without a porcelain veneer, are recommended and could be used successfully and economically in posterior applications. These restorations are intended to eliminate the vulnerable porcelain veneer while providing superior strength and esthetics. The color characterization of graded glass-zirconia restorations is achieved by the external residual glass and subsequent staining. Accordingly, several studies have developed a straightforward protocol for fabricating anatomically correct zirconia crowns and bridges with graded surfaces [136–144] . These studies found that restorations made from graded glass-zirconia are orders of magnitude more resistant to sliding-contact damage than current porcelain-veneered zirconia systems. The graded layer also enhances the flexural fracture resistance of zirconia, allowing the use of thinner restorations in highly conservative restorative protocols that preserve tooth structure. 7 Conclusions To replace the mechanical function of the tooth from a restorative perspective, it is important to study not only its localized tissue properties but also its bulk structural behavior. Therefore, the functionally graded dental prostheses inspired by the DEJ have been reviewed. These prostheses, such as graded glass-zirconia and graded glass-alumina structures, offer better resistance to immediate flexural damage, better esthetics, and potentially better veneering and cementation properties than homogeneous ceramic materials. 
The further development of the grading technology could potentially lead to superior long-term clinical performance for dental prostheses. Conflict of interest The authors declare that they have no conflict of interest. | [
"LAWN",
"STUDART",
"RAHAMAN",
"AKAGI",
"GARINO",
"HANNOUCHE",
"JAZRAWI",
"YASUDA",
"BURKE",
"KELLY",
"KELLY",
"JARRETT",
"KELLY",
"LAWN",
"WILLMANN",
"YESIL",
"OZCAN",
"ANUSAVICE",
"SWAIN",
"RIZKALLA",
"LAWN",
"SOBOYEJO",
"LANG",
"PROCHAZKOVA",
"KHAMAYSI",
"VALENTIN... |
5ba4844417de40c29266eaaf59b672ee_Invited review resource allocation mismatch as pathway to disproportionate growth in farm animals p_10.1017_S1751731117003330.xml | Invited review: resource allocation mismatch as pathway to disproportionate growth in farm animals – prerequisite for a disturbed health – CORRIGENDUM | [
"Huber, K."
] | null | doi: 10.1017/S1751731117002051, published by Cambridge University Press, 10 August 2017. Authors of a cited reference (Hannon, B. M., and M. R. Murphy. 2016. Toward a theory of energetically optimal body size in growing animals. J. Anim. Sci. 94(6):2532-2541) disagreed with the wording used to summarise and highlight their findings. An exchange of correspondence took place between MR Murphy and K. Huber, which resulted in a reformulation. The correct formulation is shown below: “Hannon and Murphy (2016) discussed the idea that maintenance energy cost per unit of body size decreases initially with increasing size. This might be in favor of thermoregulation. Maintenance energy cost per unit size reaches a nadir from where it rises again when body size increases above the optimal body size. At optimal body size MEm per unit size is minimal.” The original text in the section ‘Juvenile growth and development’ was: “Physiologically, body growth most likely aimed to establish an energetically optimal body size in young animals (Hannon and Murphy, 2016). Hannon and Murphy (2016) re-discussed the idea that maintenance energy costs (R) decrease with increasing body size (S) in favor of thermoregulation. R/S reaches a nadir from where is [sic] rises again when body size increases above the optimal body size. At optimal body size MEm is minimal.” The author apologises for the error. | [
"HUBER"
] |
a857a45ec0b041ccb5660a864b189484_Highly pathogenic avian influenza in Sekong province Lao PDR 2018 Potential for improved surveillan_10.1016_j.ijid.2020.09.1001.xml | Highly pathogenic avian influenza in Sekong province Lao PDR 2018 – Potential for improved surveillance and management | [
"Annand, E.",
"High, H.",
"Wong, F.",
"Phommachanh, P.",
"Chanthavisouk, C.",
"Happold, J.",
"Dhingra, M.",
"Eagles, D.",
"Britton, P.",
"Alders, R."
] | null | Background: In October 2018, the first confirmation of highly pathogenic avian influenza in the south-eastern provinces of Lao PDR resulted from collaboration between Lao PDR animal and human health officers and an Australian veterinarian residing locally as part of anthropological fieldwork. Case description: Market-purchased poultry introduced into a household flock resulted in 18 of 33 poultry being found dead or observed with clinical signs over one week, which variably included comb discolouration, diarrhoea, lethargy, head tilt and gait changes. Some affected birds were consumed. Post-mortem examination revealed subcutaneous and intraabdominal haemorrhage and mild discolouration of the comb and wattle. National Animal Health Centre testing confirmed A(H5N1). The nineteen remaining birds were euthanised with the consent of the family owner and with compensation. Government veterinarians decontaminated the area and communicated the importance of reporting and of not consuming sick birds. Trace-back testing was negative. An epidemiology team conducted human health interviews and provided education. Humans tested negative by PCR. Anthropology interviews provided insights into local preferences in chicken raising, consumption and discarding of diseased chickens; potential interspecies transmission of flu-like illness; reporting behaviour; and local causal explanations of ‘seasonal illnesses’. The virus was characterised as clade 2.3.2.1c A(H5N1), with sequence alignment closest to viruses detected in Myanmar and Vietnam. Discussion: Veterinary and anthropology observations afforded insight into the clinical and epidemiological characteristics of disease in village poultry; the importance and availability of laboratory submission and testing; biosecurity weaknesses at the market; the beliefs, perspectives and practices of local people; and impediments to surveillance of animal and zoonotic diseases. The ‘commonness’ of mass poultry mortality and a lack of perceived local benefit were given as justification for not reporting. 
Virus characterisation suggested persistence within SE Asian trade lines and spread to village poultry via market interfaces, rather than new strains evolving in endemic village populations. Conclusion: Overcoming the combined impediments to reporting, namely that (a) high-mortality disease is perceived as normal and (b) nothing can be done (by the village or the authorities) to change this ‘normal’, appeared key to improved surveillance. Poultry vaccination programs have the potential to reduce the frequency of mass-mortality events, and continued efforts to design and implement effective and sustainable biosecurity practices at poultry markets are justified. | [] |
bc66e9c86d5f4f70b9c7aaf7eee2834c_Primary extraperitoneal rectum lymphoma in AIDS patient_10.1016_j.jcol.2019.11.488.xml | Primary extraperitoneal rectum lymphoma in AIDS patient | [
"Garcia, Ana Maria Stapasolla Vargas",
"Sangali, Marlei",
"Filho, Antoninho Jose Tonatto",
"Lara, Caroline",
"Silva, Cibele Corbellini Rosa da",
"Saturnino, Marcos Paulo Barreto",
"Carvalho, Luciano Pinto de"
] | Introduction
Gastrointestinal lymphoma can be classified as primary or secondary, a distinction that is important for diagnosis and subsequent treatment. Primary gastrointestinal lymphoma of the rectum is rare, and data in the medical literature are therefore lacking. Its incidence has been increasing, which may be related to the wider use of immunosuppressive therapy and to immunosuppressive diseases (such as AIDS).
Methodology
Nineteen articles, retrieved through online searches of the SciELO and PubMed databases, were reviewed. The goal was to increase the data available regarding this pathology and improve its therapy.
Discussion
Primary GI lymphoma of the rectum presents with hematochezia, rectal pain and changes in bowel habits. PET/CT is the first-choice exam for further investigation; however, abdominal CT and MRI reveal sufficient information and are much more available in daily practice. Plasmablastic lymphoma is an aggressive subtype and is usually associated with AIDS. There are no treatment protocols available for this specific type of lymphoma, and colonic lymphoma therapy (such as EPOCH and CHOP) is usually used for these patients.
Conclusion
Given how rare this pathology is, this article aims to improve the available data and provide useful information regarding diagnosis and therapy. | Introduction Primary gastrointestinal lymphoma is a rare condition, defined as lymphoma involving the Gastrointestinal Tract (GIT) or presenting with gastrointestinal symptoms. Lymphoma with secondary involvement of the GIT is more commonly found, and this differentiation guides the treatment. 1,2 Primary colonic lymphoma is rare, accounting for only 0.2%‒0.4% of all colon cancers, 10%‒15% of all primary gastrointestinal lymphomas and about 30% of extra-nodal lymphomas. The most commonly affected sites are the stomach, followed by the small intestine and the ileocecal transition. 3–5 The most common colonic location is the caecum (70%), followed by the rectum and ascending colon. 6 Intestinal lymphomas can be classified into B-cell lymphomas (85%) and T-cell lymphomas (15%). Among B-cell lymphomas, mantle cell lymphoma has a worse prognosis, whereas Mucosa-Associated Lymphoid Tissue (MALT) lymphomas have a better prognosis than other B-cell tumors. 6 Studies suggest that the incidence of primary gastrointestinal lymphoma has increased, which may be explained by the presence of immunosuppression, whether mediated by Acquired Immunodeficiency Syndrome (AIDS), by an increase in the prescription of immunosuppressive drugs used after transplantation, or by treatments for autoimmune diseases. 7 Drugs related to an increased incidence of lymphomas include thiopurines (azathioprine and 6-mercaptopurine) 7,8 and, to a lesser extent, anti-Tumor Necrosis Factor (anti-TNF) therapy. 9–11 Methodology and objectives In total, 19 articles retrieved from the SciELO and PubMed databases were reviewed in order to increase the number of reports regarding this pathology and consequently expand the available knowledge, aiming at improving the therapy and, particularly, the diagnosis of this type of lymphoma. 
Case report A 19-year-old male patient was seen in the HNSC emergency department with a one-year history of anal bleeding (bright red blood), associated for the past month with pain and an anal tumor, for which at least two drainage attempts had been made. Prior to hospitalization, he had hypothyroidism, clinically managed with hormone replacement, in addition to depression with irregular psychiatric treatment. Anal inspection revealed a right posterolateral perianal lesion, about 6 cm in diameter, hardened, without drainage of secretion, but with incisions in the central portion. The overlying skin showed hemorrhagic suffusion ( Figs. 1 and 2 ). Rectal examination revealed a hypotonic sphincter in a 3 cm anal canal, with an apparently hardened lesion in the right posterior and lateral anal canal, adjacent to the described perianal tumor. There was no blood or pus. Mild anemia was evidenced on initial examination. A pelvic MRI scan ( Fig. 3 ) demonstrated an 11.2 × 7.6 × 7.4 cm tumor extensively invading the right levator ani muscles (mainly the puborectalis). In the right intergluteal sulcus there was a small area with fluid inside (1.2 × 1.0), with a probable fistulous path to the skin. The prostate and seminal vesicles were displaced anteriorly, without invasion and with a preserved capsule contour. A cleavage plane with the sacrum was defined; there were prominent bilateral inguinal lymph nodes, especially on the right (largest = 1.1 × 0.9), and no lymph node enlargement or free fluid in the pelvis. Subsequently, an incisional biopsy of the lesion was performed under general anesthesia, with anatomopathological results compatible with large-cell non-Hodgkin's lymphoma and an immunohistochemical profile compatible with a B-cell immunophenotype, plasmablastic lymphoma, positive for CD10, CD20, CD138, Ki-67 and Bcl-2, and negative for EBV. During the investigation, he was also diagnosed with HIV infection, with significant immunosuppression (CD4 count of 37 and viral load of 541,213 copies). 
Afterwards, the patient was transferred to the hematology department for therapeutic planning; during this period, he presented compressive urinary tract symptoms (which responded to doxazosin) and psychiatric decompensation, requiring antipsychotic drugs because of a high risk of suicide. After a bone marrow biopsy negative for lymphoma infiltration, chemotherapy with etoposide + doxorubicin + vincristine (EPOCH) was started. Antiretroviral Therapy (ART) was started with an alternative regimen (abacavir + lamivudine + dolutegravir), associated with prophylactic trimethoprim-sulfamethoxazole. After the second cycle of chemotherapy, a control CT scan showed complete regression of the rectal lesion. The chemotherapy regimen was maintained until the fourth cycle, after which PET-CT demonstrated no areas of abnormal metabolic activity characteristic of active lymphoproliferative disease and no parietal thickening or abnormal metabolic increase in the rectum (Lugano score 1). Discussion The clinical, radiological and endoscopic features of primary gastrointestinal lymphoma are nonspecific, which may hinder the diagnosis, making the clinical picture often indistinguishable from that of other colon diseases, whether neoplastic or inflammatory. When lymphoma has its primary site in the rectum, the main manifestations are bleeding, rectal pain, tenesmus and a change in bowel habits (diarrhea or constipation). 12 PET/CT is the imaging modality of choice for analysis of the mass, areas of stenosis and lymph node involvement, but imaging tests such as pelvic CT and MRI are often sufficient to identify the lesion and are more widely available in clinical practice. Colonoscopy can show variable mucosal involvement, whether a mass, ulceration or infiltration, 13 and allows biopsies for histopathological diagnosis. 
14 Plasmablastic lymphoma (PBL) is classified by the World Health Organization as a type of mature B-cell lymphoma that expresses plasma cell antigens (CD38, CD138, MUM1) and common B-cell antigens (CD20, CD19, PAX5) with negative CD45. While its pathogenesis is not yet fully understood, Epstein-Barr virus (EBV) has been shown to be present in most cases. In addition, an association with MYC gene rearrangement has been found in a small percentage of cases. 15,16 PBL was initially identified in the oral cavity of patients with Human Immunodeficiency Virus (HIV), and approximately 80% of PBL cases are associated with this HIV-positive population. 4 PBL has also been found outside the oral cavity, favoring sites such as the gastrointestinal (GI) tract, lymph nodes, and skin. 17 The GI tract is one of the most common extranodal sites. PBL is considered an aggressive lymphoma, with a median overall survival of 14 months. 18 Regarding treatment, there are no dedicated protocols to guide the management of extraperitoneal rectal lymphoma, and cases are treated following the guidelines for intraperitoneal rectal lymphoma. Although both CHOP and EPOCH are common therapeutic choices, standard therapy or treatment guidelines have not yet been established. Autologous transplantation is considered optional and tends to have a good outcome, but there is little experience with this treatment. 17,18 Surgical treatment is rarely needed and is usually indicated in case of complications. 19 Conclusion Primary gastrointestinal lymphomas are themselves rare pathologies, with extraperitoneal rectal lymphoma being an even rarer form. It represents a diagnostic challenge and lacks specific protocols to guide therapy. This report is expected to add data to the currently available literature. 
Conflicts of interest The authors declare no conflicts of interest. | [
"DAWSON",
"LEWIN",
"CHIM",
"HENRY",
"CAI",
"KOHNO",
"GURNEY",
"WONG",
"BEAUGERIE",
"KANDIEL",
"SIEGEL",
"QUAYLE",
"LEE",
"WANG",
"HSI",
"STEIN",
"HANSRA",
"CASTILLO",
"LURIA"
] |
e00fb4611cc5437b9ad4ad8bf0b8d3fa_Assessing the subjective perception of urban households on the criteria representing sustainable hou_10.1016_j.sciaf.2021.e00847.xml | Assessing the subjective perception of urban households on the criteria representing sustainable housing affordability | [
"Ezennia, Ikenna Stephen",
"Hoskara, Sebnem Onal"
] | Housing affordability is typically assessed in economic terms, but housing affordability concerns transcend mere housing cost and its relationship to income, to wider issues of social wellbeing and sustainability. New studies on this subject are increasingly recognizing the need for a wider and more holistic understanding of the criteria representing sustainable housing affordability (SHA). Most key authors have embraced this evolution and view the change as positive, and have analyzed industry professionals, academics as well as stakeholders perception along these lines. However, it is not clear whether the views of households align with this since no study has surveyed household opinions. Regarding this, a comprehensive list of 81 potential sustainability performance criteria (SPC) were determined through the review of existing literature. Based on which a questionnaire survey was designed to assess the opinion of households residing in the 26 urban areas of Nigeria on the criticality of these SPC. Through statistical analysis, 30 critical sustainability performance criteria (CSPC) were established. This study posits that at present the housing affordability concerns in urban areas of Nigeria cannot be restrictedly defined by financial attributes. Our findings provide salient information to policymakers and stakeholders that could aid the sustainable delivery of affordable housing programs. | Introduction Housing is one of the essential social conditions which indicate the living standards of a country's citizens. However, due to rapid rates of urbanization reported world-wide [26] , housing supply has always failed to satisfy demand [34] . Therefore, it has become a common experience globally that a house which is already expensive will become even more expensive. 
This phenomenon, amongst other issues, has pushed the provision of affordable housing to the center of many governments’ agendas around the globe, in order to better the living standards of low- and medium-income households. However, affordable housing alone is insufficient to achieve family and community wellbeing [77] . In recent times, research findings on housing affordability have highlighted substantial relationships between fiscal, social and environmental factors [ 25 , 34 , 59 , 60 , 63 ], regarding housing appropriateness, accessibility, amenity and adequacy [19] . Consequently, recent studies on housing affordability have called for embedding sustainability into the criteria contributing to housing affordability [ 6 , 30 ]. Thus, creating more affordable and sustainable communities implies that closer connections must be established between social, environmental, and economic concerns. Yet very few studies on housing affordability consider the three pillars of sustainability as suggested by housing researchers. According to Awotona [13] , standards and criteria designed specifically to sustain housing provision and its related services for urban low-income households are yet to be devised in Nigeria. To date, much of the perceptive analysis of criteria importance and performance indicators leading to sustainable housing affordability has focused on developed countries [58] . Where a developing country has been the focus of the study [ 6 , 34 , 56 ] or part of the study [ 5 , 20 ], research has tended to concentrate on the perceptions of academicians, professionals, and stakeholders, thus neglecting the views of households and thereby potentially hiding their unique housing experiences and perspectives. It is clear from the housing affordability literature that academicians, scholars, housing professionals and stakeholders are beginning to broaden their views and consider the wide-ranging criteria that shape housing affordability. 
However, it is unclear whether the views of households align with this. Nevertheless, a criteria system must evolve from people's actual housing affordability experience, since they bear the direct brunt of the housing affordability burden. Thus, we bring a different viewpoint to the debate: households' subjective perception. According to Yates et al. [88] , housing affordability can be assessed in accordance with people's subjective experience in managing their housing costs. Therefore, this study enabled household respondents to weigh the wide-ranging criteria and circumstances that affect their housing affordability, with the conviction that households are best positioned to assess the criteria influencing their housing situation. The authors believe that analyzing households' subjective views on the criteria pertinent to sustainable housing affordability can offer information left out of other subjective assessments (e.g., housing professionals' and stakeholders' opinions), can support cost-benefit analysis and policy evaluation, and can aid the identification of potential policy problems. Therefore, the purpose of this study is to identify a comprehensive criteria system through which housing affordability can be holistically and sustainably assessed. The objectives are: (1) to determine criteria importance using households' subjective opinion; (2) to establish whether opinions on criteria importance vary by the respondent's income group (e.g., low or medium income); (3) to determine whether household respondents residing in different regions of the study area have differing opinions on criteria importance; and (4) to develop a framework for achieving sustainable housing affordability within the study area. 
This research is guided by the following hypothesis. Null Hypothesis (H 01 ): Household opinions on criteria representing sustainable housing affordability do not significantly differ based on (a) household income group, and (b) region of residence. The attainment of the research objectives and the testing of this hypothesis will present sound evidence on how to improve the performance of affordable housing programs through the deployment of a comprehensive framework for understanding the role and significance of sustainability in enhanced affordable housing delivery in Nigeria. Housing affordability concept: meaning and definition The phrase “housing affordability” is polysemous and nebulous in meaning because it is used to describe several components of housing need, such as housing condition, housing costs, housing quality, household income and overcrowding. Studies have revealed that it has become a multi-faceted phrase [58] . Consequently, due to its heuristic nature, it has been perceived differently by researchers, who have used various definitions and methodological approaches in measuring it (Mattingly & Morrissey, 2014). However, housing affordability is generally described as households’ ability to access and obtain decent housing without experiencing unwarranted financial hardship ( [51] ; 2017). Such a broad description refers to two aspects: (1) attainability - access to a house at a certain period; and (2) sustainability - the possibility of the household continuing to maintain the house. This implies the ability (or inability) to sustain economic commitments with regard to the housing already obtained, which generally emerge through constraint (e.g., illness or serious injury, unemployment) or through choice (e.g., a more desirable location, a larger house). 
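Since Likert ratings are ordinal, hypothesis H 01 would typically be tested per criterion with a non-parametric rank test. The paper does not specify the test used, so the following is a minimal sketch, assuming a Mann-Whitney U test (normal approximation, average ranks for ties) comparing hypothetical ratings of one criterion from two income groups; the data and the two-group comparison are illustrative, not the study's.

```python
import math

def average_ranks(values):
    """Assign 1-based average ranks to a combined sample (ties share a rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied rank positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(group_a, group_b):
    """U statistic for group_a and a two-sided p-value (normal approximation)."""
    n1, n2 = len(group_a), len(group_b)
    ranks = average_ranks(list(group_a) + list(group_b))
    r1 = sum(ranks[:n1])                      # rank sum of group_a
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))      # two-sided tail probability
    return u1, p

# Hypothetical Likert ratings (1-5) of a single criterion by two income groups
low_income = [5, 4, 5, 3, 4, 5, 4, 5, 4, 4]
medium_income = [3, 4, 3, 2, 4, 3, 3, 2, 3, 4]
u, p = mann_whitney_u(low_income, medium_income)
# H01 is rejected for this criterion only if p < 0.05
```

For comparisons across more than two groups (e.g., the six geopolitical regions), a Kruskal-Wallis test plays the analogous role; library implementations such as `scipy.stats.mannwhitneyu` also apply the tie correction to sigma, which is omitted here for brevity.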
Recent research evidence reveals that approaches addressing housing affordability challenges by incorporating features of sustainability remain uncommon, as does a suitable framework that integrates sustainability into affordable housing programs [ 5 , 34 ]. Therefore, this paper explores the role of social and physical environmental factors in minimizing housing affordability problems. It explores the extent to which researchers have postulated the positive impacts of incorporating sustainability criteria into housing affordability assessment. Thus, the real challenge is to inspire stakeholders and industry professionals through the interpretation of genuinely more sustainable solutions for affordable housing delivery (that is, sustainable housing affordability strategies) from the economic, social-psychological and ecological perspectives, as well as to harness emerging sustainable technological innovations and global experiences and apply them more creatively in order to achieve higher standards of economic and social wellbeing amongst households. Understanding the ‘sustainable housing affordability’ construct Although housing affordability has always been at the core of national policy in several nations, scholars agree that the housing affordability concept is poorly defined both in policy and guidance documents and in the scientific literature. Conceptualizing housing affordability dates back more than 40 years, to the 1970s. However, throughout this period the housing affordability concept has focused solely on the economic dimension, although many aspects of housing exist with no direct market price (see [31] ). 
Haffner & Hulse [38] , in their study on urban housing affordability, argued that the concept has evolved such that the focus is now more on the urban policy challenges of growing inequities in access to urban resources, and less on understanding how housing expenditures contribute to poverty and disadvantage within the domain of social policy. An understanding of affordability in the light of sustainability extends its scope to incorporate environmental and social perspectives alongside the generally accepted economic dimension. Economic viability alone cannot improve housing affordability. Rather, other sustainability concerns must be considered, such as location, transportation routes, neighborhood settings, housing design and job opportunities, amongst a multitude of other issues. Sustainability sets out to correct the domineering influence of economic sophistication over all aspects of living, which has negative climatic effects. It pursues a reasonable distribution of opportunity between present and future generations through resource conservation, whereas economic development encourages uncontrolled resource consumption principally aimed at increasing material wealth. The unsustainable construction practices underlying affordable housing production presently raise concerns that merit attention to sustainability [4] . Many housing initiatives focus on affordable housing provision, yet environmental and sustainability issues are severely neglected, mainly because of the lack of a comprehensive approach towards understanding the sustainable housing affordability construct. The concept of sustainable housing affordability, as detailed in Fig. 1 , was first introduced by Mulliner & Maliene [57] . It integrates criteria derived from the concepts of housing affordability, sustainable housing, and sustainable communities. 
It then draws a closer link between environmental sustainability and social justice, and connects people’s wellbeing with environmental wellbeing. In other words, it can be described as the combination of the ability to own a house at a minimal price, in a safe environment that enables healthy living, covering other sustainable aspects that relate to more fundamental concepts in, among other areas, micro-economics and social policy. However, there is a dearth of research identifying a holistic set of criteria for the sustainability and performance assessment of housing affordability. Housing affordability is typically assessed in terms of price or rental cost in relation to income, which creates a disconnect between the sustainability and affordability of housing. Reconnecting this link requires establishing a comprehensive list of critical sustainability performance criteria (CSPC) contributing to sustainable housing affordability. Generally, the best-known criteria are those for sustainable housing developed by the UK’s Building Research Establishment – BREEAM (Building Research Establishment Environmental Assessment Methodology) and the US Green Building Council – LEED (Leadership in Energy and Environmental Design) certificates. Both BREEAM and LEED certificates generally comprise sustainability criteria for buildings concerning transportation, location, energy and atmosphere, water efficiency, materials and resources, neighborhood pattern and design, indoor environmental quality, renewable energy systems, infrastructure, waste management, pollution, and health and well-being. These criteria are largely physical requirements and environmental issues, subsumed within a discourse that conflates ‘economic growth’ and ‘development’, hence neglecting the human dimension. 
Recent studies have posited that the conventional assessment and planning processes of affordable housing programs are often not well structured to address social and ecological effects within complex systems [ 59 , 60 ]. However, the sustainable housing affordability concept promotes the consideration of social concerns as well. It has been argued that sustainability should be the spine of affordable housing, promoting the cost efficiency of energy, transportation, and health care [48] . Therefore, it has become evident that housing cost is not the only criterion that needs to be kept in check in affordable housing; the energy efficiency of housing, access to amenities [60] and citizen participation may also need to be promoted to create sustainable and successful living environments. Only a few studies have proposed criteria that must be addressed in achieving sustainable housing affordability (see [57] ; 2015; [34] ). More recently, the housing affordability situation has become even more diverse and complex as a result of the ever-changing approaches in this domain. The understanding that there is no single solution for further reducing cost and achieving enhanced energy savings in affordable housing programs, but rather a series of steps, is necessary to ensure that housing programs target the main issues, and may be a tool for achieving sustainable communities. In their attempts to address this dynamic paradigm, Gan et al. [34] identified twenty-four (24) key sustainability performance indicators (KSPIs), while Chan & Adabre [20] and Adabre & Chan (2019) presented 21 critical success criteria (CSC) and 30 success factors (SFs), respectively, through questionnaire surveys of major industry stakeholders and key housing professionals, as guides for the sustainable development of affordable housing programs. 
Similarly, Mulliner & Maliene [58] established 20 sustainable housing affordability criteria (C1-C20). In order to stimulate a constructive and concrete academic discourse in this domain, these guides were factored together and expanded through an extensive systematic literature review into a more sustainable housing affordability strategy, as illustrated in Table 1 , and were narrowed down to: social sustainability performance criteria (safe and secure, universally designed, amongst others); environmental sustainability performance criteria (resource efficiency in water, waste, and energy, amongst others); and economic sustainability performance criteria (cost efficient over time, amongst others). These performance criteria are centered on the basic requirements of sustainability (i.e., the ability to sustain) and affordability (the ability to afford); on how to incorporate these criteria into specific circumstances, particular cases, and contexts; and, more generally, on how to design a practical sustainability assessment model that acknowledges the important role technology plays, especially in both delivery and implementation. All of these are centered on maximizing social acceptability criteria while minimizing cost and environmental impact criteria. Social (cultural) sustainability performance criteria (SSPC) Poor housing condition is an indicator of poor social conditions [74] . Therefore, a well-articulated affordable housing program can guarantee the positive social conditions required to support and sustain stronger community cohesion outcomes. Social cohesion defines a society which offers opportunities to every one of its members within a framework of accepted institutions and values. A community's cultural needs are therefore also addressed by socially acceptable housing with regard to size, function, affordability, safety, sense of accomplishment, and aesthetics [55] . Consistent with this, Wiedmann, et al. 
[84] opined that the integration of various social groups, lifestyle choices and social services, as well as healthy urban densities, a spatial layout optimized for safety and comfort, and a role as a new landmark and cultural center, are the key social parameters for resolving affordability concerns within the ambit of sustainability. Ultimately, the social objective of housing affordability addresses social exclusion by ensuring decent housing quality, combating spatial segregation by preventing social polarization, and reducing inequalities in wealth and income. Maina [49] reported that in Nigeria socio-cultural criteria such as safety are not adequately considered in choosing locations and housing unit designs, resulting in their abandonment. Environmental (ecological) sustainability performance criteria (ESPC) Bordigon (1998) illustrated that decreasing the strain a home places on the environment can significantly contribute towards attaining a globally sustainable society. Environmental considerations are closely associated with the materials used and their suitability; construction technique and housing system operations; resource efficiency (waste and water); and energy saving and footprint reduction to lessen biodiversity loss. Therefore, there is both an equity and an efficiency imperative to ensure that affordable housing is socially equitable and environmentally sustainable [57] . Material selection for affordable housing construction generally depends on cost, the durability of the building materials, their availability, and their acceptability to users [37] . Building materials can be a major source of indoor air pollution. For instance, nearly 70% of formaldehyde, a known asthma trigger and carcinogen, is used in building materials as a binder for carpets, engineered wood products and insulation, among others [44] . 
This is often not known to the architects who write specifications or the developers who build, nor to the households whose indoor environments are most affected by the materials their homes are constructed with. Yet, given the appeal of low cost, problematic chemicals like formaldehyde are expected to remain in widespread use, especially in housing programs aimed at reducing cost per square foot. Studies have repeatedly extolled the value of adopting energy-saving materials that are harmless to humans and require low levels of technology input as the most suitable for sustainable and affordable housing development [7] . Green improvements to affordable housing could promote positive health outcomes for low-income households, for instance through access to green public space [71] , which is routinely ignored in affordable housing programs [27] . In recent times, heightened interest in eco-friendly living has led to the emergence of environmentally sustainable construction processes [24] such as the green affordable housing concept [32] . These “green” housing integrated design practices consist of measures to increase energy efficiency, utilize non-toxic materials, decrease water usage, and maintain indoor air quality [75] ; they also ensure proximity to community resources such as parks and transit, with daylighting as an essential component of the building design strategy [17] . Hence, they are generally termed environmentally friendly buildings [ 24 , 28 ]. Therefore, if designers and developers are concerned about the impact of their projects on the health disparities of low-income families, or want to ensure that their projects do not isolate residents and concentrate poverty, then they are concerned with the core elements of green building. In the end, the goals of green building are very much aligned with the goals of affordable housing and community development [45,46] . 
Indeed, affordable housing professionals and designers will increasingly need to understand green building techniques to achieve safe, decent, and affordable housing for low-income households. Hence, advocacy efforts should be encouraged to support the expansion of green housing and emphasize healthy community development. Furthermore, disaster resilience should receive special attention [21] , as should mixed land use, which promotes accessibility, minimizes transportation cost, and encourages efficient land use [ 42 , 79 ], in addition to ensuring flexibility and adaptability [79] , which meet the changing needs of households and prevent issues such as greater resource consumption and environmental disruption [69] . Economic sustainability performance criteria (ESPC) Economic sustainability mostly focuses on countries characterized by poor and dysfunctional economies as well as unstable political institutions. The economic aspect entails job opportunities, economic buoyancy and equitable development, which encourage local policies that create more affordable housing, living-wage jobs, mass transit systems, health care, and quality education, as well as the consideration of both initial acquisition cost and energy bills [ 32 , 42 ]. Minimizing the costs of transportation and energy allows low-income households to spend a larger portion of their income on non-housing necessities [ 35 , 40 ]. Studies have found that factors such as house size, monthly installment, and physical characteristics like the number of bedrooms and bathrooms [9] , desirability to would-be occupants [69] , and construction cost are closely associated with economic sustainability criteria. Contractors can utilize cost reduction strategies that also reduce the cost to the environment, e.g., the use of regionally available techniques and materials [ 12 , 43 ] like earthen materials [7] . 
Also, providing stable financial incentives and subsidies is necessary to secure financial viability for developers [69] and for individuals who are unable to rent or pay for a house [1] . Earlier studies raised concerns over governments’ disinvestment in public housing and the replacement of public housing mechanisms with market-driven systems [83] . This is consistent with recent studies suggesting that several countries have reduced or eliminated housing supply subsidies for low-income groups [18] ; the question is whether direct income subsidy is more efficient. Subsidizing housing as a means of poverty alleviation is very questionable, because no research has illustrated positive outcomes of subsidizing households over housing units for households with limited income [53] . The effectiveness of housing subsidies depends on risk management and the price elasticity of housing demand [36] . Tax relief is also known to be an efficient means of reducing the affordability burden of low-income households, particularly those residing in rental housing [86] . Furthermore, access to housing (i.e., the capacity to secure enough mortgage finance to acquire a decent housing unit) is imperative in the acquisition of housing, requiring long-term financing [81] . However, this has always eluded low-income earners. Rolnik [70] observed a global U-turn some decades ago in the prevailing urban and housing policy agendas, influenced worldwide by forces driven by neoliberalism and globalization. The commodification of housing, as well as the continued use of housing as an investment asset in a globalized financial market, has significantly distorted the enjoyment of rights to decent housing. Achieving sustainable housing finance for lower-income groups is an almost unattainable goal in a growing number of countries worldwide and therefore presents a major challenge. 
Methodology and research protocol Identifying potential SPC for sustainable housing affordability (SHA) It is pertinent to identify possible SPC for SHA because there is a dearth of studies on this subject, and some studies on SPC for affordable housing programs fail to provide a comprehensive list. For instance, the study by Ahadzie et al. [8] neglected household income in relation to rental cost and housing price, which are considered significant housing affordability criteria. In addition, the cost of transportation and its relationship to household income was ignored in the criteria system identified by their research. Therefore, to identify relevant SPC contributing to sustainable housing affordability, an extensive review of peer-reviewed articles in highly ranked journals ( Section 2 ) was undertaken. As a result, a holistic set of SPC apposite to sustainable housing affordability was identified ( Table 1 ). This was followed by the questionnaire design, which was piloted and then administered to urban low- and medium-income residents and affordable housing applicants in the 26 urban areas of the 6 geopolitical regions in Nigeria. Before the questionnaire design, a pilot survey was performed on the potential list of SPC for sustainable housing affordability. The reason for this process was to test the comprehensiveness and significance of the possible SPC. One affordable housing district was used in the pilot study, comprising low- and medium-income earners who had experienced or were experiencing housing affordability stress. The respondents were asked to evaluate whether the criteria list contained a suitable number of performance criteria and whether other potentially critical performance criteria should be included in, or expunged from, the list. One criterion was added under the attribute “Public Facilities and Amenities” and the rest under “Architecture and Innovative Design” in the social sustainability performance criteria. 
Consequently, a total of 13 criteria were added to the comprehensive set of SPC through the pilot survey, as shown in Table 1 . The completeness and relevance of the criteria were finalized and confirmed after the pilot study. Study area and research scope This paper is a household-level study focused on determining the critical sustainability performance criteria (CSPC) that influence sustainable housing affordability, using Nigeria as a case study. Nigeria sits on an area of 356,669 sq miles (923,768 sq km) with an estimated population of 190.9 million people [87] ; nearly 48% of this population resides in urban areas. Politically, Nigeria is partitioned into 6 geopolitical regions and administratively into 36 states plus the Federal Capital Territory. The states are divided further into 774 local government areas, and legally the headquarters of these local government areas are established as urban centers (National Urban Development Policy, 2006, cited in [65] ). Urban areas in Nigeria are established based on population and legal or administrative criteria, adopting a threshold population of 20,000 persons as the criterion for defining an urban area. This could mean, according to Ofem [65] , that Nigeria has a total of 774 urban areas. However, an urban area is a continuous urban development of built-up land with high population density and built-environment infrastructure, within a labor market, and without regard for administrative, political or city boundaries [82] . This implies that there are only 26 urban areas in Nigeria, according to Demographia's "World Urban Areas" study (2019). The case study approach was applied to these 26 urban areas, which were preferred on the basis that they best represent Nigeria's housing affordability dilemmas. The study centers on the urban housing sector and is thus concerned with the housing affordability assessment of urban poor households. 
The purpose of restricting the study scope to the urban housing sector is that the problems of urban housing in Nigeria are normally more profound and severe than those of rural housing, both in complexity and in intensity. The 26 urban areas (listed in Table 2 ) in the 6 geopolitical regions of Nigeria, as shown in Fig. 2 , experience higher population growth rates, higher population density, higher property values and land costs, a high degree of in-migration, and greater employment and income inequalities. As a result, slums and squatter settlements, high rents, and overcrowding are common features of the Nigerian urbanscape. Therefore, this research concentrates on the urban sector due to the severity of its housing problems. Another justification for the scope is that the major housing problems in rural areas center on qualitative improvement concerns regarding infrastructure and sanitation for existing units; thus, housing affordability concerns are nearly insignificant in rural areas in comparison with urban areas. The study's relevance to current housing policy reforms in the country was another consideration, given that consecutive policies and housing programs are mainly targeted at urban areas, and most of the contentious housing policy dilemmas and issues the study sets out to debate are mostly applicable to the urban housing sector. Data collection This research comprises a comprehensive literature review to enable the assemblage of a holistic set of SPC for sustainable housing affordability. Altogether, 81 SPC were identified from the literature review ( Table 1 ), with the first objective being to determine the criticality of the SPC from urban households' viewpoint; then to find out the disparities (if any) between respondents, based on income group and region of residence, in the ranking of the established CSPC; and lastly to classify the established CSPC into underlying categories. Section A of the questionnaire requested the respondents' background data. 
These background questions served as filters, enabling comparison of different groups' opinions on criteria importance: gender, age, income, the respondent's current housing situation (e.g., squatter houses, apartment buildings), and the geopolitical region in which the household resides. It is important to determine the reliability of the responses before further analysis is conducted on subsequent data. In Section B, respondents were asked to rate the criticality of the 81 SPC on a 5-point Likert scale (5 = very important; 4 = important; 3 = slightly important; 2 = less important; 1 = least important), representing the importance of each criterion to sustainable housing affordability from the urban households' perspective. This scale was adopted for its relative brevity. Space was provided after the 81st SPC for respondents to list and rate any additional SPC for sustainable housing affordability they identified; this allowed 13 additional SPC to be added to the comprehensive list. To set a general background for participants, a question on the goals (a set of performance outcomes) for sustainable housing affordability immediately preceded the question on CSPC. This set of performance outcomes was intended to solicit respondents' ratings of these outcomes and to pre-inform them of the aim of sustainable housing affordability, so that they could then adequately rate the criticality of the criteria.

Sampling

The convenience sampling technique was used for the collection of primary data. In convenience sampling, survey administration is targeted at willing, available, and accessible respondents [73]. This technique is suitable where adequate information on population size is lacking, e.g.
Nigeria. Findings drawn may therefore not be generalizable; however, with a larger number of respondents, the findings can be representative [73]. The case study utilizes data gathered from two questionnaire surveys carried out between January and June 2019, focusing on urban residents and applicants of affordable housing. The housing experiences of affordable housing applicants can provide clues on how low-income groups choose their housing or behave when confronted with high housing costs; obtaining the views of eligible applicants and occupants of affordable housing schemes therefore provides salient information on the importance of criteria for sustainable housing affordability. By comparison, residents of affordable housing are perceived as direct beneficiaries of affordable housing schemes, hence their housing experiences can shed light on how housing policies shape housing outcomes. The specific techniques used to approach these two target groups are detailed below. For applicants of affordable housing, information was collected through self-completed questionnaires. There were 1315 participants in the survey, reached through three approaches.

Questionnaires obtained from the federal ministry of housing state offices

After careful consideration, state offices within the twenty-six (26) urban areas of the 6 geopolitical regions were surveyed. Questionnaires were administered to applicants by the director(s) of 1 to 2 state offices within every region. As in snowball sampling, respondents were asked to forward the questionnaire to any other potential applicant of affordable housing they considered able to answer it appropriately. It is therefore difficult to pinpoint the actual number of questionnaires distributed through this channel; however, nearly 1864 questionnaires were distributed.
Each applicant was sent an email consisting of a letter of introduction with a concise research information statement, as well as a web-link option to answer the questionnaire via the "Survey Monkey" app. These flexible options made responding convenient and enhanced the response rate: 653 responses were received out of 820 questionnaires administered, a high response rate of 78%.

Email distribution

This technique was used in the region with the highest application rate (about 3,456,000 applications) during the three months of survey work. Applicants' names and email addresses were obtained from the application information on the Federal Housing Authority's (FHA) website; these data are available to the public for 7 days after the information is released. However, only 400 responses were received out of 3036 questionnaires administered, a low response rate of 13%.

Questionnaires obtained from affordable housing districts

To complement the previous techniques and increase the response rate, questionnaires were also administered in affordable housing districts. The authors made small modifications to the applicant questionnaire to explore residents' housing experiences prior to their stay in their current housing. These respondents have resided in their present housing for years and were therefore, at some point in the past, applicants of affordable housing; responses obtained by this approach were thus considered comparable with those collected through the other techniques. Out of 1800 questionnaires distributed in this way, only 262 responses were received, a response rate of 15%. For residents of affordable housing, information was likewise collected through self-completed questionnaires. The affordable housing project with the highest number of residents was chosen in each of the geopolitical regions; these projects are popular affordable housing programs in Nigeria.
1211 responses were obtained out of 4009 questionnaires administered in this manner, a 30% response rate. The authors recognize that the survey has a relatively small sample size and low response rate, which could constrain the representativeness of the results; the findings of this study are therefore indicative and insightful rather than conclusive. Future studies are encouraged to use a larger sample size to generalize the understanding of the issues discussed in this research.

Respondents profile: socio-economic characteristics

The socio-demographic characteristics of the respondents in Table 3 indicate that 54.3% of the household respondents were male and 45.7% female. On aggregate, there are more males than females in the study, and this gender difference is statistically significant (W = 11.533; p = 0.045). The mean age of the respondents is 38.8 years, with a standard deviation of 5.8 years. Table 3 also shows that 543 respondents (21.5%) are homeowners, 1,243 (49.2%) reside in rented apartments, and 740 (29.3%) live in shared houses. About 93.6% of the respondents have a good level of education. On employment status, there is a close match between those fully employed (37.3%) and those unemployed (32.2%); 19.1% are temporarily employed and 11.4% are retired. The income distribution shows that most respondents are low- and middle-income earners (40.6% and 38.5%, respectively), and about 63.9% earn below N100,000 (277 USD) monthly. Household statistics show that most respondents, despite not earning a good salary, have large families of 3 members or more. Most of their house types are apartments/flats and terraced houses, about 10 years old or less.
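The sample composition described above can be cross-checked with quick arithmetic; the following minimal sketch uses only the response counts reported in the text:

```python
# Response counts reported in the text for the four collection channels.
applicant_channels = {
    "state offices": 653,       # of 820 questionnaires administered
    "email distribution": 400,  # of 3036 questionnaires administered
    "housing districts": 262,   # of 1800 questionnaires administered
}
resident_responses = 1211       # of 4009 questionnaires administered

applicants_total = sum(applicant_channels.values())
overall_total = applicants_total + resident_responses

print(applicants_total)  # 1315 applicant participants, as reported
print(overall_total)     # 2526 questionnaires analysed in the study
```

The three applicant channels sum exactly to the 1315 participants reported, and adding the 1211 resident responses yields the 2526 questionnaires analysed later in the paper.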
About 49.1% of the respondents own vehicles while 50.9% do not. Proximity to various facilities was normal, as most facilities are located no farther than 5 km from the respondent's residence.

Analytical tools and techniques

Descriptive statistics

Weighted means and standard deviations were employed in describing the data and evaluating the critical sustainability performance criteria (CSPC). The weighted mean was computed as:

(1) x̄ = Σᵢ₌₁ⁿ wᵢxᵢ / Σᵢ₌₁ⁿ wᵢ

where wᵢ is the weight of the i-th cell and n = Σwᵢ is the sample size of the study. Similarly, the weighted standard deviation is obtained as:

(2) S = √[ Σᵢ₌₁ⁿ wᵢ(xᵢ − x̄)² / (n − 1) ]

where x̄ is the weighted mean and every other variable retains its identity from Eq. (1).

Normality test: one-sample Kolmogorov-Smirnov (K-S) test

The one-sample Kolmogorov-Smirnov (K-S) test was used to ascertain whether the data series follows a normal distribution; it provided evidence against normality. Nonparametric tests were therefore adopted in this study, including the Kruskal-Wallis H test, which was used to compare the respondents' opinions across the geopolitical zones.

Kruskal-Wallis H test

The Kruskal-Wallis test is a non-parametric comparison test for k independent samples or populations. For k ≥ 3 independent populations, the Kruskal-Wallis H statistic is estimated as:

(3) H = [ 12 / (nₜ(nₜ + 1)) · Σⱼ₌₁ᵏ Rⱼ²/nⱼ ] − 3(nₜ + 1)

where k is the number of populations; nⱼ is the number of questions in factor j; nₜ = Σnⱼ is the total number of questions in all factors; and Rⱼ is the sum of the ranks for factor j. The null hypothesis was that there is no significant difference between the mean ratings of the different groups.

Factor analysis

The principal component method of factor analysis was used to extract the critical sustainability performance criteria for sustainable housing affordability in the study area.
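Equations (1)–(3) can be implemented directly. The following is an illustrative pure-Python sketch; the function names and the sample Likert frequencies are the editor's own assumptions, not the study's analysis scripts:

```python
import math
from itertools import chain

def weighted_mean(x, w):
    """Eq. (1): weighted mean of ratings x with weights w."""
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def weighted_sd(x, w):
    """Eq. (2): weighted standard deviation, with n = sum of weights."""
    xbar = weighted_mean(x, w)
    n = sum(w)
    return math.sqrt(sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)) / (n - 1))

def kruskal_wallis_h(groups):
    """Eq. (3): Kruskal-Wallis H for k independent samples (average ranks for ties)."""
    pooled = sorted(chain.from_iterable(groups))
    ranks, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of tied rank positions i+1..j
        i = j
    n_t = len(pooled)
    return 12 / (n_t * (n_t + 1)) * sum(
        sum(ranks[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n_t + 1)

# Hypothetical Likert-scale frequencies for one criterion:
ratings = [5, 4, 3, 2, 1]
counts = [40, 25, 20, 10, 5]
print(round(weighted_mean(ratings, counts), 2))  # 3.85
print(round(weighted_sd(ratings, counts), 2))    # 1.2
```

With identical groups the H statistic is 0, as expected; for real analyses a library implementation (e.g., a statistics package with tie correction) would normally be preferred.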
The operational equations of the factor analysis are given by:

(4) P₁ = a₁₁X₁ + a₁₂X₂ + … + a₁ₖXₖ
    ⋮
    Pₖ = aₖ₁X₁ + aₖ₂X₂ + … + aₖₖXₖ

The confidence level for all inferential statistics was 95%, which implies a 0.05 level of significance.

Results

Respondents' housing affordability stress experience

The 2002 New National Housing and Urban Development Policy (NNHUDP) advocated that on no account should any household spend above 20 percent of its monthly income on the housing units provided by the FHA (Aribigbola, 2008). However, the descriptive statistics in Table 4, with a cluster mean value of 3.22 > 3.00 and an associated standard deviation estimate of 1.073 < 1.581, indicate that the household respondents are experiencing housing need and suffering affordability stress; hence the necessity for this research. This result is in line with studies that have clearly demonstrated that previous affordable housing schemes in Nigeria failed to assist the targeted population [41,64], largely because of the high cost of the housing units offered [43]. Turok [80] argued that housing programs and policy should serve a more expansive purpose beyond a sheer increase in the number of houses: a carefully designed human settlement policy can help lift households out of poverty by providing avenues for people to become more productive, supporting urban areas to function more effectively, and expanding economic activity (jobs and investment).

Ranking of sustainable housing affordability criteria based on households' opinion

A total of eighty-one (81) criteria apposite to sustainable housing affordability were extracted from the literature. These 81 potential criteria comprise nineteen (19) economic sustainability criteria, forty-eight (48) social sustainability criteria, and fourteen (14) environmental sustainability criteria.
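Equation (4) expresses each extracted component as a linear combination of the observed criteria scores. A minimal sketch follows; the loading matrix and respondent scores below are purely illustrative, not the study's fitted loadings:

```python
def component_scores(loadings, x):
    """Eq. (4): P_i = a_i1*X_1 + ... + a_ik*X_k for each retained component."""
    return [sum(a * xj for a, xj in zip(row, x)) for row in loadings]

# Hypothetical loading matrix: 2 retained components over 3 criteria.
A = [[0.8, 0.5, 0.1],
     [0.1, 0.3, 0.9]]
x = [1.0, -0.5, 0.25]  # one respondent's standardised criterion scores

print([round(p, 3) for p in component_scores(A, x)])  # [0.575, 0.175]
```

In practice the loadings aᵢⱼ are estimated from the correlation matrix of the 81 standardised criteria scores; the sketch only shows how scores follow from loadings once estimated.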
Descriptive statistics (mean and standard deviation) were used to extract the key criteria through opinion ranking by the households. The result is presented in Table 5.

Criticality of SPC for sustainable housing affordability

From the comprehensive ranking by the household respondents, the first thirty (30) ranked criteria were extracted and presented in their order of importance, as shown in Table 6. These criteria, considered most critical by the household respondents, form the target for further analysis in this study. Table 6 shows that the most critical SPC for sustainable housing affordability from the urban households' viewpoint is house price in relation to income (ES101). One interesting finding is that the criterion "clean and attractive" (SS303) is rated as the least critical, indicating that the attractiveness of housing perceived as affordable by households in the urban areas of Nigeria is not considered a priority.

Comparison of household views with industry professionals and stakeholders

Comparing the criticality rankings of stakeholders and industry professionals from previous studies with those of the households established in this study indicates that households have distinct and unique views of the criteria relevant to sustainable housing affordability. For instance, in the study of Mulliner & Maliene [58], all four economic-related (housing cost) criteria were ranked among the top four, considering that affordability is commonly assessed and defined in economic terms by academics and professionals. Though households rated three economic-related criteria among the top four, they also considered a non-housing criterion equally critical: reduced transportation cost was ranked third by household respondents.
This position is affirmed by studies arguing that housing affordability must address not only the monetary dimension of housing but also the wider range of costs that confront households, such as transportation cost (Acolin & Green, 2017). It is worth noting that safety (reduced incidence of crime) was rated low (15th) by industry professionals in Mulliner & Maliene [58], and stakeholders did not include safety at all among the top 20 criteria in the study of Gan et al. [34]. In this study, however, households perceive safety (reduced incidence of crime) as one of the key criteria relevant to sustainable housing affordability, ranking it 12th. This aligns with the concerns of Maina [49], who opined that insecure housing locations in Nigeria prevent households from occupying housing units. It is therefore safe to say that households hold distinct and unique views on criteria importance when compared with the opinions of industry professionals and stakeholders. This study recommends periodic assessment of household views on the CSPC, as such subjective perceptions are characterized by instability over time. Table 7 presents further comparisons of the top 20 critical criteria as adjudged by industry professionals, stakeholders, and households.

Household perceptions based on geopolitical region and income group

To compare group differences and answer the research questions and hypotheses shown in Table 8, the Kruskal-Wallis test was adopted.
Households' geopolitical region of residence in Nigeria

From the ranking statistics across the six (6) geopolitical regions in Nigeria, presented in Table 9, the significant criteria were: house price in relation to income (ES101), reduced energy bill (ES311), reduced transportation cost (ES312), rental cost in relation to income (ES102), ensure balanced housing market (ES310), financial viability (ES301), accessibility to working place (SS107), type of building (SS305), air quality (ENS103), and reduced footprint (ENS206). In the North East region, effective maintenance and management of properties (SS205) was ranked highest, followed by house price in relation to income (ES101), reduced energy bill (ES311), and tenure (SS211); the least important factor was air quality (ENS103). In the North West region, the major identified factors were number of fireplaces (SS313), energy efficiency (ENS203), accessibility to workplace (SS107), house price in relation to income (ES101), and ensure balanced housing market (ES310); the least critical factor was thermal comfort (ENS207). The most highly ranked criteria in the North Central region were ENS206, ES101, ES301, ES102, SS305, SS203, ENS207, and SS208. In the South East region, the top-ranked criteria were ES312, ES311, ENS208, ES310, ES101, and ES301; in the South West region, ES311, ES312, ENS103, ENS203, ES101, and ENS206; and in the South-South region, ES101, ES102, ES310, ES311, and ES312. Table 9 presents further comparison results by geopolitical region. To measure the variation in respondents' rankings across the geopolitical regions, the Kruskal-Wallis test for k independent samples was employed. The result indicates a significant difference in the respondents' rankings across the 6 geopolitical regions (Kruskal-Wallis H(5) = 21.433; p-value = 0.001) at p < 0.05.
This implies that the respondent's region of residence has a significant impact on the ranking of criteria importance for sustainable housing affordability in Nigeria. A multiple comparison test was therefore performed to ascertain which regions held differing opinions and which held similar ones. Using the formula k(k − 1)/2, where k is the number of groups, the number of comparisons necessary for the post hoc Mann-Whitney tests was determined as 6(6 − 1)/2 = 15. The Bonferroni multiple comparison results indicate significant differences in opinion rankings between South-South and North West, South-South and North Central, South-South and South East, and between North East and North Central.

Household income group

The variation in the average rankings of criteria importance based on the participant's income group is shown in Table 10. The general average order of criteria ranking is represented in the "overall ranking" column and was compared with the average rank obtained by each income group, using descriptive statistics (mean and standard deviation). To measure the variation in respondents' rankings by income group, the Kruskal-Wallis test for k independent samples was again employed. The result indicates no significant difference in the respondents' rankings of criteria importance based on income group (Kruskal-Wallis H(2) = 1.620; p-value = 0.445) at p > 0.05. This implies that respondents' opinions on criteria importance do not differ by income group in Nigeria. A joint assessment of criteria performance by income group and geopolitical zone indicates that, of the thirty (30) sustainable housing affordability criteria, twenty-one were considered relevant based on the respondents' opinion responses, using descriptive statistics (mean and standard deviation).
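The post hoc comparison count follows directly from k(k − 1)/2. A small sketch; the Bonferroni-adjusted per-test threshold shown is the standard adjustment, not a figure reported in the study:

```python
def pairwise_comparisons(k):
    """Number of post hoc pairwise (Mann-Whitney) tests for k groups."""
    return k * (k - 1) // 2

def bonferroni_alpha(alpha, k):
    """Standard Bonferroni adjustment: divide alpha by the number of tests."""
    return alpha / pairwise_comparisons(k)

print(pairwise_comparisons(6))              # 15 comparisons for the 6 regions
print(round(bonferroni_alpha(0.05, 6), 4))  # 0.0033 per-test threshold
```

Each of the 15 Mann-Whitney tests is then judged against the adjusted threshold rather than the nominal 0.05, keeping the family-wise error rate at 5%.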
These 21 criteria were thus subjected to factor analysis for final extraction and development of a general framework. Table 11 presents the importance of the 21 criteria and their respective comparison results using the Kruskal-Wallis H test; the results indicate significant variation in household rankings of the factors at the 5% level of significance. Table 12 shows the key criteria extracted using the principal component method of factor analysis. From the principal component result, there are five (5) key criteria for sustainable housing affordability in Nigeria: waste management (ENS201), which explains about 29.9% of the total variation in the system; safety/security (reduced incidence of crime) (SS203), which accounts for about 27.6%; energy efficiency (ENS203), about 21.2%; financial viability (ES301), about 12.9%; and ensure balanced housing market (ES310), which explains about 8.3% of the total variation. The general framework, developed from these criteria as extracted through factor analysis, is shown in Fig. 3. In this framework, the economic sustainability criteria are financial viability (ES301) and ensure balanced housing market (ES310); the environmental sustainability criteria are waste management (ENS201) and energy efficiency (ENS203); and safety/security (reduced incidence of crime) (SS203) is the only social sustainability criterion.

Discussion

The quantitative study performed in Section 4 analyzed 2526 questionnaires completed by households residing in 26 urban areas across the 6 geopolitical regions of Nigeria. This survey enabled the authors to establish the criticality of 81 potential SPC relevant to sustainable housing affordability.
Thirty CSPC were established from households' opinions. The results showed that households in Nigeria presently perceive economic criteria, such as "house price in relation to income" and "rental cost in relation to income," among the most significant criteria for sustainable housing affordability, ranking them 1st and 3rd respectively. This result is not surprising, owing to the fact that housing cost and its relation to income (the ratio income method) has typically been used to measure and define housing affordability, due to its ease of computation and appeal to people's common-sense experience, since it generally requires only information on housing cost and income. One interesting aspect of the results is that non-housing cost criteria such as "reduced transportation cost" and "reduced energy bill" ranked equal with "rental cost in relation to income," at 3rd position each. This implies that households are beginning to place very high importance on non-housing costs, in line with the arguments of researchers who have demonstrated that accounting for the relationship between housing cost, housing location, and transportation cost yields a truer measure of housing affordability [47]. It is worthy of note that the very high rank (5th) of the criterion "ensure balanced housing market" reflects Baranoff's [16] assertion that housing affordability is a growing crisis in urban areas with constrained housing markets. As urban populations grow exponentially, many urban areas experience continuous and rapid growth, with in-migration from rural to urban areas in quest of better living conditions driving up housing demand. As Nigeria's economic woes continue unabated, the demand for affordable housing grows ever greater; thus demand for housing always outstrips supply by a very wide margin in the study area.
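The ratio income method mentioned above reduces to a single division. A minimal illustration against the NNHUDP 20% threshold cited earlier; the rent figure is hypothetical:

```python
def housing_cost_ratio(monthly_housing_cost, monthly_income):
    """Ratio income method: share of monthly income spent on housing."""
    return monthly_housing_cost / monthly_income

NNHUDP_THRESHOLD = 0.20  # 2002 policy cap on housing spend for FHA units
income = 100_000         # naira; the survey notes ~64% of respondents earn below this
rent = 35_000            # hypothetical monthly housing cost

ratio = housing_cost_ratio(rent, income)
print(ratio, ratio > NNHUDP_THRESHOLD)  # 0.35 True -> affordability stress
```

The method's simplicity explains its popularity, but, as the discussion notes, it ignores location-dependent costs such as transportation and energy, which households here rank just as highly.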
This inevitably brings about a survival-of-the-fittest syndrome. This housing inequality, along with the other scales and types of disparity in the availability of housing services investigated by Awotona [13], is still very much present. One striking feature of Nigerian urban areas is a sharp disparity in housing standards: rich individuals acquire ostentatious luxury housing, while, according to Obiadi Bons et al. [64], the urban poor are left with little or no choice but to make do with shanty houses in less desirable areas such as marshy sites and neighborhoods adjacent to refuse dumps. Furthermore, social and more qualitative criteria such as "accessibility to working place" and "type of building" were ranked 7th and 8th, and the environmental criteria "air quality" and "reduced footprint" were equally ranked 9th; these are considered highly important by households and could significantly inform decision making. It therefore seems that households recognize, to some extent at least, the importance of quality- and ecosystem-related criteria, in line with several scholars [2,50]. Given that researchers of diverse orientations have employed different interpretations in housing affordability analysis [51], the questionnaire data were analyzed using non-parametric statistics. The purpose was to establish whether significant differences in perception exist among respondents: first, based on the household's location within the 6 geopolitical regions of Nigeria, and second, according to the income group to which the household belongs. Findings from the first analysis show that opinions on criteria importance differ considerably across the geopolitical regions of Nigeria, indicating that households' perceptions and views on criteria importance are inconsistent within the country.
Criteria were ranked against the current economic (affordability) situation and the safety concerns reported in several regions of Nigeria (e.g., housing prices are less affordable in the South West region, notably Lagos, than in other regions [10], and higher rates of hostilities are reported in the North East and North West regions than in other parts of the country [11]). Such reports help explain the irregularities in households' assessment and interpretation of sustainable housing affordability criteria across the regions of Nigeria. This conceptual irregularity is in line with the view of Gabriel et al. [33] that diverse groups (in this study, the different regions of Nigeria) struggle to impose their own concept and definition of housing affordability. Nevertheless, the criteria importance established in this study could be considered equally relevant for every geopolitical region of Nigeria if this criteria system is employed in future studies. The second analysis showed that households' opinions do not depend on household income group, but rather on the region in which the household resides. The research therefore fails to reject the null hypothesis H_O1, since its p-value is greater than 0.05; H_O1 states that household opinions on the criteria representing sustainable housing affordability do not significantly differ based on household income group. However, the research rejects the null hypothesis H_O2, which states that household opinions on these criteria do not significantly differ based on geopolitical region of residence, and accepts the corresponding alternative hypothesis, since its p-value of 0.001 is less than 0.05.
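The hypothesis decisions above follow the usual p-value rule; a minimal sketch using the two p-values reported in the text:

```python
def decide(p_value, alpha=0.05):
    """Reject the null hypothesis when the p-value falls below alpha."""
    return "reject H0" if p_value < alpha else "fail to reject H0"

print(decide(0.445))  # income group (H_O1): fail to reject H0
print(decide(0.001))  # geopolitical region (H_O2): reject H0
```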
Furthermore, from the descriptive statistical results, the households unanimously agreed that the broader dimensions and wide-ranging criteria relevant to sustainable housing affordability, as propagated by researchers, are presently not incorporated in housing delivery practices in the study area. This assertion aligns with studies that have reported a lack of consideration of socio-cultural criteria such as kinship and security [49]; poor solid waste management systems [68]; problems with the delivery and management of open/recreational space [66]; spatial variations in housing quality [54]; improper utilization of the natural resources available in the housing environment [67]; and low user participation in housing delivery processes in Nigeria [39], among many others.

Limitations and future research agenda

This survey has some weaknesses that can be suggested as themes for future studies. The study sample size is acknowledged to be relatively small, which may limit the representativeness of the survey results; generalization and interpretation can be improved by future research employing a larger sample of respondents. The available data are likewise inadequate to provide a thorough cross-country view, and further studies can increase the data coverage and substantiate the quality of this study's findings. Future studies with larger samples can adopt statistical analyses such as ANOVA to determine and compare statistical differences between the opinions of low- and medium-income families. Moreover, future studies can corroborate the CSPC established here using evidence-based case studies. Another limitation is that only the opinions of urban households were assessed; future studies should consider the opinions of rural households to ascertain urban-rural differences in criteria importance, since housing experiences in rural and urban settings are dissimilar.
In addition, the views of stakeholders, academics, and industry professionals were excluded from this report; it would be interesting if further studies analyzed households' views on the CSPC representing sustainable housing affordability alongside the opinions of academics, stakeholders, and industry professionals. Future research could also study household preferences and compare them with our results.

Research contribution to existing literature

Although this research has some weaknesses, it makes salient contributions that are worthy of note. First, it deepens the housing affordability literature theoretically by bringing a different type of perception to the conversation. The study argues that urban households have different, distinct, and unique views on the criteria representing sustainable housing affordability than professionals and stakeholders, since households bear the direct brunt of the housing affordability problem. It thus broadens the housing affordability concept by revealing the perceptions of urban low- and medium-income families. Secondly, the study contributes to the housing affordability literature practically, since it heeds the call of two recent scholarly studies [20,58] for an investigation into how low- and moderate-income families perceive the criteria system for sustainable housing affordability. Uncovering the diverse and wide-ranging criteria influencing the housing affordability of urban households plays a critical role in improving quality of life and the quality of housing layout and environment. The results can therefore help architects, housing authorities, and city planners design and construct more livable, sustainable, and affordable housing settings in accordance with the expectations and needs of urban households.
Additionally, the established criteria system offers policy makers, local authorities, and governments wide-ranging criteria to consider in making more informed and sustainable decisions about the affordability of housing. The presented system of criteria representing sustainable housing affordability will assist in formulating techniques for assessing affordable housing locations in a sustainable manner, and the criteria rankings can be employed in assigning degrees of importance in affordable housing policies and programs. It is hoped that this research will inspire future studies toward establishing a broader housing affordability concept that is better aligned with sustainability.

Conclusion, implications and recommendation

Many criteria influence housing affordability, and recent studies have emphasized the need to reconsider the way housing affordability is assessed and conceptualized. From this study, it could be said that housing affordability is also a product of subjective judgment, arising from the overall perception households hold of what they view as the important features of an acceptable housing setting at a given time; it is, to some extent, a value judgment. Housing affordability concerns therefore also arise from people's overall experience, and accounts, of the difficulties suffered in their quest to secure decent and affordable housing. This study thus provides an alternative lens through which to view housing affordability: the perspective of urban low- and medium-income households. It discusses the concept and the broader criteria apposite to sustainable housing affordability, transcending the purely economic terms widely adopted in assessing housing affordability. Through a systematic literature review and a pilot survey, the study identified a comprehensive list of criteria through which housing affordability could be assessed more holistically within the ambit of sustainability.
A case study of the 26 urban areas of Nigeria is then used to exemplify how households conceptualize and assess their housing affordability situation in a specific regional and national context. The results show that the criterion “House price in relation to income” is the most important, which is consistent with similar studies in this domain; however, households also placed high priority on non-monetary criteria such as security (safety), location and building type, as well as on non-housing-related costs such as transportation costs and energy bills. Ranking the criteria contributing to sustainable housing affordability remains a daunting task, as household views are distinct and unique, particularly in a multi-ethnic country like Nigeria. This study therefore recommends that household perceptions be considered in every affordable housing program, because neglecting household views will derail affordable housing goals. For instance, studies have demonstrated that neglecting housing quality perceptions [ 50 , 78 ] and the socio-cultural concerns of households often results in housing facility abandonment. The study accordingly recommends that a pilot study be carried out to assess the views, expectations and needs of the intended households prior to the construction of affordable housing projects. Periodic assessment of these needs is essential, as household needs change over time. Regular assessment would help ensure that households’ expected affordability concerns are met. This research will guide stakeholders and industry professionals, particularly contractors and architects, about the criteria that are especially relevant to sustainable housing affordability. The theoretical purpose of this study is to ensure that households are contented with the houses and that a “reasonable” profit margin is made by developers.
The study's policy implications are that the views and perceptions of households should be routinely assessed and should drive the delivery of affordable housing. Implicit in the findings is that respondents approached these criteria with a view to lightening their affordability burden. Industry professionals must therefore ensure that households do not spend excessively to commute to workplaces, health facilities, markets and parks on account of where the house is located. In addition, the house must discourage, and not contribute to, crime and vandalism. To provide broader findings, further research on this subject might include other household compositions, such as nonfamily households. A continental survey on this subject (e.g., across Africa) is appealing but may be superfluous; it may be more realistic to compare the study findings with those from other populous African nations such as Ethiopia and Egypt. Moreover, housing prices in Luanda, Angola are higher than in the urban areas of other African nations; hence, applications of the study findings to other developing countries should be interpreted carefully. Funding No funding was received for this research. Declaration of Competing Interest The authors affirm that there is no conflict of interest. Acknowledgment The authors are grateful to the various households who magnanimously participated in the survey. Their generous assistance aided the authors' understanding of the criteria system representing sustainable housing affordability and of other salient aspects of housing affordability concerns in Nigeria. | [
"ABELSON",
"ABOLORE",
"ACOLIN",
"ADABRE",
"ADABRE",
"ADABRE",
"ADEGUN",
"AHADZIE",
"ALAGHBARI",
"ALIU",
"APUKE",
"ATOLAGBE",
"AWOTONA",
"AZEVEDO",
"BABALOLA",
"BARANOFF",
"BARDHAN",
"BRATT",
"CAI",
"CHAN",
"CHAROENKIT",
"CHIU",
"CHOON",
"CHAN",
"DAVE",
"DEMPSEY",
... |
33083ff501814079a3d591ac654275cf_A simple mathematical model to determine the ideal empirical antibiotic therapy for bacteremic patie_10.1016_j.bjid.2013.11.006.xml | A simple mathematical model to determine the ideal empirical antibiotic therapy for bacteremic patients | [
"Tuon, Felipe F.",
"Rocha, Jaime L.",
"Leite, Talita M.",
"Dias, Camila"
] | Background
Local epidemiological data are always helpful when choosing the best antibiotic regimen, but the choice is more complex than it seems, as it may require the analysis of multiple combinations. The aim of this study was to demonstrate a simplified mathematical calculation to determine the most appropriate antibiotic combination in a scenario where monotherapy is doomed to failure.
Methods
The susceptibility pattern of 11 antibiotics from 216 positive blood cultures from January 2012 to January 2013 was analyzed based on local policy. The length of hospitalization before bacteremia and the unit (ward or intensive care unit) were the analyzed variables. Bacteremia was classified as early, intermediate or late. The antibiotics were combined according to the combination model presented herein.
Results
Combining the antibiotics 2 by 2 yielded 55 possible associations, 3 by 3 yielded 165, and 4 by 4 yielded 330. In the intensive care unit, monotherapy never reached 80% susceptibility. In the ward, only carbapenems covered more than 90% of early bacteremias. Only combinations of three drugs reached a susceptibility rate higher than 90% throughout the hospital. Several regimens combining four drugs reached 100% susceptibility.
Conclusions
Association of three drugs is necessary for adequate coverage of empirical treatment of bacteremia in both the intensive care unit and the ward. | Introduction Antibiotic therapy is essential for the proper treatment of infections. For community-acquired infections, protocols or consensus guidelines are extremely useful, since the susceptibility profile of bacteria may be quite similar in many areas of the world. Even when there are differences in antimicrobial susceptibility, administration of a broad-spectrum antibiotic is sufficient to overcome this problem. Across hospitals, however, the ideal choice of antibiotics is hampered by extensive variability in bacteria and susceptibility profiles. Thus, the establishment of a consensus by medical societies, even for regional entities, is rather challenging. Therefore, each hospital must assess its microbiological profile and propose treatment recommendations and protocols based on local data. 1 Knowledge of the local susceptibility profile can be used for choosing monotherapy. However, selecting the best antibiotic combination is far more complex, since it involves dynamic calculation of the combinations. Additionally, three to four drugs may be required to achieve adequate coverage in high-resistance scenarios, which generates hundreds of possible combinations. The aim of this study was to demonstrate a simplified mathematical calculation to determine the most appropriate antibiotic association, considering dynamic but easily accessible epidemiological variables. Materials and methods A cross-sectional study was carried out at the Hospital Universitário Evangélico de Curitiba, a 660-bed tertiary-care university hospital in Curitiba, Brazil. This hospital is a referral center for renal transplantation, trauma and burns. All patients aged ≥18 years with a positive blood culture collected from January 2012 to January 2013 were included in the study. Only the first episode per patient was analyzed.
Blood cultures yielding coagulase-negative Staphylococcus were considered contaminated and thus excluded. Data were collected from hospital computer system databases. Blood cultures were collected according to the standard protocol used in the hospital and were processed using the BACT/Alert ® system (BioMerieux, Durham, USA). Bacteria were identified by Vitek 2 (Biomérieux, Marcy-L’Étoile, France). Susceptibility testing was performed using the disk diffusion method according to the CLSI guidelines. Molecular confirmation of extended and pan-resistant strains was routinely done. 2 The susceptibility pattern of 11 antibiotics was analyzed. The choice was based on local policy: aminoglycosides (amikacin and gentamycin), semi-synthetic penicillin with a beta-lactamase inhibitor (ampicillin/sulbactam), third- and fourth-generation cephalosporins (ceftriaxone, ceftazidime and cefepime), fluoroquinolones (ciprofloxacin and levofloxacin), ureidopenicillin with a beta-lactamase inhibitor (piperacillin/tazobactam), glycopeptides (vancomycin and teicoplanin), polymyxin, carbapenems (meropenem and imipenem) and tetracyclines (tigecycline). For statistical analysis, aminoglycosides and carbapenems were considered resistant if any of the tested antibiotics in the same class was identified as resistant (e.g. if amikacin was susceptible and gentamycin resistant, the aminoglycoside group was considered resistant). Ceftazidime and ceftriaxone were evaluated separately. Tigecycline was included in the analysis because the incidence of KPC-producing Klebsiella pneumoniae was high in this hospital. 3 The length of hospitalization before bacteremia and the unit (ward or intensive care unit – ICU) were the variables analyzed. Bacteremia was classified as early (<6 days), intermediate (6–14 days) and late (>14 days). This classification was created for clinical purposes in the hospital.
For assessing combinations, the antibiotics were combined 2 by 2, 3 by 3, and 4 by 4 according to the combination formula C(n, s) = n!/[s!(n − s)!], where n is the number of antibiotics and s is the number of antibiotics combined. A large table was assembled to evaluate the percentage of antibiotic susceptibility for each bacterium to each antibiotic group and to each combination of antibiotics (2 by 2, 3 by 3, and 4 by 4) (supplement table, http://infectopedia.com/dados-estatisticos/category/6-dados-estatisticos ). When combinations were analyzed, a bacterium was considered susceptible to a combination if at least one of its antibiotics was active. The percentages in the table are the susceptibility rates against all bacteria during the period (100% for total susceptibility and 0% for no susceptibility). Statistical analysis Continuous data are expressed as mean ± standard deviation (SD) or median with range. Frequencies are expressed as percentages. Dichotomous variables were compared using the χ 2 test, and the Mann–Whitney test was used for continuous variables. The significance level was set at 0.05. All data were recorded using Excel (Microsoft, New York, USA) and the statistical analysis was performed using SPSS 16 (SPSS, Chicago, USA). GraphPad Prism 5.0 (GraphPad, La Jolla, USA) was used for graphics. Results A total of 216 bacteremias were evaluated. Staphylococcus aureus was the most common bacterium found in the study (31.5%), followed by Klebsiella spp. (13.4%). The species distribution is detailed in Table 1 . Early bacteremia (44.0%) was more frequent than late (34.7%) and intermediate (21.3%) ( p < 0.05). Bacteremia was more frequent in the ward (67.6%) than in the intensive care unit (32.4%) ( p < 0.05). Using the combination formula, 55 associations were found combining 2 by 2, 165 combining 3 by 3, and 330 combining 4 by 4. A total of 561 treatment options were available.
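The combination counts and the “at least one active drug” coverage rule can be sketched in a few lines of Python. The susceptibility matrix below is illustrative only, not the study's data, and the drug-class names are placeholders for the classes analyzed:

```python
from itertools import combinations
from math import comb

# Number of s-drug combinations from n = 11 antibiotic classes:
# C(11, 2), C(11, 3), C(11, 4), plus the 11 monotherapies = 561 options.
print(comb(11, 2), comb(11, 3), comb(11, 4))         # 55 165 330
print(comb(11, 2) + comb(11, 3) + comb(11, 4) + 11)  # 561

# Illustrative susceptibility data (NOT the study's): each dict is one
# isolate; True means the isolate was susceptible to that drug class.
isolates = [
    {"carbapenem": True,  "glycopeptide": False, "polymyxin": False, "fluoroquinolone": False},
    {"carbapenem": False, "glycopeptide": True,  "polymyxin": False, "fluoroquinolone": False},
    {"carbapenem": False, "glycopeptide": False, "polymyxin": True,  "fluoroquinolone": False},
]

def coverage(combo, isolates):
    """Percent of isolates susceptible to at least one drug in the combination."""
    covered = sum(any(iso[drug] for drug in combo) for iso in isolates)
    return 100.0 * covered / len(isolates)

drugs = sorted(isolates[0])
for s in (2, 3):
    best = max(combinations(drugs, s), key=lambda c: coverage(c, isolates))
    print(s, best, round(coverage(best, isolates), 1))
```

With a real susceptibility table, ranking every combination this way reproduces the paper's enumeration: the best regimen of each size is simply the combination with the highest coverage.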
The susceptibility to antimicrobials with respect to the unit of admission and length of hospitalization is detailed in Table 2 . In the ICU, no antibiotic reached 80% susceptibility, with the exception of tigecycline (87%). Because this drug is not indicated for severely ill patients, it was excluded from the analysis. Based on these data, one antibiotic is not enough as empirical treatment for bacteremia in the ICU setting. In the ward, early bacteremia showed a better susceptibility pattern, with four antibiotics (cefepime, piperacillin/tazobactam, fluoroquinolone, and tigecycline) covering 80–90%, and one (carbapenems) covering 92%. However, the susceptibility rate was lower than 60% for all antibiotics (excluding tigecycline) in cases of intermediate and late bacteremia. Detailed susceptibility rates for combined therapy are available as supplementary material or at the site INFECTOPEDIA ( http://infectopedia.com/dados-estatisticos/category/6-dados-estatisticos ). The table includes redundant antibiotic combinations, such as cefepime with ceftriaxone; these were included only for statistical analysis and must not be considered clinical options for therapy. In the ward, monotherapy for intermediate and late bacteremia had low coverage (up to 59% and up to 56%, respectively). Combinations of two antibiotics reached a maximum of 85% susceptibility with carbapenem plus glycopeptide for intermediate bacteremia, and 83% for late bacteremia with carbapenem plus polymyxin. Only combinations of three drugs reached coverage higher than 90%, using polymyxin plus glycopeptide associated with one of three options (cefepime, fluoroquinolone or aminoglycoside). For intermediate bacteremia in the ICU, two combined antibiotics reached susceptibility rates of 89% for glycopeptide plus polymyxin, 89% for carbapenem plus polymyxin, and 95% for polymyxin plus fluoroquinolone.
Tigecycline was not included in this analysis as it was not indicated for use in bacteremic patients. For late bacteremia, the same regimen of carbapenem plus polymyxin reached 91%, which increased to 97% with the addition of a glycopeptide. Several regimens combining four drugs reached 100% susceptibility. Discussion There are several publications showing the importance of adequate antibiotic therapy for the treatment of severe infections. Inadequate antibiotic therapy can increase mortality to a varying extent according to the population studied, severity of infection, underlying clinical conditions, and other epidemiological variables. 4–7 However, other studies have not confirmed these findings, mainly in cases of bacteremia in the ICU setting with comorbidities and advanced age. 8 3,9,10 None of these studies, however, defined the ideal rate of antibiotic coverage, regardless of the patient's clinical condition. The question is: what should be the acceptable margin of error in the prescription of antibiotics for bacteremic patients? The ideal is 100% adequacy, but this figure can only be reached with at least three drugs, a suggestion not found in guidelines, including the IDSA guidelines for developing an institutional program to enhance antimicrobial stewardship. 11 One could discuss this issue extensively. Several factors influence the physician's decision when prescribing antibiotic therapy, such as disease severity, patient age, comorbidities, organ dysfunction, previous colonization/infection, and probable site of infection. In some cases, it might be possible to delay the choice of antibiotics until the final culture results. However, in patients with severe sepsis, the decision must be taken within the first hours. Laboratory tests that identify bacteria and their antibiotic susceptibility pattern within this window of time are not yet a reality.
12 Nonetheless, when properly collected, culture results will allow the clinician to de-escalate from broad-spectrum to specific therapy. De-escalation may be performed as soon as the Gram stain result of a positive blood culture is obtained, and again after final identification. 13 Other questions about antibiotic prescription are: how to determine whether the patient is really infected; whether to consider candidemia or other fungi; how to de-escalate when blood cultures are negative; and whether additional epidemiological variables may improve empirical decisions. This simulation must be validated considering the following facts: (1) most cases of “fever” and laboratory alterations (mainly leukocytosis with immature cells) are not infection; (2) when the infection is not severe, one can wait for culture results; (3) the need for and effectiveness of antibiotic therapy in terminal patients; (4) the side effects of extremely toxic associations (e.g. vancomycin, polymyxin, aminoglycoside); and (5) resistance induction and microbiota modification with further superinfection or Clostridium difficile colitis. Although combined therapy may be used for a short period, constant vigilance and follow-up by an infectious diseases specialist is mandatory. Falagas et al. published an interesting article in which they suppose that 50% of fevers are not infection. 14 The current results reflect the epidemiological panorama of a university hospital in a developing country and cannot be applied directly to other health services and hospitals. The combination of three or four drugs can be used as a policy of antibiotic restriction. Carbapenems may be avoided with a regimen of ciprofloxacin + piperacillin/tazobactam, an option for hospitals that practice antibiotic cycling. Some bias may have occurred in this study, since patients with bacteremia in the ward may have previously been admitted to the ICU, falsely increasing the resistance rate in the ward.
The current study does not intend to convince clinicians to use bizarre and threatening antibiotic combinations, but rather to encourage revision of current local guidelines and use of the combination formula to construct an ideal regimen for that context. The same approach can be used for other infections, including urinary tract and respiratory infections, although cultures obtained from these sites should be considered. After this study, our hospital changed its antibiotic protocol for treating bacteremia, and further studies will be conducted in different sites. An internal validation must be performed, followed by an external validation if possible. Antibiotic regimens with two, three or even four antibiotics seem so remote from what is currently considered good antibiotic prescribing practice that they may be difficult to apply in clinical practice. Conflicts of interest Felipe F. Tuon received grants from Bayer, Astellas, Merck, Pfizer, Novartis, AztraZeneca and United Medical (Gilead). Jaime L. Rocha received grants from Bayer, Merck, Pfizer, Novartis, Sanofi and AztraZeneca. | [
"TUMBARELLO",
"TUON",
"PAUL",
"CHEN",
"PISKIN",
"RETAMAR",
"TUON",
"TUON",
"TUON",
"DELLIT",
"DELLINGER",
"MOTOSHIMA",
"FALAGAS"
] |
044f8b7ddda14b30bfe70c35ca255f98_Behavioral tests in rodent models of stroke_10.1016_j.hest.2020.09.001.xml | Behavioral tests in rodent models of stroke | [
"Ruan, Jingsong",
"Yao, Yao"
] | Rodents are the most widely used experimental animals in stroke research due to their similar vascular anatomy, high reproductive rates, and availability of transgenic models. However, the difficulty of assessing higher brain functions, such as cognition and memory, in rodents decreases the translational potential of these studies. In this article, we review commonly used motor/sensorimotor and cognition tests in rodent models of stroke. Specifically, we first briefly introduce the objective and procedure of each behavioral test. Next, we summarize the application of each test in both ischemic and hemorrhagic stroke. Last, the advantages and disadvantages of these tests in assessing stroke outcome are discussed. This review summarizes commonly used behavioral tests in stroke studies and compares their applications in different stroke types. | 1 Introduction Stroke is the second most common cause of death and a leading cause of disability worldwide. It is mainly categorized into ischemic stroke and hemorrhagic stroke, which are caused by occlusion and rupture of cerebral blood vessels, respectively. Hemorrhagic stroke is further categorized into intracerebral hemorrhage (ICH) and subarachnoid hemorrhage (SAH) depending on the location of the hemorrhage. Among all stroke cases, ischemic stroke accounts for ~87%, while hemorrhagic stroke makes up ~10%. 1 Although less common than ischemic stroke, hemorrhagic stroke has a higher mortality rate (up to 67.9%) and a worse prognosis. 2 When stroke occurs, ischemia or hemorrhage in the brain impairs CNS function and causes a series of neurological deficits, predominantly affecting sensorimotor and coordination functions. 3 In addition, studies have shown that stroke also leads to long-term cognitive impairment. 3–6 To enable stroke research and drug development, animal models of stroke have been established in several species, including rodents, pigs, and non-human primates.
Among them, rodents are the most widely used due to their high reproductive rates, low maintenance cost, availability of transgenic models, and vascular anatomy similar to that of humans. 4,7–10 Specifically, middle cerebral artery occlusion (MCAO) is commonly used to induce ischemic stroke, 11 while intracerebral injection of collagenase or autologous blood is frequently used to induce ICH in rodents. 8 SAH can be induced in rodents by advancing a suture through the external carotid artery to penetrate the vessel near the intracranial bifurcation. 12,13 Accumulating evidence suggests that rodents exhibit multiple sensorimotor deficits and neurological impairments similar to those of human patients after stroke. 14,15 There is, however, one major challenge in rodent stroke models: due to anatomical and functional differences between humans and rodents, it is difficult to evaluate higher brain functions, such as cognition and memory. Accurate assessment of neurological function and stroke outcome in rodents, which has translational applications, depends on the use of appropriate behavioral tests. 16–18 Here we review commonly used motor/sensorimotor ( Table 1 ) and cognition ( Table 2 ) tests in rodent models of stroke. Specifically, we first describe the objective and procedure of each behavioral test. Next, the application of each test in different stroke types and the major findings are discussed. Last, the advantages and disadvantages of each test are summarized. This review aims to summarize commonly used behavioral tests in rodent stroke studies and compare their applications in different stroke models. 2 Motor and sensorimotor tests 2.1 Neurological score Neurological scores were originally developed to evaluate stroke outcome in human patients. They assess multiple aspects of neurological function, including consciousness, vision, motor function, sensory function, verbal response, and brainstem reflexes.
Various scoring systems, including the Bederson score, the modified neurological severity score (mNSS), and the Garcia scale, have been developed to evaluate stroke outcome in rodents. 19 2.1.1 Bederson score The Bederson score, which can be applied to both rats and mice, evaluates global neurological function. 20 It grades rodents on a scale of 0–3. Grade 0 (no deficit) is assigned if animals extend both forelimbs toward the floor when suspended above it. Grade 1 (mild deficit) is given when animals show forelimb flexion without any other abnormality. Grade 2 (moderate deficit) is given when animals display forelimb flexion and decreased resistance to a lateral push toward the paretic side. Animals that additionally show circling behavior are scored Grade 3 (severe deficit). To reflect more severe neurological deficits, the Bederson score system has been modified to incorporate Grades 4 and 5: animals with longitudinal spinning after stroke are scored Grade 4, and those without movement are scored Grade 5. 8 21 The Bederson score has been widely used in ischemic stroke studies. It is usually performed within 24 hours after surgery to evaluate acute stroke outcome. For example, rats showed circling behavior and were scored Grade 3 at 24 hours after MCAO. A similar result was observed in mice after ischemic stroke. 8 Unlike in ischemic stroke, the Bederson score is less frequently used in ICH, because this simple assessment of locomotor function is not sensitive enough for ICH; more sophisticated and sensitive tests are needed for outcome assessment in ICH. In addition, since the Bederson score mainly evaluates body asymmetry (e.g. circling behavior and unilateral limb deficits), it is not useful in SAH, which causes global brain injury. 6 The Bederson score does not require special equipment and is easy to perform. However, it has several limitations. First, it cannot accurately assess brain damage in certain regions.
For example, rats with dorsolateral frontal cortex ablation failed to show circling behavior. Next, the Bederson score is unable to evaluate long-term outcome because several deficits recover quickly after stroke. Third, the Bederson score cannot be used in models with global brain injury, such as global ischemia and SAH. Furthermore, the translational potential of the Bederson score is limited, mainly due to anatomical differences between human and rodent brains and the complexity of stroke outcomes in humans. 22 2.1.2 mNSS The mNSS is another widely used neurological test in rodents after stroke. It combines neurological evaluations of multiple aspects, including motor function, sensory function, 23–25 and reflex function, 26 with a total score of 14. A score of 1–4 indicates mild deficits, a score of 5–9 indicates moderate deficits, and a score of 10–14 indicates severe deficits. 27,28 The mNSS has been used to assess long-term outcome in various stroke models. In a focal cerebral ischemia model, stroke rats demonstrated significantly higher mNSS scores at days 1 and 28 after MCAO, indicating neurological impairment. Interestingly, bone marrow stromal cell transplantation largely lowered the mNSS score at day 28 after stroke, 29 suggesting improved neurological function. Similarly, rats displayed a high mNSS score at day 1 after ICH, 29 indicating severe neurological impairment. Although the neurological deficits recovered gradually, they remained detectable at days 24 and 42 after ICH. 30 In an SAH model, rats scored up to 16 points at day 1 after injury and 7–8 points at day 14 after injury. 30 Mesenchymal stem cell transplantation significantly lowered the mNSS to 3 points at day 14 after injury, 31 indicating improved recovery. These findings suggest that the mNSS is a useful assessment of long-term stroke outcome and can be used to assess the neuroprotective effects of novel treatments.
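The severity bands above can be expressed as a small helper function, as a minimal sketch. It follows the 0–14 scale described in the text (mNSS variants with other maxima exist), and the label for a score of 0 is our assumption, since the text does not assign it a band:

```python
def mnss_severity(score):
    """Map a total mNSS score (0-14 scale described above) to a severity band."""
    if not 0 <= score <= 14:
        raise ValueError("score outside the 0-14 scale used here")
    if score == 0:
        return "no deficit"  # assumption: the text leaves 0 unlabeled
    if score <= 4:
        return "mild"
    if score <= 9:
        return "moderate"
    return "severe"

print(mnss_severity(7))   # moderate
print(mnss_severity(12))  # severe
```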
31 One advantage of the mNSS is that it allows a comprehensive evaluation of multiple neurological functions, which makes it useful for long-term stroke outcome assessment. The multiple aspects, however, increase the complexity of the test, making it less favorable. In addition, the combined score may mask some neurological deficits unique to certain stroke models. For example, after hemorrhagic stroke some mice may score high on hindlimb motor function, while others may score high on forelimb motor function; because of body asymmetry and locomotor dysfunction, this difference may be masked by a similar overall score. Thus, individual scores on each task should also be considered, especially in studies that investigate region-specific deficits. 2.1.3 Garcia scale To evaluate the functional outcome of MCAO, Garcia developed a neurological scale that assesses motor and sensory functions in rats. Specifically, animals are scored on their spontaneous activity (0–3), symmetry of limb movement (0–3), forelimb outstretching (0–3), climbing and grip strength (0–3), symmetry of body proprioception (0–3), and sensory function of the vibrissae (0–3). The Garcia scale is the sum of the scores on each task; a lower score indicates a more severe stroke outcome. 32 The Garcia scale is mainly used in ischemic stroke. In a transient MCAO model, rats scored lower at day 7 after injury compared to sham controls. A similar result was found in a permanent MCAO model. 32 In another study, mice displayed a low score at 22 hours after MCAO, which was elevated by an Nrf2 activator. 32 These findings suggest that the Garcia scale is a sensitive assessment of motor and sensory functions after ischemic stroke. In addition, the Garcia scale has also been applied in other stroke models. In an ICH model, stroke mice exhibited a significantly lower score at 24 hours after injury compared to sham controls.
33 In an SAH model, stroke mice (11.2–12.4) scored much lower than sham controls (16.7–16.9) in the first 3 days after injury. 34 These results suggest that the Garcia scale can be used to assess neurological function in the acute phase after hemorrhagic stroke. 35 The Garcia scale is easy to perform and assesses both sensory and motor functions. However, it should be noted that the Garcia scale focuses more on forelimb than hindlimb function. If animals have severe hindlimb deficits but only mild forelimb impairment, the higher scores on the forelimb assessments will mask the overall outcome. Thus, the Garcia scale should be combined with other functional assessments to obtain a more comprehensive evaluation of stroke outcome. In addition, since the lesion site is more controllable in ischemic stroke and ICH has a less predictable functional outcome, the Garcia scale is relatively more efficient in ischemic stroke. 2.2 Open field test The open field test assesses locomotor ability and exploration behavior. It is widely used and has become a “standardized test” in rodent studies. The open field test was first developed in rats to study open field behavior, and was then modified and applied to mice. 36 Basically, a wooden or plastic open field maze with a size of 50 cm (length) × 50 cm (width) × 38 cm (height) is used in this test. A mouse is placed in the open field and allowed to explore for 10 minutes. Its exploration behavior is recorded and analyzed from the video. Locomotor ability, the principal objective of this test, is revealed by the distance the mouse travels: a longer distance indicates higher locomotor ability. By using an automated analysis system, such as video tracking software, the route of the mouse can also be visualized and analyzed. Total travel distance and travel route are the most common parameters measured by the automated analysis system.
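As a sketch of such automated analysis, total travel distance and center-zone time can be computed from tracked (x, y) coordinates. Coordinates in cm within the 50 × 50 cm arena are assumed, and the center-zone definition (the middle square covering half of each side) is one common convention rather than a fixed standard:

```python
import math

def open_field_metrics(track, arena=50.0, center_frac=0.5):
    """Total travel distance and fraction of samples spent in the center zone.

    `track` is a list of (x, y) positions in cm inside a square arena of
    side `arena`; the center zone is the middle square whose side is
    `center_frac` of the arena side.
    """
    dist = sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(track, track[1:]))
    lo = arena * (1 - center_frac) / 2  # 12.5 cm with the defaults
    hi = arena - lo                     # 37.5 cm with the defaults
    in_center = sum(lo <= x <= hi and lo <= y <= hi for x, y in track)
    return dist, in_center / len(track)

# Illustrative track: wall-hugging, with a single pass through the center.
track = [(1, 1), (1, 49), (25, 25), (49, 49)]
dist, center_time = open_field_metrics(track)
print(round(dist, 1), center_time)  # 1 of 4 samples is in the center -> 0.25
```

Immobile time could be derived from the same trajectory by thresholding frame-to-frame displacement; rear-up counts, in contrast, require separate scoring from the video.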
37 Additionally, immobile time and rear-up behavior can also be collected and analyzed to enable a comprehensive evaluation of locomotor ability. 38 The automated analysis system eliminates human error and makes the results more objective. 38 In addition to locomotor ability, the open field test is also widely used to evaluate emotional behavior. On one hand, defecation and urination in the open field have been used as signs of emotionality. However, defecation and urination are highly associated with the amount of food and water taken by the animals, which differs significantly among individuals. Additionally, strong emotions, such as anxiety and fear, might decrease food intake, leading to reduced defecation. These factors make it relatively hard to standardize such signs of emotional behavior. On the other hand, the time animals spend in specific regions of the open field has been used to analyze anxiety level. Because rodents tend to stay in corners or along the perimeter in a novel environment, the time they spend in the center of the open field reflects their anxiety level: the less time animals spend in the center, the higher their anxiety level. 36 39 The open field test has been used to assess locomotor ability and anxiety level in ischemic stroke. Conflicting results, however, have been observed. Many studies found no significant difference in travel distance in rats at days 3, 19, and 28 after MCAO. In contrast, one report showed decreased travel distance at days 19, 28, and 42 after stroke. 40–42 This discrepancy may be caused by different rat lines. Although travel distance varies between studies, reduced rear-up behavior was commonly observed in rats from day 3 up to 6 weeks after MCAO, 43 indicating impaired locomotor ability. In addition, hyperactive behavior was observed in rats 2–3 weeks after MCAO.
40,41 The hyperactivity may possibly be caused by hippocampal damage and spatial memory deficits, which slow down the habituation process and prolong the novel stimulation. 44 Furthermore, the open field test has also been used to determine anxiety level after ischemic stroke. It was reported that mice spent significantly less time in the center 48 hours after MCAO. 45 Similarly, rats spent longer time in the corner after transient forebrain global ischemia, 46 indicating an acute increase in anxiety level. It should be noted that multiple factors, including habituation to the environment and the size of the center area, may affect anxiety-like behavior. Therefore, other anxiety tests, such as the elevated-plus maze, should be used to more objectively evaluate anxiety level. 47 The open field test has also been used in hemorrhagic stroke. In an ICH model, rats showed significantly reduced travel distance at day 1 after injury compared to sham controls, indicating decreased motor function. The motor impairment gradually recovered and was undetectable by day 14 after ICH. 48 Consistent with this finding, ICH rats showed no difference in the open field test 2 and 8 weeks after injury compared to sham controls. 48 Unlike locomotor ability, anxiety-like behavior after ICH is rarely studied with the open field test. In an SAH model, rats exhibited reduced travel distance at day 21 after injury and spent less time in the center, 48,49 indicating chronic motor impairment and increased anxiety. Similar results were observed in SAH mice at days 13 and 27 after injury. 50 These results suggest that the open field test can be used to evaluate locomotor ability and anxiety after hemorrhagic stroke. 51 The open field test is an easy test and does not require sophisticated equipment. However, it should be noted that the open field test has various protocols, and many factors may affect the result.
For example, while 10 minutes is commonly used as the exploration duration, one may apply a shorter time, such as 2 minutes. In this case, the test will focus more on the exploration behavior in response to a novel environment rather than on locomotor ability. Moreover, when testing disease models that cause locomotion deficits, a dark environment might be needed to promote exploration, because light may further decrease the locomotor activity of rodents. Recording in a dark environment, however, requires a higher standard of video equipment, which increases the difficulty of the test. 2.3 Pole test The pole test assesses overall locomotor function in rodents. It was initially developed in mice to study bradykinesia and later adapted to stroke studies. Although the pole test has been used in rats for other studies, 52,53 it is seldom used in rat stroke models, possibly because rats are heavier than mice, which makes them unable to perform the task after stroke. In this test, a mouse is placed on the tip of an 8 mm (diameter) × 50 cm (length) wooden pole with its head upward (Fig. 1A). 54,55 The mouse then tries to descend to the floor by turning its head downward (Fig. 1A). The latency to make the turn (Tturn) and the time to descend (TD) are recorded. For mice that descend without turning their heads downward, TD is used to represent Tturn. If mice make the turn but fall halfway while descending, the total time until they reach the floor is recorded. If mice fall immediately, the maximum duration of 120 s is assigned to both Tturn and TD. It should be noted that mice cannot pause during descending; if this happens, the trial should be excluded and repeated. For better performance, mice need to be trained before stroke induction. The pole test is a useful assessment of locomotor function in ischemic stroke.
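The recording rules for the two pole test measurements (turn latency and descent time) can be encoded directly; a minimal sketch, with illustrative argument names, of the protocol in the text (120 s cap on an immediate fall; descent time substituted for turn latency when the mouse never turns):

```python
MAX_DURATION = 120.0  # seconds assigned when a mouse falls immediately

def pole_test_times(turned, fell_immediately, t_turn=None, t_descend=None):
    """Apply the pole test recording rules and return (Tturn, TD).

    `turned`: did the mouse turn its head downward before descending?
    `fell_immediately`: did the mouse fall right away (both times capped)?
    """
    if fell_immediately:
        # Immediate fall: assign the maximum duration to both measures.
        return MAX_DURATION, MAX_DURATION
    if not turned:
        # Descended without turning: the descent time stands in for Tturn.
        return t_descend, t_descend
    return t_turn, t_descend
```

Trials in which the mouse pauses during descent are excluded and repeated, so no value is recorded for them here.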
It was reported that ischemic mice showed increased Tturn and TD compared to sham controls up to 11 days after injury, suggesting locomotor impairment. 4,56 By day 17 after stroke, however, similar Tturn and TD were observed in sham and ischemic mice, indicating recovery. In addition, fluoxetine was able to improve animal performance in the pole test at days 12 and 20 after stroke, 4,56 suggesting that the pole test can be used to assess neuroprotective effects of novel treatments. 57 In addition, the pole test has also been applied in hemorrhagic stroke. In an autologous blood-induced ICH model, mice spent more time descending compared to sham controls at day 2 after stroke, and dexmedetomidine treatment significantly improved their performance. In contrast, another study showed that mice failed to exhibit locomotor deficits in the pole test at days 2–7 after collagenase-induced ICH. 58 These controversial results might be caused by different ICH protocols. In SAH models, however, the pole test is seldom performed. 59 The pole test is relatively easy to perform and requires minimal equipment. It is an objective and sensitive test for motor function. Additionally, the pole test is able to assess long-term motor function in ischemic stroke. 2.4 Foot-fault test The foot-fault test, also known as the grid walking test, assesses motor function and limb coordination in rodents. Basically, animals are placed on an elevated grid with square openings (1.69 cm² for mice and 6.25 cm² for rats) and allowed to move across the grid (Fig. 1B). 60–62 They move around the grid by placing their feet on the wire frame. If a paw falls from or slips off the frame, one foot-fault is recorded. The total number of steps taken to cross the grid and the foot-faults for each limb are quantified. Sham controls make few foot-faults with no bias toward either side, while mice with stroke display increased foot-faults on the contralateral side.
63 To reduce individual variation, a pre-stroke baseline test should be performed to normalize the post-stroke result. 62 The foot-fault test has been used to study limb coordination in ischemic stroke. Short-term studies revealed that mice made significantly more foot-faults at day 2 after MCAO compared to sham controls, indicating acute coordination deficits. The coordination deficits can be observed up to 90 days after stroke. 62,64 Compared to wildtype controls, mice with attenuated astrocyte reactivity displayed substantially more foot-faults up to 4 weeks after MCAO, indicating slower recovery. 62 Similarly, the foot-fault test is also able to detect short-term and long-term deficits in ischemic rats. It was reported that increased foot-faults were observed in rats at days 2–28 after MCAO. 64 In addition, rats exhibited increased foot-faults in a hypoxia–ischemia model, and the motor coordination deficits could still be detected 5 weeks after surgery. 65 These findings suggest that the foot-fault test is sensitive enough to detect both short-term and long-term motor coordination deficits in ischemic stroke. 66 The foot-fault test has also been used to detect limb deficits in hemorrhagic stroke. Increased foot-faults were observed in mice at days 3 and 7 after ICH, indicating short-term coordination impairment. Similar results were reported in ICH mice 2 and 3 weeks after stroke, 67,68 indicating long-term coordination deficits. Like mice, rats also displayed limb deficits at both acute (day 1) and chronic (day 21) phases after ICH. 69 In addition, rats showed increased foot-faults 2 weeks after SAH. 70 These results suggest that the foot-fault test is also useful in the assessment of motor coordination deficits in hemorrhagic stroke. 71 The foot-fault test is effective and objective in assessing motor function and limb coordination. It can also evaluate long-term stroke outcome in both ischemic and ICH models.
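The foot-fault readout described above, per-limb faults normalized by total steps and corrected against the pre-stroke baseline, can be sketched as follows (the subtraction is one common way to apply the baseline; some labs report a ratio instead):

```python
def foot_fault_rate(faults, total_steps):
    """Percentage of steps that were foot-faults for one limb."""
    return 100.0 * faults / total_steps

def baseline_corrected(post_faults, post_steps, pre_faults, pre_steps):
    """Post-stroke foot-fault rate minus the animal's own pre-stroke rate.

    Using each animal's baseline reduces individual variation, as
    suggested in the text; the exact normalization is an assumption.
    """
    return (foot_fault_rate(post_faults, post_steps)
            - foot_fault_rate(pre_faults, pre_steps))
```

A sham animal with a stable gait would score near zero after correction, while a stroke animal would show an elevated contralateral rate.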
2.5 Cylinder test One of the most significant symptoms of stroke is sensorimotor asymmetry, which can be assessed by many behavior tests, including the cylinder test. The cylinder test was initially used to evaluate spontaneous motor activity of the forelimbs after CNS injury. It was later modified to test limb-use asymmetry after stroke. 72 Generally, animals are placed in a clear cylinder (9 cm diameter × 15 cm height for mice, and 20 cm diameter × 30 cm height for rats) and their exploring behavior (use of the forelimbs to contact the wall and to land) is recorded with a camera (Fig. 1C). 73 A mirror is placed behind the cylinder to enable recording when animals turn away from the camera. A total of 20 limb movements within a maximum of 10 minutes are recorded. By quantifying the percentage of impaired and non-impaired forelimb use for exploring and landing, limb-use asymmetry can be assessed. Since rodents are less likely to move after certain injuries, including stroke, this test can be performed in a dark environment to encourage locomotion. The cylinder test has been used to determine limb-use asymmetry in ischemic stroke. It was reported that mice used the ipsilateral forelimb much more frequently than the contralateral limb after MCAO. The asymmetry peaked at day 3 after injury, remained detectable at day 15, but disappeared by day 40. 74 In addition, mice with hypoxic ischemia showed more ipsilateral limb movements compared to sham controls at days 10 and 21 after injury, 74 indicating long-term limb asymmetry. Similarly, rats with focal ischemia displayed higher ipsilateral forelimb usage than the pre-surgery baseline level up to 28 days after injury. 75 Interestingly, bone marrow cell treatment significantly enhanced the recovery of ischemic rats, which exhibited no limb-use asymmetry in the cylinder test at day 28 after injury.
76,77 Furthermore, in a long-term cortical ischemic study, ischemic rats demonstrated a strong asymmetry even at day 77 after injury. 76,77 These results suggest that the cylinder test is a sensitive assessment for long-term limb asymmetry in ischemic stroke and can be used to evaluate neuroprotective effects of novel treatments. 78 In addition, the cylinder test can also be used in hemorrhagic stroke. In the collagenase-induced ICH model, rats showed increased ipsilateral limb usage up to 4 weeks after stroke. Interestingly, rehabilitation treatment was able to improve the performance of ICH rats in this test at day 28 after stroke. 79,80 Similar results have also been observed in the autologous blood-induced ICH model. For example, rats showed elevated ipsilateral forelimb usage at day 28 after injury, and an oxygenase inhibitor significantly improved the outcome by restoring contralateral forelimb usage to the baseline level at day 14 after stroke. 80 In sharp contrast to ICH, SAH rarely induces limb asymmetry in the cylinder test, 81 probably because SAH causes global injury rather than unilateral damage in the brain. 82 The cylinder test is an easy and objective test for behavioral asymmetry. No special equipment is required, and administrative demands are minimal. The cylinder test is able to detect long-term stroke outcome in both ischemic stroke and ICH models. However, it is not useful in models with global brain injury, such as SAH. 76–80 2.6 Corner test The corner test assesses the direction pattern of sensorimotor dysfunction. It was initially applied in rats and later modified for use in mice. 83 Generally, two 30 cm × 20 cm cardboards are attached at a 30° angle with a small opening at the joint (Fig. 1D). 62 Animals are placed between the two cardboards, so that the vibrissae on both sides are stimulated when they enter the corner (Fig. 1D). The direction in which they turn back is recorded.
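Limb-use asymmetry from forelimb contact counts, whether wall touches in the cylinder test or turn directions in the corner test, is often summarized as a single laterality index; a minimal sketch (this particular formula is one common convention, not a readout prescribed by the text):

```python
def laterality_index(ipsi, contra, both=0):
    """Asymmetry score from per-side event counts.

    `ipsi` and `contra` count uses of the ipsilateral (unimpaired) and
    contralateral (impaired) forelimb; `both` counts simultaneous uses.
    Returns 0 for no preference, +1 for exclusive ipsilateral use.
    """
    total = ipsi + contra + both
    return (ipsi - contra) / total
```

Under this convention, a sham animal scores near 0, while the post-MCAO bias toward the ipsilateral limb described above produces a positive index.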
Normally, animals have no preference for either direction. After stroke, which causes contralateral limb deficits, animals tend to use their ipsilateral limbs to turn back. A higher tendency toward one side indicates a more severe stroke outcome. As a sensorimotor asymmetry assessment, the corner test has been applied in unilateral stroke studies to evaluate stroke outcome. Ischemic mice made more ipsilateral turns, while sham controls turned to either side equally. This behavioral asymmetry was detected from 2 to 90 days after injury. 62 Similarly, ischemic rats displayed significantly more ipsilateral turns than sham controls after ischemic stroke, and the difference could still be observed 4 weeks after injury. 62 In addition, ischemic rats with bone marrow cell treatment made fewer ipsilateral turns within 28 days after injury, 84 indicating decreased sensorimotor deficits. These results suggest that the corner test can be used to assess sensorimotor asymmetry after ischemic stroke and to evaluate neuroprotective effects of novel treatments. 76,77 The corner test is also effective in assessing sensorimotor function in hemorrhagic stroke. It was reported that ICH rats showed significantly increased ipsilateral turns compared to sham controls, and that this asymmetry remained detectable 4 weeks after injury. In addition, ICH rats with deferoxamine treatment made fewer ipsilateral turns at day 28 after stroke, indicating improved recovery. 85 These findings suggest that the corner test is also able to assess sensorimotor function in ICH. It should be noted that the corner test cannot be used in SAH, which causes global brain damage and thus minimal sensorimotor asymmetry. 86 Compared to other tests that assess sensorimotor asymmetry, the corner test has several advantages. First, it is an objective and quantitative test. Next, it is able to evaluate sensorimotor deficits in the chronic phase after stroke.
One drawback of the corner test, however, is that animals may not perform well when they are too sick or lose motivation due to repeated testing. 87 To reduce or avoid this effect, long intervals should be given between tests. In addition, the use of the corner test should be limited to unilateral stroke models, since global stroke models (e.g., global ischemia and SAH) result in only mild sensorimotor asymmetry. 88 2.7 Adhesive removal test The adhesive removal test is widely used in rodents to evaluate sensorimotor dysfunction and motor asymmetry. Briefly, animals are placed into a 15 cm × 25 cm transparent box, and two similar adhesive tapes are attached to the hairless part of each forepaw with the same pressure. The time it takes to contact and remove the stimuli is recorded. In general, animals spend more time contacting and removing the adhesive tape from the contralateral forepaw, while they have no problem contacting and removing the adhesive tape from the ipsilateral forepaw. 4,73,89–91 In addition, the adhesive removal test with modified protocols can also be used in stroke studies. For example, adhesive tapes were attached to the vibrissae of rodents to study sensory function in a photothrombotic stroke model, 4 and adhesive tapes of various sizes were used to determine the stimulation threshold for removal behavior in a focal ischemic study. 92,73 The adhesive removal test has been used to assess long-term stroke outcome in ischemic models. In a distal focal cerebral ischemia model, the adhesive removal test was able to detect sensorimotor deficits in mice up to 3 weeks after injury. Another study showed that cortical ischemic rats spent significantly more time removing the stimuli on their forelimb wrists compared to sham controls 6 weeks after injury.
93 Using a modified protocol, it was reported that larger sticky tapes were required to trigger the removal behavior on the contralateral forelimb (the functionally impaired side) in ischemic rats in both the acute (1–14 days after injury) and chronic (21–30 days after injury) stages. 94 In addition, cerebral ischemic rats with bone marrow cell treatment exhibited reduced time to remove the stimuli at day 14 after injury, 73 indicating faster recovery. Together, these results suggest that the adhesive removal test is a sensitive test for long-term function assessment after ischemic stroke and can be used to evaluate neuroprotective effects of novel treatments. 95 The adhesive removal test has also been used to assess sensorimotor function in hemorrhagic stroke. Mice with ICH showed a longer latency to contact the stimulus and took more time to remove it, indicating sensorimotor impairment after stroke. This difference could be detected up to 21 days after injury, and bone marrow cell treatment significantly improved sensorimotor function. 96 Similarly, rats exhibited a longer latency to contact the stimuli and/or spent more time removing the sticker at day 21 after SAH. 96 Additionally, SAH rats perforated from the right internal carotid artery (ICA) showed more significant sensorimotor deficits on the left paw than the right paw at day 6 after stroke, and these deficits were attenuated by intranasal stem cell treatment. 97 These findings suggest that the adhesive removal test can be used to assess long-term outcome and recovery pattern in hemorrhagic stroke. 98 The adhesive removal test has various advantages. It is an objective and sensitive test for sensorimotor function assessment, and it can be used in the chronic phase to assess long-term stroke outcome. It should be noted, however, that the adhesive removal test has large variation and requires multiple rounds of training and strict control of variables to obtain reliable data.
For example, animals are usually trained once a day for 5 days before stroke induction to minimize individual variation. 2.8 Rotarod test The rotarod test evaluates equilibrium behavior and locomotor ability in rodents. It has been used in many disease models, including stroke. 99–101 Basically, a 3 cm (diameter) × 40 cm (length) rod is covered with sticking plasters to increase its roughness. An electric motor rotates the rod at a speed of up to 20 rpm. A landing platform with a soft surface is placed 18 cm below the rod to protect falling animals. Before the test, animals are trained to perform the task: they are placed on the rod with no rotation for 30 seconds, then with a constant slow (4 rpm) rotation. Animals should be trained until they can stay on the rod for at least 1 minute. After the training trial, animals are placed on the rod for the testing trial, in which the rod rotates at an increasing speed from 4 rpm to 20 rpm over 2 minutes (Fig. 1E). 102 The latency before they fall onto the platform is recorded. Rodents with impaired locomotor function fall from the rod sooner than those with normal locomotor function. 101 The rotarod test has been used to assess locomotor function after ischemic stroke. While trained rats were able to stay on the rod throughout the whole task, those with cerebral ischemia fell off quickly compared to sham controls, and the difference remained significant even after 6 weeks. Interestingly, ischemic rats receiving treatments, such as neurotrophic factors and bone marrow cells, showed improved long-term recovery up to 35 days after injury. 94,95 These findings suggest that the rotarod test is capable of detecting long-term outcome and recovery pattern in ischemic stroke. In addition, the rotarod test has also been applied in hemorrhagic stroke. In an ICH model, stroke mice had an acute decrease in latency in the rotarod test at day 3 after injury compared to sham controls.
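The accelerating protocol described above (4 rpm to 20 rpm over 2 minutes) can be written as a speed schedule; a minimal sketch, where the linear ramp shape is an assumption, since commercial rotarods offer several acceleration profiles:

```python
def rotarod_speed(t, start_rpm=4.0, end_rpm=20.0, ramp_s=120.0):
    """Rod speed (rpm) at time t seconds into the testing trial.

    Linearly ramps from `start_rpm` to `end_rpm` over `ramp_s` seconds,
    then holds the top speed for any animal still on the rod.
    """
    if t >= ramp_s:
        return end_rpm
    return start_rpm + (end_rpm - start_rpm) * t / ramp_s
```

Given a fall latency, the speed at which the animal fell follows directly, e.g. a fall at 60 s corresponds to 12 rpm under this ramp.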
A gradual recovery pattern was observed at days 3–21 after injury, and it was improved by stem cell treatment. 96 Consistent with this report, rats with neural stem cell transplantation exhibited improved performance in the rotarod test at day 60 after ICH. 96 Similarly, SAH mice showed reduced latency before falling compared to sham controls at 24 hours after injury. 103 In another study, SAH rats fell faster in the rotarod test at 48 hours after injury, and an Nrf2 activator improved their performance. 104 A long-term study found motor deficits in SAH mice 3 weeks after injury. 105 These studies suggest that the rotarod test is able to detect long-term outcome and recovery pattern in ICH and SAH. 106 The rotarod test is a sensitive, objective, and quantifiable test. It is capable of detecting equilibrium behavior and locomotor function at both acute and chronic phases after stroke. It can also be used to determine recovery patterns and evaluate neuroprotective effects of novel treatments. However, the rotarod test requires special equipment, and training is usually needed to obtain consistent results. 2.9 Wire hanging test The wire hanging test assesses multiple aspects of locomotor ability, including grip strength, endurance, and body coordination. It is widely used in rodents with neurological disorders and/or muscle weakness. Basically, animals are placed on a wire hanging 50–60 cm above the ground for a maximum of 4 minutes, so that they have to suspend their bodies with their limbs. The time animals spend on the wire (latency before falling), which reflects muscle strength, is recorded. While hanging, animals might use their forelimbs or all four limbs to hold the wire. These different ways of hanging affect their performance, creating variation. To obtain more objective and reliable results, strategies that limit the ways of hanging, such as covering the hindlimbs with adhesive tape, may be applied.
107–109 In addition to muscle strength, the wire hanging test can also evaluate body coordination ability. In this case, different hanging behaviors are scored as described below: 0, the animal falls directly off the wire; 1, the animal hangs with its forelimbs; 2, the animal hangs with its forelimbs and attempts to climb up; 3, the hindlimbs are involved in hanging; 4, the animal hangs with all four limbs and its tail; and 5, the animal successfully escapes. 110 This scoring system correlates well with locomotor ability and body coordination. Moreover, a modified hanging test that uses a vertically hung wire hoop has been developed to more objectively assess muscle performance and endurance. 111 The hoop rolls as animals climb, keeping them in the same position, and animals tend to move consistently in one direction with similar behaviors. 112 The wire hanging test has been used to assess locomotor function in rodents after ischemic stroke. It was reported that rats failed to stay on the wire until day 5 after hypoxic ischemia; the latency to fall then gradually increased over time and fully recovered by day 17 after stroke. Additionally, minocycline was able to improve their performance in the wire hanging test during the recovery phase. 113,114 Similarly, an acute decrease in hanging latency was observed in mice after ischemic stroke. 113,114 However, unlike rats, which fell off immediately during the first 5 days post stroke, mice were able to hang on the wire 24 hours after stroke. 115 This difference might be caused by their different body weights. In a short-term study, the wire hanging test revealed locomotor differences between mice receiving rosuvastatin treatment and vehicle controls at days 2, 3, and 5 after cerebral ischemia. 114,115 A long-term study also demonstrated that ischemic mice had shorter latency on the wire up to 3 weeks after injury.
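The 0–5 hanging scale described above maps observed behaviors to scores; a minimal sketch (the behavior labels used as dictionary keys are illustrative placeholders, not terminology from the source):

```python
# Scores follow the 0-5 wire hanging scale described in the text.
HANGING_SCORES = {
    "falls off": 0,
    "forelimbs only": 1,
    "forelimbs, attempts climb": 2,
    "hindlimbs involved": 3,
    "four limbs and tail": 4,
    "escapes": 5,
}

def mean_hanging_score(observed_behaviors):
    """Average the per-trial behavior labels into a session score."""
    return sum(HANGING_SCORES[b] for b in observed_behaviors) / len(observed_behaviors)
```

Averaging across trials gives a per-animal coordination score that can be compared against the falling-latency readout.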
115–117 These findings suggest that the wire hanging test is able to evaluate long-term locomotor function after ischemic stroke and assess neuroprotective effects of novel treatments. 118 In addition, the wire hanging test has also been used to evaluate locomotor outcome in hemorrhagic stroke. In a cortical ICH model, mice showed decreased falling latency after injury, which recovered to baseline 2 weeks after stroke. It should be noted that mice with hippocampal ICH and sham controls exhibited comparable falling latency, 110 suggesting that hippocampal impairment hardly affects hanging ability and grip strength. Interestingly, fingolimod, a sphingosine-1-phosphate receptor modulator, substantially increased the latency on the wire at days 1 and 3 after ICH. 110 Similarly, in an SAH model, mice displayed locomotor deficits at days 1, 3, and 14 after surgery. 119 In addition, a cyclooxygenase-2 inhibitor increased the falling latency of SAH mice, indicating improved locomotor function. 71,120 These results suggest that the wire hanging test is useful in the assessment of hemorrhagic stroke outcome and potential neuroprotective effects of novel treatments. 120 The wire hanging test is relatively easy to perform and requires little equipment. It can assess long-term stroke outcome in ischemic models. However, training is required to reduce individual variation and obtain consistent results. 2.10 Skilled reaching tasks Based on the association between limb function and cortical regions, skilled reaching tasks were developed to assess limb motor function. They are widely used to evaluate functional outcome after stroke. 121 Skilled reaching tasks are a series of tasks that train animals to reach their limbs through limited spaces, such as slots, staircases, or tubes. 122 Food pellets are commonly used to trigger the reaching behavior. The single pellet reaching task is a simple but widely used form of skilled reaching task.
It was initially developed in rats and later adapted to mice. 123–125 In this test, an animal is placed in a plexiglass reaching box (19.5 cm × 8 cm × 20 cm for mice and 30 cm × 14 cm × 45 cm for rats) with a 10 mm-wide slot in the middle of the front wall (Fig. 1F). 123,126 A tray is placed in front of the slot with two indentations spaced 1 cm from the slot on both sides (Fig. 1F). The animal first receives a training trial to learn how to obtain food pellets. In the training trial, a food pellet is placed within tongue distance on the tray and then gradually moved farther away, so that the animal needs to use its paw to retrieve the pellet. After a week of training, animals have learned how to retrieve food pellets. Then, pellets are placed in both indentations to reveal the animal's preferred limb. Twenty pellets are given each day. A successful reach is counted when the animal obtains the pellet, and a failure is counted if the animal knocks the food away or drops the pellet. Unilateral brain damage can be assessed by the percentage of successful reaches for pellets with the contralateral limb. The single pellet reaching task can be used to distinguish compensation from recovery after unilateral neurological injury, such as stroke. Compensation can develop after stroke to make up for the functional impairment, which may interfere with the evaluation of the recovery pattern. 127 The single pellet reaching task is able to assess motor function of the ipsilateral and contralateral limbs independently. Enhanced performance of the unimpaired limb may be a sign of compensation instead of recovery. 128 The single pellet task has been used to assess skilled motor recovery in various ischemic models. Rats with an ischemic insult to the caudal forelimb area showed a significantly lower success rate in the reaching task, which gradually recovered within 10 days after injury. Similarly, rats with ischemic stroke in the sensorimotor cortex exhibited poor performance in the single pellet task.
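The daily readout of the single pellet task, percentage of successful reaches out of the 20 pellets presented, can be sketched as follows (the outcome labels are illustrative placeholders):

```python
def reaching_success_rate(outcomes):
    """Percentage of successful reaches in one daily session.

    `outcomes` holds one entry per pellet: 'success' when the animal
    obtains the pellet, 'failure' when it knocks the food away or drops
    it, following the scoring rules in the text.
    """
    return 100.0 * outcomes.count("success") / len(outcomes)
```

Scoring the contralateral limb's sessions separately from the ipsilateral limb's is what lets the task distinguish genuine recovery from compensation by the unimpaired limb.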
129 Compared to rats without rehabilitation, which had a low recovery rate and severe skilled motor deficits, ischemic rats with rehabilitation training recovered faster and showed pre-stroke-level performance in the single pellet task 5 weeks after injury. 130 In addition, focal ischemic stroke also dramatically reduced the success rate in the single pellet task. 130 It was reported that skilled motor deficits remained at day 28 after stroke and that mice treated with bone marrow stromal cells displayed increased recovery 1 week after injury. 131 These results suggest that the single pellet reaching task is able to evaluate long-term skilled motor deficits in ischemic stroke and assess potential neuroprotective effects of novel treatments. 131 The single pellet task has also been applied in hemorrhagic studies. In an ICH model, rats showed skilled motor deficits at both days 7–11 and days 28–32 after stroke, with relatively better performance at the latter time point. ICH mice also demonstrated significantly decreased reaching success up to 28 days after stroke. 132 These findings suggest that the single pellet reaching task is also suitable for skilled motor deficit assessment in hemorrhagic stroke. It should be noted, however, that the severity of ICH may affect the outcome of this test: one study reported that rats with severe ICH failed to perform the task. 79,133 Skilled reaching tasks, such as the single pellet task, are specialized for assessing skillful use of the forelimbs and detecting skilled motor deficits. The single pellet task is a sensitive test and can be used to evaluate both short-term and long-term stroke outcomes. However, a series of training trials is required to perform this task. In addition, individual variation largely affects the performance of animals. 3 Cognition tests Cognition involves a variety of important neurological functions, including memory and emotion. Many behavioral tests have been developed to assess memory in animals.
In primates, such as humans and monkeys, visual recognition is commonly used to probe memory: subjects need to identify familiar and unfamiliar visual stimuli. This is, however, too complex for rodents due to the physical differences between species. Thus, tasks that follow their natural responses have to be used in rodents. A series of behavioral tests, including the Morris water maze, Y-maze, novel object recognition/location tests, radial-arm maze, and elevated-plus maze, have been developed to assess memory and cognition in rodents. 134,135 3.1 Morris water maze The Morris water maze test assesses both cognition and locomotor function. It is widely used to evaluate long-term cognitive function after stroke in rodents. In this test, animals are placed in a large (120 cm diameter for mice and 180 cm diameter for rats) circular water pool and forced to swim (Fig. 1G). 136,137 The pool is divided into four quadrants, and a platform is hidden just below the water surface in one quadrant (Fig. 1G). The water is colored with paint to prevent visualization of the hidden platform. Animals can escape once they find the hidden platform. The latency to reach the platform is used as a readout of cognitive function: increased latency indicates memory deficits and cognitive decline. In addition to the latency to reach the platform, the time animals spend in the target quadrant may also be used as a readout. With the help of automated video tracking software, the route of the animals, total swimming distance, and average swimming speed can also be visualized and analyzed. 138 The Morris water maze has been used to assess cognitive function after ischemic stroke. It was reported that mice with focal ischemic stroke showed increased latency to find the hidden platform 2, 4, and 6 weeks after injury compared to sham controls. A similar result was observed in rats 12–14 weeks after ischemic stroke, 139,140 indicating declined cognitive function in rodents after ischemic stroke.
In addition, rats kept in an enriched environment spent less time finding the hidden platform than those in a deprived environment 5 weeks after ischemic stroke, 90 suggesting a beneficial role of an enriched environment in cognitive function after stroke. It should be noted, however, that there is also evidence showing unaltered cognitive function in mice after stroke. For example, mice with and without stroke failed to exhibit any difference in the latency to find the hidden platform 2 weeks after MCAO. 141 This discrepancy may be due to the different protocols used in these studies. 4 In addition, the Morris water maze has also been used to evaluate cognitive outcome in hemorrhagic stroke. In an ICH model, rats displayed increased swimming distance 2 but not 8 weeks after injury compared to sham controls. In this study, the swimming distance instead of latency was used as a readout for spatial memory, because swimming speed may interfere with the interpretation of the result. For instance, animals with a higher swimming speed may find the platform faster even with poorer spatial memory. Using swimming distance eliminates the effect of different swimming speeds and more accurately reflects the spatial memory of the tested subjects. Consistent with this finding, another study also reported that ICH rats failed to show any difference in the Morris water maze 8 or 16 weeks after injury. 49 These results suggest that the Morris water maze is able to detect cognitive impairment within the first 8 weeks after ICH. In an SAH model, SAH rats displayed significantly increased escape latency at day 5 but not days 1–4 after stroke compared to sham controls. 142 In another study, SAH rats displayed spatial learning deficits at days 4 and 5 after SAH, 143 indicating short-term cognitive impairment. In a long-term study, SAH rats exhibited prolonged escape latency at days 29–35 after stroke, 144 indicating long-term cognitive decline.
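The distance-based readout discussed above can be computed alongside escape latency from the tracked trajectory; a minimal sketch, where the frame rate and platform geometry are assumed inputs:

```python
import math

def water_maze_readouts(track, fps, platform_xy, platform_r):
    """Escape latency (s), path length, and mean speed from a tracked swim.

    `track` is a list of (x, y) positions in cm sampled at `fps` Hz; the
    trial ends at the first sample within `platform_r` cm of the platform
    center. Reporting path length removes the swimming-speed confound
    described in the text.
    """
    px, py = platform_xy
    for i, (x, y) in enumerate(track):
        if math.hypot(x - px, y - py) <= platform_r:
            path = track[: i + 1]
            break
    else:
        path = track  # platform never reached within the trial
    distance = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(path, path[1:])
    )
    latency = (len(path) - 1) / fps
    speed = distance / latency if latency > 0 else 0.0
    return latency, distance, speed
```

Two animals with equal latencies but different speeds would show different path lengths, so the distance readout is the fairer between-group comparison.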
The Morris water maze is less frequently used in mice than in rats in SAH. A recent study revealed that SAH mice took longer to locate the hidden platform during learning trials on days 1–4, and spent less time in the target quadrant during the probe trial on day 5, compared to sham controls. 145 A similar result was observed in SAH mice on days 14–19 after injury. 146 These results suggest that the Morris water maze is able to detect SAH-induced spatial learning deficits in rats up to 5 weeks after injury and in mice up to 19 days after injury. 147 Future research should determine whether the Morris water maze is able to assess cognitive function in SAH mice at later time points. 143–147 The Morris water maze allows assessment of both cognition and locomotor function in rodents. It is useful in the evaluation of both short-term and long-term outcomes of stroke. This test, however, requires long training trials, and the training protocols may affect animal performance. Additionally, its results are affected by the locomotor ability of the animals; therefore, it is better to use this test for animals with similar swimming speeds. Alternatively, modifications may be applied to obtain more reliable results: for example, swimming distance before reaching the platform, rather than time spent in the target quadrant, may be used when animals differ in locomotor ability. 3.2 Y-maze The Y-maze is commonly used to assess spontaneous alternation and spatial memory in rodents. It consists of three arms at 120° angles to one another. The Y-maze was initially used to study exploration behaviors 148 and behavior patterns (known as spontaneous alternation). 149,150 Later, the Y-maze was used to assess spatial memory in rodents by evaluating spontaneous alternation. 149 This test includes a single trial, in which the animal is placed in one arm of the Y-maze and allowed to explore for a period of time (Fig. 1H). 148 The exploration time varies from 3 to 15 minutes depending on the type of research. 
The sequence of arm entries and the total number of entries are recorded. Spontaneous alternation is shown by the tendency to enter an arm different from the two visited previously. For example, if the three arms are labeled 1–3 and the sequence of visits is (1,2,3,2,1,3,1,3), the total number of alternation opportunities is all the visits except the first two, i.e. six. The third, fifth, and sixth visits were different from the previous two, so the percentage of alternation is three out of six (50%). 148,151–153 Because rodents become habituated to the environment, long exposure to the maze may decrease their exploration activity and affect their alternation tendency. To minimize this effect, a two-trial memory task using the Y-maze was developed based on the single-trial Y-maze. 149 In the first trial, one of the arms is blocked and the animal is allowed to explore the remaining two arms (Fig. 1H). 154 In the second trial, the blocked arm is opened and considered the novel arm. The animal is allowed to explore all three arms, and the entrance to each arm is recorded. The spatial memory of animals is revealed by discrimination toward the novel arm in the second trial. It should be noted that the exploration time and the inter-trial interval may also affect the result: it was reported that animals exhibited higher discrimination toward the novel arm in the first 3 minutes of exploration with a 30-minute interval. 154 The Y-maze task has been used as a cognitive assessment after ischemic stroke. In a focal cerebral ischemia model, the spontaneous alternation of ischemic rats (67.1%) was slightly lower than that of sham controls (76.0%) at day 3 after injury, indicating decreased short-term cognitive function. Similarly, reduced spontaneous alternation in the Y-maze was also observed in mice at days 7–13 after ischemic injury. 41 In a long-term study, the same result was found in rats 8 weeks after focal cerebral ischemia. 
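The alternation scoring just described is easy to make precise. Here is a minimal sketch (function and variable names are ours, not from the review): an alternation is counted whenever an arm entry differs from both of the previous two entries.

```python
# Minimal sketch of spontaneous-alternation scoring for the single-trial Y-maze.
def spontaneous_alternation(visits):
    """visits: sequence of arm labels in entry order; returns % alternation."""
    opportunities = len(visits) - 2   # every visit except the first two
    if opportunities <= 0:
        return 0.0
    alternations = sum(
        visits[i] != visits[i - 1] and visits[i] != visits[i - 2]
        for i in range(2, len(visits))
    )
    return 100.0 * alternations / opportunities

# The worked example from the text: sequence (1,2,3,2,1,3,1,3) -> 50.0
print(spontaneous_alternation([1, 2, 3, 2, 1, 3, 1, 3]))  # 50.0
```

A perfectly alternating sequence such as (1,2,3,1,2,3) scores 100%, while repeated shuttling between two arms scores 0%.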
155 These findings suggest that the Y-maze can be used to assess cognitive function at both short and long term after ischemic stroke. 156 Unlike in ischemic stroke, the Y-maze test is less frequently used in hemorrhagic stroke, and conflicting results exist. In an ICH model, no difference in the Y-maze was observed between stroke and sham-operated mice at day 3 or 10 after injury. Similarly, stroke mice and sham controls failed to show any difference in spontaneous alternation at day 3 after SAH. 157 In contrast, another study found that SAH mice displayed decreased spontaneous alternation at day 3 after stroke compared to sham-operated mice. 158,159 The Y-maze utilizes the natural exploring behavior of rodents to assess cognitive function. It is a simple test and requires no training. The parameter (spontaneous alternation) is easy to record, and thus this test is less demanding on the observers. The Y-maze is able to assess both short-term and long-term stroke outcomes in ischemic models, but is less sensitive in hemorrhagic models. Additionally, the inter-test interval should be considered if the test is performed at multiple time points, because the interest in exploration decreases once the animals become habituated to the maze. 149 3.3 Novel object recognition/location tests The one-trial object recognition test was first developed to assess memory in rats. This test involves reward training using a large number of stimuli. Specifically, two identical items are placed in two arms of a Y-maze and the animal receives a reward after choosing one side. Next, a novel object is placed in the third arm and the animal is rewarded for choosing the novel arm. Then, the other two objects are replaced by two new objects, one identical to the one in the third arm and one novel; the animal is rewarded again for choosing the novel one. This trial is repeated ten times. This test is complex, and thus a simplified version known as the novel object recognition test has been developed. 
160 This test is performed in an open field instead of a Y-maze. 161 In the first session, two identical objects are placed in the back corners (Fig. 1I). An animal is placed near the center of the front wall with its back toward the objects and allowed to explore for 5 minutes. In the second session, one identical object is replaced with a novel object and the animal is allowed to explore for 3 minutes (Fig. 1I). The total time the animal spends exploring each object is recorded; discrimination toward the novel object indicates object recognition. In addition, a similar measurement has also been applied to assess memory associated with object location. 162 In the first session, two identical objects are placed in two adjacent corners of the field (Fig. 1I). The animal is placed in the field for 3 minutes and then returned to its home cage. The second session starts 15 minutes later, in which one of the objects is moved to the other adjacent corner (Fig. 1I). The animal is placed in the field for 3 minutes, and the time it spends on each object is recorded. Recognition of the novel object location is revealed by discrimination toward the object in the new location. The novel object recognition/location tests have been widely used in ischemic stroke. Rats with transient global ischemia spent significantly less exploration time on the novel object at day 6 after injury compared to sham controls, indicating short-term visual working memory deficits after stroke. Similarly, these ischemic rats spent much less exploration time on the object at the new location at day 6 after injury compared to sham controls, 163 indicating spatial memory deficits. Interestingly, long-term estradiol treatment was able to improve both visual and spatial memory in ischemic rats. 163 Another study showed that ischemic rats had spatial memory deficits at 6 but not 18 months after stroke. 163 Long-term visual memory deficits, on the other hand, were not detected in ischemic rats. 
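Discrimination in these tests is usually summarized with a simple index computed from the exploration times. The definitions below follow a common convention (they are not quoted from this review), and the function names are illustrative.

```python
# Sketch of standard readouts for the novel object / location tests.
def discrimination_index(t_novel, t_familiar):
    """(t_novel - t_familiar) / total exploration time.

    Ranges from -1 to 1; 0 means no preference, positive values mean
    preference for the novel object (or the new location).
    """
    total = t_novel + t_familiar
    if total == 0:
        return 0.0
    return (t_novel - t_familiar) / total

def recognition_ratio(t_novel, t_familiar):
    """Fraction of total exploration time spent on the novel object."""
    total = t_novel + t_familiar
    return t_novel / total if total else 0.0
```

For example, an animal exploring the novel object for 30 s and the familiar one for 10 s has a discrimination index of 0.5 (recognition ratio 0.75), whereas equal exploration times give an index of 0, the outcome reported below for SAH rats.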
164 These results suggest that the novel object recognition/location tests are able to assess memory after ischemic stroke and to evaluate the neuroprotective effects of potential treatments. 164 The novel object recognition/location tests have also been used to assess cognitive function in hemorrhagic stroke. It was reported that both the amount and the percentage of time spent exploring the novel object were reduced in mice with hippocampal ICH at day 21 after stroke. In addition, rats with ICH in the cortex, ventricle, or hippocampus showed lower discrimination toward the novel object at day 21 after stroke, while those with ICH in the striatum exhibited no difference compared to sham controls, 110 indicating that visual recognition deficits might be less correlated with striatal neuronal loss. Similarly, SAH rats displayed no discrimination toward the novel object 4 weeks after injury, while sham controls showed a significant preference for the novel object, 165 indicating cognitive impairment after SAH. These results suggest that the novel object recognition test is able to assess long-term cognitive function after hemorrhagic stroke. 166 The novel object recognition/location tests are relatively simple and objective. They are also very sensitive and able to assess long-term cognitive function after stroke. However, training trials are required. In addition, the result might be affected by various factors, such as the size of the exploration field, the shape of the objects, the interval between sessions, and the animals' activity and anxiety levels. For example, it was reported that discrimination toward the novel object was significantly decreased after an inter-trial interval of over 24 hours. 161 3.4 Radial-arm test The radial-arm maze was developed to study working memory in rats. Unlike the Y-maze, which primarily uses spontaneous alternation to assess memory function, the radial-arm maze uses rewards as motivation to promote decision making. 
The radial-arm maze has eight arms radiating from a central platform. 167,168 In the original radial-arm maze test, a reward (usually food or water) is placed at the end of each arm (Fig. 1J). A food-deprived animal (80–85% of normal body weight) is placed on the central platform and allowed to freely explore the maze until all rewards are found. The total number of arm entries before obtaining all the rewards is recorded; a low entry number indicates that the animal is more likely to visit unfamiliar arms, suggesting stronger working memory. It is worth noting that the original test works better in rats than in mice. For example, unlike rats, mice fail to improve the average number of correct choices of unvisited arms after 20 days of training, although they do choose more correct arms. Interestingly, when barriers are used to slow down the movement of mice (mice are more active than rats), their accuracy increases. 167,169 In addition, mice show significant improvement in making correct choices in a modified six-arm maze with only three arms baited (Fig. 1J). 170 These results indicate that a well-designed training trial is essential for memory acquisition in rodents. 170 The radial-arm maze is widely used in the transient forebrain ischemia model, which induces hippocampal damage. In a short-term study, ischemic rats showed more visits to unbaited arms and to previously visited arms during days 5–11 after stroke, indicating severely impaired short-term working memory and reference memory. In a long-term study, ischemic rats displayed increased working and reference memory errors throughout all trials up to 65 days after stroke, 171 suggesting impaired long-term memory. Similar results have also been observed in mice. For example, mice with forebrain ischemia demonstrated increased working memory and reference memory errors 2 and 3 weeks after stroke, and neuroprotective treatments were able to decrease these errors. 
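The two error types mentioned above can be scored from an entry log. The sketch below uses the usual definitions (a working-memory error is a re-entry into an already-visited baited arm; a reference-memory error is an entry into a never-baited arm); the function name and the entry-log format are illustrative assumptions, not from the review.

```python
# Sketch of radial-arm maze error scoring from a hypothetical entry log.
def radial_arm_errors(entries, baited_arms):
    """entries: arm labels in entry order; baited_arms: set of baited arms.

    Returns (working_memory_errors, reference_memory_errors).
    """
    visited_baited = set()
    working_errors = 0
    reference_errors = 0
    for arm in entries:
        if arm not in baited_arms:
            reference_errors += 1          # entered a never-baited arm
        elif arm in visited_baited:
            working_errors += 1            # re-entered an already-emptied arm
        else:
            visited_baited.add(arm)        # first (correct) visit
    return working_errors, reference_errors
```

For instance, with arms 1–4 baited, the entry sequence 1, 2, 2, 5, 3 yields one working-memory error (the repeat of arm 2) and one reference-memory error (arm 5).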
172 These findings suggest that the radial-arm maze is able to assess long-term working/reference memory after ischemic stroke and to evaluate the neuroprotective effects of potential treatments. 173,174 In addition, the radial-arm maze has also been applied to assess cognitive function in ICH. ICH rats exhibited a significantly higher number of entries into unbaited arms 2 weeks after injury compared to sham controls, and dexmedetomidine improved their performance in the radial-arm maze. In addition, ICH rats and sham controls showed comparable performance in the radial-arm maze 18–28 weeks after ICH, 175 suggesting that this test is unable to assess long-term cognitive function after ICH. It should be noted that the radial-arm maze is rarely used in SAH. 142 The radial-arm maze is a relatively simple and sensitive test. It is able to assess long-term cognitive function in ischemic stroke. As in most tests that use food as a reward, food deprivation can be used to enhance animal performance. It should be noted, however, that designing a trial that meets the research objective is the key to a successful and efficient radial-arm maze test; thus, a preliminary study and training may be required to determine the appropriate conditions for each study. 3.5 Elevated-plus maze Accumulating evidence suggests that patients show symptoms of anxiety after stroke. To evaluate anxiety in animals, various behavioral tests have been developed. The elevated-plus maze, a widely used anxiety test, utilizes the natural behavior of rodents: they tend to stay in corners or along perimeters under fear or stress. 176,177 The elevated-plus maze consists of four arms arranged in a "plus" shape, elevated 50 cm above the floor. Two of the four arms are open without walls (or with 0.5 cm walls to prevent mice from falling), and the other two arms are enclosed by 16 cm walls (Fig. 1K). 178,179 
An animal is placed in the center area with its head toward a closed arm and allowed to explore for 10 minutes (Fig. 1K). The number of entries into and the time spent in each arm are recorded. Animals with a low anxiety level have a higher percentage of open-arm entries and stay in the open arms longer. 180 The elevated-plus maze has been used to evaluate anxiety in rodents after ischemic stroke; mixed results, however, have been observed. In a global ischemia model, ischemic rats spent less time in the open arms at day 1 after injury, but significantly more time in the open arms at day 5 after injury, compared to sham controls. These rats and sham controls failed to show any difference at days 15 and 30 after injury. 181 Similarly, increased time spent in the open arms was observed in rats at days 3 and 7 after global ischemia. 181 In a transient global cerebral ischemia model, however, injured mice showed an elevated anxiety level at day 2 but not day 7 after stroke, compared to sham controls. 182 In general, a biphasic anxiety profile has been commonly observed after ischemic stroke in both mice and rats: an acute increase in anxiety level in the first week and a decrease in anxiety level in the second week. In contrast to these findings, a long-term study reported that mice with MCAO spent less time in the open arms compared to sham controls 9 weeks after injury, indicating persistent anxiety after ischemic stroke. 183 The differing results may be caused by multiple factors, including different animal strains/species, stroke methods, and training protocols. 184 In addition, the elevated-plus maze has also been used to assess anxiety in hemorrhagic stroke. In an ICH model, stroke animals showed no significant difference in anxiety level at day 30 after injury compared to sham controls. In an SAH model, however, injured rats spent less time in the open arms 3 weeks after stroke. 
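The two anxiety readouts described above (percentage of open-arm entries and percentage of time in open arms) are straightforward to compute from a visit log. Below is a minimal sketch assuming a hypothetical event log of (arm_type, duration_s) visits; the names and log format are illustrative, not from the review.

```python
# Sketch of elevated-plus maze readouts from a hypothetical visit log.
# Higher open-arm fractions are conventionally read as lower anxiety.
def open_arm_metrics(visits):
    """visits: list of (arm_type, duration_s), arm_type "open" or "closed".

    Returns (% open-arm entries, % time in open arms).
    """
    entries = len(visits)
    total_time = sum(d for _, d in visits)
    open_entries = sum(1 for arm, _ in visits if arm == "open")
    open_time = sum(d for arm, d in visits if arm == "open")
    pct_entries = 100.0 * open_entries / entries if entries else 0.0
    pct_time = 100.0 * open_time / total_time if total_time else 0.0
    return pct_entries, pct_time
```

For example, a log of two 30-second open-arm visits and two closed-arm visits of 90 and 50 seconds gives 50% open-arm entries but only 30% open-arm time, illustrating why the two percentages are reported separately.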
49,185 Another study showed that SAH mice spent less time in the open central area at day 13 but not day 27 after surgery, although they spent similar time in the open arms at both time points. 50 In general, the elevated-plus maze is less effective in assessing anxiety level in hemorrhagic stroke than in ischemic stroke. This may be due to the different neurological pathology of ischemic and hemorrhagic stroke; another possibility is simply that there are fewer studies examining anxiety in hemorrhagic stroke. 51 The elevated-plus maze is able to evaluate anxiety after stroke. It is an easy test, although special equipment is required. Since the elevated-plus maze utilizes the natural behavior of rodents to assess anxiety, no training is required for this test. It should be noted that the elevated-plus maze is more widely used in ischemic stroke than in hemorrhagic stroke. 4 Summary Rodent models are widely used in stroke studies. Compared to other species, rodents have several advantages, including fast breeding, low maintenance cost, and higher ethical acceptance. The neurological behaviors and higher brain functions, however, differ considerably between rodents and humans. For example, unlike human patients, whose neurological functions can be assessed by answering questions and following commands, rodents can only perform tasks that follow their natural responses. This makes it difficult to assess stroke outcome, especially cognitive function and consciousness. Choosing appropriate behavioral tests that meet the objective of the research is crucial in assessing stroke outcome and connecting findings in rodents to clinical trials. Here, we reviewed commonly used behavioral tests in rodent models of stroke, compared their applications in different stroke models, and discussed their advantages and disadvantages. 11 It should be noted that most behavioral tests were initially developed in rats for studies other than stroke, and were later modified for use in mice and/or for stroke. 
To obtain objective and accurate results, several factors should be considered. First, a baseline behavior should be acquired before stroke to minimize individual variation; in case different animals show different baseline behaviors, the baseline may be used to normalize the final results of these tests. In addition, training is required for certain behavioral tests. Since experimental animals grow up in a laboratory environment, they may lack the experience needed to perform certain tasks required in behavioral tests, such as removing adhesive stimuli from the limbs or keeping balance on a rotating rod. Failure to perform such tasks may be due to lack of experience rather than functional deficits. Training prepares animals for such tasks and reduces individual variation, leading to more objective and consistent results. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgement This work was supported by NIH R01HL146574 (to YY), NIH R21AG064422 (to YY), and American Heart Association Scientist Development Grant 16SDG29320001 (to YY). 
"FEIGIN",
"WRITINGGROUPM",
"BENJAMIN",
"BOUET",
"PATEL",
"HATTORI",
"FISHER",
"BEDERSON",
"WAGNER",
"VIRLEY",
"MACRAE",
"DELBIGIO",
"RYNKOWSKI",
"KAMII",
"BEDERSON",
"GHARBAWIE",
"GRABOWSKI",
"ZAUSINGER",
"CHEN",
"KLEINSCHNITZ",
"DEMEYER",
"COWEY",
"LONGA",
"BORLONGAN",... |
af28bd0954e04d09964a79ed8818482d_κ-deformed BMS symmetry_10.1016_j.physletb.2019.01.063.xml |
κ-deformed BMS symmetry | [
"Borowiec, Andrzej",
"Brocki, Lennart",
"Kowalski-Glikman, Jerzy",
"Unger, Josua"
] | We present the quantum κ-deformation of BMS symmetry, obtained by generalizing the lightlike κ-Poincaré Hopf algebra. On the technical level our analysis relies on the fact that the lightlike κ-deformation of the Poincaré algebra is given by a twist, and the lightlike deformation of any algebra containing Poincaré as a subalgebra can be done with the help of the same twisting element. We briefly comment on the physical relevance of the obtained κ-BMS Hopf algebra as a possible asymptotic symmetry of quantum gravity. | In recent years there has been a surge of interest in BMS symmetry at null infinity of asymptotically flat spacetimes [1] , [2] , [3] . This renewed interest in the seemingly exotic aspects of classical general relativity was fueled by the discovery of a surprising and close relationship between asymptotic symmetries and the soft graviton theorem [4] which, as it turned out, has its roots in Ward identities for supertranslations [5] , [6] . Moreover, the gravitational memory effect [7] appears to be related to both [8] , so that there is a triangle of interrelationships whose vertices are BMS symmetry, Weinberg's soft graviton theorem, and the memory effect. An extensive discussion of these effects can be found in the recent reviews [9] and [10] . It has also been argued recently [11] that the similarity of null infinity and the black hole horizon suggests that the charges associated with the BMS symmetry might be present at the black hole horizon, so that the black hole might have an infinite number of hairs, so-called soft hairs, in addition to the three classical ones: mass, charge, and angular momentum. It was argued in [11] and [12] that the presence of these charges may help solve the black hole information paradox. In this letter we investigate the properties of a κ-deformed generalization of the BMS symmetry. There are several reasons to be interested in such a generalization. 
The main one is the belief that investigations of κ-deformation might shed some light on the properties of the elusive quantum gravity. The deformation parameter of the κ-deformation of the Poincaré algebra [13–17] (see [18] for a recent review and more references) has the dimension of mass, and therefore it is natural to identify it with the Planck mass, or inverse Planck length, which in turn suggests a possible close relationship between this deformation and quantum gravity. In fact, it was shown in [19] that in the case of gravity in 2+1 dimensions the κ-Poincaré algebra is the algebra of symmetries of quantum flat space in non-perturbatively quantized quantum gravity, and therefore it plays in quantum gravity a role analogous to the fundamental role played by the Poincaré algebra in quantum field theory. There are three incarnations in which κ-deformation can appear. First, there is the mostly studied deformation in the time direction, in which case the rotation subalgebra of the Poincaré algebra is kept undeformed. This deformation satisfies the natural requirement that rotations are not deformed, and in particular that the angular momentum and spin composition laws are not modified. It is for this same reason that the space-like κ-deformation, with a preferred spatial direction of deformation, was never considered a phenomenologically viable possibility, and is treated as a mathematical curiosity with no real relevance for physics. The third possibility is the lightlike κ-deformation [20–27] . In contrast to the previous two cases it is implemented by a triangular r-matrix, which satisfies the Classical Yang–Baxter Equation. A remarkable property of this so-called non-standard deformation is that it is given by a twist, which makes its structure much simpler to construct [28] . In particular, any embedding of the algebra into a larger Lie algebra preserves the triangular structure and allows one to perform the deformation by the same twist element. 
Since the BMS algebra is ultimately tied to null infinity and contains the Poincaré subalgebra, in this paper we deform it using the lightlike κ-deformation. Another characteristic property of the twist quantization is that it deforms the coalgebraic sector of the corresponding universal enveloping algebra, leaving the Lie algebra untouched.¹ Moreover, knowing the twist one can also obtain the underlying noncommutative quantum-deformed spacetime structure. (¹ It should be noticed, however, that the first explicit formulas for the light-like κ-deformed coproducts were provided in a non-classical basis [20–22] without using the twist.) Let us start by reviewing the construction of the lightlike deformed Poincaré algebra as a Hopf algebra, following [24]; we will then extend this procedure to the BMS algebra. We start with the Poincaré algebra in four dimensions with light-cone generators (1) $M_{+-}, M_{+i}, M_{-i}, M_{ij}, P_+, P_-, P_i$, $i,j = 1,2$, satisfying (2) $[M_{+i}, M_{-j}] = -i(M_{ij} + g_{ij}M_{+-})$, $[M_{\pm i}, M_{\pm j}] = 0$; (3) $[M_{\pm i}, M_{jk}] = i(g_{ij}M_{\pm k} - g_{ik}M_{\pm j})$, $[M_{+-}, M_{\pm i}] = \pm i M_{\pm i}$; (4) $[M_{+-}, P_\pm] = \pm i P_\pm$, $[M_{\pm i}, P_j] = i g_{ij} P_\pm$; (5) $[M_{\pm i}, P_\pm] = [M_{+-}, P_i] = 0$, $[M_{\pm i}, P_\mp] = -i P_i$; (6) $[M_{ij}, P_k] = i(g_{ik}P_j - g_{jk}P_i)$. The metric used here has the nonzero entries $g_{+-} = g_{-+} = 1$, $g_{11} = g_{22} = -1$. The primitive coproduct structure and antipode, (7) $\Delta_0(X) = 1 \otimes X + X \otimes 1$ and (8) $S_0(X) = -X$, where $X$ denotes an arbitrary element of the algebra above, make it a Hopf algebra. This Hopf algebra (established, in fact, on the enveloping algebra) can be deformed by performing a "twist" on the coproduct and the antipode (see below) while leaving the commutator structure unchanged. 
Taking the linear transformations (9) $M_{+-} = i(l_0 + \bar l_0)$, $M_{12} = l_0 - \bar l_0$, $M_{+1} = \frac{i}{2}(l_{-1} + \bar l_{-1})$, $M_{+2} = \frac{1}{2}(l_{-1} - \bar l_{-1})$, $M_{-1} = \frac{i}{2}(l_1 + \bar l_1)$, $M_{-2} = -\frac{1}{2}(l_1 - \bar l_1)$, $P_+ = \frac{i}{2}T_{11}$, $P_- = -\frac{i}{2}T_{00}$, $P_1 = \frac{i}{2}(T_{10} + T_{01})$, $P_2 = -\frac{1}{2}(T_{10} - T_{01})$, one establishes that the Poincaré algebra is a subalgebra of the (extended) BMS algebra with generators $l_m$, $\bar l_m$, $T_{pq}$, $m,p,q \in \mathbb{Z}$, satisfying [29], [30] (10) $[l_m, l_n] = (m-n)l_{m+n}$, $[\bar l_m, \bar l_n] = (m-n)\bar l_{m+n}$, $[l_m, \bar l_n] = 0$; (11) $[l_l, T_{m,n}] = (\frac{l+1}{2} - m)T_{m+l,n}$, $[\bar l_l, T_{m,n}] = (\frac{l+1}{2} - n)T_{m,n+l}$. This is the infinite dimensional Lie algebra (10), (11) which we are going to deform as a Hopf algebra. It turns out, however, that this form of the algebra is not the most convenient for our purposes, and therefore we will use another, obtained by linear transformations of the generators $l_m$, $\bar l_m$, $T_{pq}$. To this end we will use the original Poincaré generators $M$ and $P$ satisfying the algebra (2)–(6) along with the linear combinations of the higher BMS generators defined as follows: (12) $k_n = l_n + \bar l_n$, $\bar k_n = -i(l_n - \bar l_n)$; (13) $S_{mn} = \frac{1}{2}(T_{mn} + T_{nm})$, $A_{mn} = -\frac{i}{2}(T_{mn} - T_{nm})$. Notice that we have $k_0 = -iM_{+-}$, $\bar k_0 = -iM_{12}$, $k_1 = -2iM_{-1}$, $k_{-1} = -2iM_{+1}$, $\bar k_1 = 2iM_{-2}$, $\bar k_{-1} = -2iM_{+2}$, $S_{00} = 2iP_-$, $S_{11} = -2iP_+$, $S_{01} = -iP_1$, 
$A_{01} = -iP_2$. In terms of these generators the BMS algebra reads (14) $[k_n, k_m] = (n-m)k_{n+m}$, $[\bar k_n, \bar k_m] = -(n-m)k_{n+m}$; (15) $[k_n, \bar k_m] = (n-m)\bar k_{n+m}$; (16) $[k_n, S_{pq}] = (\frac{n+1}{2}-p)S_{p+n,q} + (\frac{n+1}{2}-q)S_{q+n,p}$; (17) $[\bar k_n, S_{pq}] = (\frac{n+1}{2}-p)A_{p+n,q} + (\frac{n+1}{2}-q)A_{q+n,p}$; (18) $[k_n, A_{pq}] = (\frac{n+1}{2}-p)A_{p+n,q} + (\frac{n+1}{2}-q)A_{p,q+n}$; (19) $[\bar k_n, A_{pq}] = (\frac{n+1}{2}-p)S_{p+n,q} + (\frac{n+1}{2}-q)S_{q+n,p}$; and one can check that it reproduces the Poincaré algebra (2)–(6) in the sector $|m|, |n|, \ldots \le 1$. Knowing the undeformed BMS algebra, given by the commutators above with trivial Hopf structure, we can turn to the deformation. To this end we use the fact that the lightlike κ-deformation of the Poincaré algebra corresponds to the triangular classical r-matrix $r_{LC} = M_{+-} \wedge P_+ + M_{+a} \wedge P_a$ found in [31], for which the twisting element $F$ is known as the so-called extended Jordanian twist proposed in [32]. It is based on the subalgebra containing six out of ten Poincaré generators, $(M_{+-}, P_+, M_{+1}, P_1, M_{+2}, P_2)$, and has the following form² (² From this one also immediately gets the quantum R-matrix $R = F_{21}F^{-1}$.): (20) $F = \exp(-\frac{i}{\kappa}M_{+i} \otimes P_i)\exp(-iM_{+-} \otimes \log\Pi_+) = \exp(\frac{1}{2\kappa}(ik_{-1} \otimes S_{01} + i\bar k_{-1} \otimes A_{01})) \times \exp(-k_0 \otimes \sum_{j=0}^{\infty}\frac{1}{j+1}(-\frac{iS_{11}}{2\kappa})^{j+1})$, where $\Pi_+ = 1 + \frac{P_+}{\kappa} = 1 + \frac{iS_{11}}{2\kappa}$. We recall [28] that the twisting element $F \equiv a_\alpha \otimes b^\alpha \in A \otimes A$ has to be an invertible, normalized 2-cocycle for the original (i.e. undeformed) algebra $A$, i.e. it has to fulfill (21) $F_{12} \cdot (\Delta_0 \otimes 1)(F) = F_{23} \cdot (1 \otimes \Delta_0)(F)$, $\epsilon(a_\alpha)b^\alpha = 1$, with $F_{12} = a_\alpha \otimes b^\alpha \otimes 1$, etc. It provides the deformed coproduct via the similarity transformation 
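As a quick consistency check (ours, not spelled out in the original beyond the remark above), one can verify that the brackets above reproduce the Poincaré algebra in the lowest sector. For instance, reading the identifications above as $k_0 = -iM_{+-}$ and $k_1 = -2iM_{-1}$:

```latex
% Check of [k_0, k_1] = (0-1)k_1 = -k_1 against the Poincaré relation (3).
[k_0,\,k_1] \;=\; (-i)(-2i)\,[M_{+-},\,M_{-1}]
            \;=\; -2\,\bigl(-i\,M_{-1}\bigr)
            \;=\; 2i\,M_{-1}
            \;=\; -k_1 ,
```

using $[M_{+-}, M_{-1}] = -iM_{-1}$ from (3), in agreement with $[k_n, k_m] = (n-m)k_{n+m}$ for $n = 0$, $m = 1$.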
$\Delta(X) = F\Delta_0(X)F^{-1}$. The twist deformation can be extended from the Poincaré subalgebra to the whole BMS algebra by making repeated use of the Hadamard formula $e^A B e^{-A} = \sum_{n=0}^{\infty}\frac{1}{n!}[A,[A,\ldots[A,B]\ldots]]$ ($n$ nested commutators). As a result, we get the expressions for the coproduct in the superrotation sector³ (³ To simplify these expressions, in the main text we present them to the leading order in $1/\kappa$ only; the complete expressions can be found in the appendix. We also use the standard notation for the Poincaré generators on the right hand side of the formulae below, so as to distinguish them from the superrotations and supertranslations.): (22) $\Delta(k_m) = 1 \otimes k_m + k_m \otimes (1 + \frac{mP_+}{\kappa}) + \frac{m-1}{2\kappa}M_{+-} \otimes S_{1+m,1} + \frac{i}{\kappa}(M_{-1} \otimes (\frac{m+1}{2}S_{m1} + \frac{m-1}{2}S_{0,m+1}) - M_{-2} \otimes (\frac{m+1}{2}A_{m1} - \frac{m-1}{2}A_{0,m+1})) + \frac{i}{2\kappa}(k_{m+1} \otimes (1-m)P_1 + \bar k_{m+1} \otimes (1-m)P_2) + O(\frac{1}{\kappa^2})$ and (23) $\Delta(\bar k_m) = 1 \otimes \bar k_m + \bar k_m \otimes (1 + \frac{mP_+}{\kappa}) + \frac{m-1}{2\kappa}M_{+-} \otimes A_{1+m,1} + \frac{i}{\kappa}(M_{-1} \otimes (\frac{m+1}{2}A_{m1} + \frac{m-1}{2}A_{0,m+1}) - M_{-2} \otimes (\frac{m+1}{2}S_{m1} - \frac{m-1}{2}S_{0,m+1})) + \frac{i}{2\kappa}(k_{m+1} \otimes (1-m)P_2 + \bar k_{m+1} \otimes (1-m)P_1) + O(\frac{1}{\kappa^2})$. In the supertranslation sector we get (24) $\Delta(A_{pq}) = 1 \otimes A_{pq} + A_{pq} \otimes (1 + (p+q-1)\frac{P_+}{\kappa}) + \frac{i}{2\kappa}(1-p)(A_{p+1,q} \otimes P_1 + S_{p+1,q} \otimes P_2) + \frac{i}{2\kappa}(1-q)(A_{p,q+1} \otimes P_1 + S_{p,q+1} \otimes P_2) + O(\frac{1}{\kappa^2})$ and (25) $\Delta(S_{pq}) = 1 \otimes S_{pq} + S_{pq} \otimes (1 + (p+q-1)\frac{P_+}{\kappa}) + \frac{i}{2\kappa}(1-p)(A_{p+1,q} \otimes P_2 + S_{p+1,q} \otimes P_1) + \frac{i}{2\kappa}(1-q)(A_{p,q+1} \otimes P_2 + S_{p,q+1} \otimes P_1) + O(\frac{1}{\kappa^2})$. To obtain the antipodes of the deformed Hopf algebra it is often advantageous to make use of the defining property (26) $m \circ (S \otimes 1) \circ \Delta(x) = \eta(x)\epsilon(x) = m \circ (1 \otimes S) \circ \Delta(x)$. 
Thus, one obtains (27) $S(k_m) = -k_m(1 + \frac{mP_+}{\kappa}) - \frac{m-1}{2\kappa}M_{+-}S_{1+m,1} - \frac{i}{\kappa}(M_{-1}(\frac{m+1}{2}S_{m1} + \frac{m-1}{2}S_{0,m+1}) - M_{-2}(\frac{m+1}{2}A_{m1} - \frac{m-1}{2}A_{0,m+1})) - \frac{i}{2\kappa}(k_{m+1}(1-m)P_1 + \bar k_{m+1}(1-m)P_2) + O(\frac{1}{\kappa^2})$, (28) $S(\bar k_m) = -\bar k_m(1 + \frac{mP_+}{\kappa}) - \frac{m-1}{2\kappa}M_{+-}A_{1+m,1} - \frac{i}{\kappa}(M_{-1}(\frac{m+1}{2}A_{m1} + \frac{m-1}{2}A_{0,m+1}) - M_{-2}(\frac{m+1}{2}S_{m1} - \frac{m-1}{2}S_{0,m+1})) - \frac{i}{2\kappa}(k_{m+1}(1-m)P_2 + \bar k_{m+1}(1-m)P_1) + O(\frac{1}{\kappa^2})$, (29) $S(A_{pq}) = -A_{pq}(1 + (p+q-1)\frac{P_+}{\kappa}) - \frac{i}{2\kappa}(1-p)(A_{p+1,q}P_1 + S_{p+1,q}P_2) - \frac{i}{2\kappa}(1-q)(A_{p,q+1}P_1 + S_{p,q+1}P_2) + O(\frac{1}{\kappa^2})$, and (30) $S(S_{pq}) = -S_{pq}(1 + (p+q-1)\frac{P_+}{\kappa}) - \frac{i}{2\kappa}(1-p)(A_{p+1,q}P_2 + S_{p+1,q}P_1) - \frac{i}{2\kappa}(1-q)(A_{p,q+1}P_2 + S_{p,q+1}P_1) + O(\frac{1}{\kappa^2})$. The most important physical aspect of the coproduct of an algebra is that it describes the rule for composing the associated charge for multiparticle systems. For example, the coproduct $\Delta(P_+)$ of $P_+$ tells us what the total energy of a system of many particles is, each particle having the energy $P_+^{(i)}$; in the case of just two particles, we have from (25) (31) $\Delta P_+ = 1 \otimes P_+ + P_+ \otimes (1 + \frac{P_+}{\kappa}) + O(\frac{1}{\kappa^2})$, so that for the two-particle system the total energy $P_+^{(1+2)}$ is (32) $P_+^{(1+2)} \equiv P_+^{(1)} \oplus P_+^{(2)} = P_+^{(1)} + P_+^{(2)} + \frac{1}{\kappa}P_+^{(1)}P_+^{(2)} + O(\frac{1}{\kappa^2})$. Similarly, the antipode tells us how to define the 'inverse', denoted sometimes by $\ominus X \equiv S(X)$, so that $X \oplus (\ominus X) = 0$. For example, by (30) the 'negative momentum' in direction 1 is (33) $\ominus P_1 = -P_1(1 + \frac{P_+}{\kappa})$. The presence of a non-trivial Hopf structure, whose appearance is to be regarded as a quantum gravity effect, has important physical consequences. It may, for example, shed some new light on the ongoing discussion of the black hole information paradox [33]. 
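To make the composition rule (32) concrete, here is a small numerical sketch (ours, not from the paper); the value of κ and the exact-rational check are purely illustrative. It also exhibits a property a coproduct-derived composition law must have: $p \oplus q = p + q + pq/\kappa$ is associative.

```python
# Numerical sketch of the deformed two-particle composition law, eq. (32):
# p (+) q = p + q + p*q/kappa.  Exact rationals make the associativity
# check exact rather than floating-point approximate.
from fractions import Fraction as F

def oplus(p, q, kappa):
    """Deformed composition of two energies, truncated form of eq. (32)."""
    return p + q + p * q / kappa

kappa = F(10**19)          # illustrative stand-in for a Planck-scale parameter
p, q, r = F(3), F(5), F(7)

left = oplus(oplus(p, q, kappa), r, kappa)    # (p (+) q) (+) r
right = oplus(p, oplus(q, r, kappa), kappa)   # p (+) (q (+) r)
print(left == right)       # True: the composition is associative
```

For energies far below κ the correction term $pq/\kappa$ is tiny, which is the algebraic counterpart of the remark below that the deformation effects are negligible at accessible energy scales.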
Recently, Hawking, Perry, and Strominger [11], [12] argued that the soft charges could dramatically change the standard analysis of black hole evaporation, because a black hole, instead of being nearly bald (having – in the purely Einsteinian case – only mass and angular momentum as its charges), in fact possesses a (potentially infinite) number of soft hairs. Since the corresponding supertranslation charges are all conserved, their presence introduces previously overlooked correlations between early and late Hawking radiation, possibly rendering the black hole evaporation process unitary. In fact, the hard and soft parts are not conserved separately, but the total charge is:

(34) Q^-[f] = Q^+[f], \qquad Q^\pm = Q_s^\pm + Q_h^\pm,

where \pm refers to past and future null infinity and f is an arbitrary function on the sphere. Exchange of charge between the soft and hard parts during the time evolution would introduce the correlations between early and late quanta. This argument was refuted in a series of papers [34], [35], [36], [37], where it is argued that soft hair could not resolve the black hole unitarity problem, because their time evolution is trivial and their conservation laws are automatically satisfied. The soft modes carrying them effectively decouple from the hard ones, and moreover there is a canonical transformation which strips the hard modes of all the supertranslational charges. As a result the supertranslational charges evolve trivially and each charge stays the same all the way through from \mathcal{I}^- to \mathcal{I}^+. One can say that although a black hole has infinitely many soft hairs, they are perfectly combed, so that they do not tangle with the hard ones. The presence of deformation changes this qualitative picture considerably, the reason being the presence of the coproduct structure and the associated modification of the composition laws. Indeed, consider eqs. (24) and (25), describing the deformed coproduct.
It follows that in the associated conservation laws the total supertranslation charge will depend on the momenta carried by hard particles. Indeed, consider the total supertranslation charge of an incoming system consisting of a soft particle, having zero Poincaré momentum and some supertranslation charges T_{pq}, and a hard one carrying only Poincaré momenta P_\mu:

(35) T_{pq}^{Tot} = T_{pq} + \frac{1}{\kappa}(p+q-1) T_{pq} P_+ + \frac{i}{2\kappa}(1-p)\left( T_{p+1,q} P_1 + i T_{q,p+1} P_2 \right) + \frac{i}{2\kappa}(1-q)\left( T_{p,q+1} P_1 + i T_{q+1,p} P_2 \right) + O\!\left(\frac{1}{\kappa^2}\right).

If the coproduct were primitive, T_{pq}^{Tot} would be given purely by the eigenvalue T_{pq} of the soft particle, and likewise for the late quanta. This corresponds to the argument of [34–37], because the supertranslation charge conservation would constrain only soft particles. But in the case of the κ-deformed BMS algebra, the total supertranslation charges of early and late quanta are also functions of the Poincaré momenta of the hard particles. This makes the clear separation of hard and soft parts impossible. Notice that this effect is negligible at the LHC energy scale, since it is suppressed by 1/\kappa, which is expected to correspond to the Planck scale. Thus the fundamental property of Hopf algebras, the presence of a non-trivial coproduct structure, translates into an irreducible entanglement between hard momenta and supertranslational charges. This opens the possibility that early and late Hawking radiation quanta are correlated after all. One may say that as a result of deformation, possibly being associated with quantum gravity effects, the combed hair of the black hole becomes tangled again. Of course much more detailed calculations are required to claim that the level of deformation-induced entanglement is sufficient to solve the black hole information paradox, and we hope to address this in a future publication. Apart from these physical investigations our work suggests a number of possible more technical developments.
Firstly, it should be noticed that the mathematical study of coboundary Lie bialgebra structures on infinite-dimensional Lie algebras and/or their supersymmetric extensions has an extensive literature, e.g. [38–40]. In particular, the case of Jordanian deformations of the (super) Witt and Virasoro algebras has been considered in the literature before, see [41,42] and references therein. This suggests the possibility of studying supersymmetric extensions of the BMS algebra and its deformation. It was established recently [43] that the κ-Poincaré algebra with light-cone deformation is associated with an integrable deformation of the principal sigma model. This indicates that a similar construction might be possible in the case of the κ-deformed BMS algebra. We will address this issue in a future publication. A more systematic study of possible real forms of the obtained deformation might also be of some interest. The one implicitly assumed in the present paper has been induced from the Poincaré reality condition l_m^\dagger = -\bar{l}_m, \bar{l}_m^\dagger = -l_m, T_{mn}^\dagger = -T_{nm}. It manifests itself by the choice of Hermitian Poincaré generators making the twist (20) \dagger-unitary. More precisely, the BMS algebra treated as a complex Lie algebra can admit several real forms; some of them are induced by the reality condition on the embedded o(4,\mathbb{C}) Lie algebra. 4

4 See [44–46] for the discussion of reality conditions on a complex Lie bialgebra as well as at the quantized enveloping algebra level in the case of o(4,\mathbb{C}).

Another issue concerns the choice of possible physical observables in a deformed algebra, which might be better adjusted for the description of relativistic kinematics in a quantum gravity regime. Twist deformation can offer a canonical solution to this problem (see e.g. [47,48] and references therein). Acknowledgements This research was supported by the Polish National Science Center (NCN), project UMO-2017/27/B/ST2/01902.
We thank Michele Arzano for discussion on the physical role of Hopf structures in the context of BMS symmetry.

Appendix A

In this section we list the all-orders expressions for the coproducts and antipodes. The leading order of these expressions was discussed in the main text. For the coproducts we have

(36) \Delta(k_m) = 1 \otimes k_m + k_m \otimes \Pi_+^m + \frac{i(m-1)}{2\kappa} k_0 \otimes \Pi_+^{-1} S_{1+m,1} - \frac{i}{2\kappa} k_1 \otimes \left(\frac{m+1}{2} S_{m1} + \frac{m-1}{2} S_{0,m+1}\right) - \frac{i}{2\kappa} \bar{k}_1 \otimes \left(\frac{m+1}{2} A_{m1} - \frac{m-1}{2} A_{0,m+1}\right) - \frac{m-1}{(2\kappa)^2}\left( k_1 \otimes \Pi_+^{-1} S_{01} S_{1,m+1} + \bar{k}_1 \otimes \Pi_+^{-1} A_{01} S_{1+m,1} \right) + \sum_{n=1}^{\infty} \left(\frac{-i}{2\kappa}\right)^n \binom{n+m-2}{n} \left( k_{m+n} \otimes f_n \Pi_+^m + \bar{k}_{m+n} \otimes \bar{f}_n \Pi_+^m \right)

and

(37) \Delta(\bar{k}_m) = 1 \otimes \bar{k}_m + \bar{k}_m \otimes \Pi_+^m + \frac{i(m-1)}{2\kappa} k_0 \otimes \Pi_+^{-1} A_{1+m,1} - \frac{i}{2\kappa} k_1 \otimes \left(\frac{m+1}{2} A_{m1} + \frac{m-1}{2} A_{m+1,0}\right) - \frac{i}{2\kappa} \bar{k}_1 \otimes \left(\frac{m+1}{2} S_{m1} - \frac{m-1}{2} S_{m+1,0}\right) - \frac{m-1}{(2\kappa)^2}\left( k_1 \otimes \Pi_+^{-1} S_{01} A_{1,m+1} + \bar{k}_1 \otimes \Pi_+^{-1} A_{01} A_{1+m,1} \right) + \sum_{n=1}^{\infty} \left(\frac{-i}{2\kappa}\right)^n \binom{n+m-2}{n} \left( k_{m+n} \otimes g_n \Pi_+^m + \bar{k}_{m+n} \otimes \bar{g}_n \Pi_+^m \right),

where the elements f_n, g_n, \bar{f}_n, \bar{g}_n are defined by the following recurrence relations

(38) f_1 = \bar{g}_1 = S_{01}, \quad g_1 = \bar{f}_1 = A_{01},
(39) f_{n+1} = f_n S_{01} - \bar{f}_n A_{01}, \quad \bar{f}_{n+1} = f_n A_{01} + \bar{f}_n S_{01},
(40) g_{n+1} = g_n S_{01} - \bar{g}_n A_{01}, \quad \bar{g}_{n+1} = g_n A_{01} + \bar{g}_n S_{01}.

Here the usual binomial notation \binom{n+m-2}{n} = \frac{(n+m-2)(n+m-3)\cdots(m-1)}{n!} is in use, and the infinite sums in equations (36), (37) become finite whenever m \leq 1.

In the supertranslation sector we get

(41) \Delta(A_{pq}) = 1 \otimes A_{pq} + A_{pq} \otimes \Pi_+^{(p+q-1)} + \sum_{n=1}^{\infty} \frac{1}{n!} \sum_{\substack{r,s \\ n \geq r+s > 0}} \left(\frac{i}{2\kappa}\right)^{r+s} \left( A_{p+r,q+s} \otimes g_{rs}^{(pq)} \Pi_+^{(p+q-1)} + S_{p+r,q+s} \otimes f_{rs}^{(pq)} \Pi_+^{(p+q-1)} \right)

and

(42) \Delta(S_{pq}) = 1 \otimes S_{pq} + S_{pq} \otimes \Pi_+^{(p+q-1)} + \sum_{n=1}^{\infty} \frac{1}{n!} \sum_{\substack{r,s \\ n \geq r+s > 0}} \left(\frac{i}{2\kappa}\right)^{r+s} \left( A_{p+r,q+s} \otimes h_{rs}^{(pq)} \Pi_+^{(p+q-1)} + S_{p+r,q+s} \otimes j_{rs}^{(pq)} \Pi_+^{(p+q-1)} \right),

where we abbreviate

(43) g_{10}^{(pq)} = (1-p) S_{01}, \quad g_{01}^{(pq)} = (1-q) S_{01},
(44) f_{10}^{(pq)} = (1-p) A_{01}, \quad f_{01}^{(pq)} = (1-q) A_{01},
(45) g_{r+1,s}^{(pq)} = (1-(p+r)) \left( g_{rs}^{(pq)} S_{01} + f_{rs}^{(pq)} A_{01} \right),
(46) g_{r,s+1}^{(pq)} = (1-(q+s)) \left( g_{rs}^{(pq)} S_{01} - f_{rs}^{(pq)} A_{01} \right),
(47) f_{r+1,s}^{(pq)} = (1-(p+r)) \left( g_{rs}^{(pq)} A_{01} + f_{rs}^{(pq)} S_{01} \right),
(48) f_{r,s+1}^{(pq)} = (1-(q+s)) \left( g_{rs}^{(pq)} A_{01} + f_{rs}^{(pq)} S_{01} \right),

and

(49) j_{10}^{(pq)} = (1-p) S_{01}, \quad j_{01}^{(pq)} = (1-q) S_{01},
(50) h_{10}^{(pq)} = (1-p) A_{01}, \quad h_{01}^{(pq)} = (1-q) A_{01},
(51) j_{r+1,s}^{(pq)} = (1-(p+r)) \left( j_{rs}^{(pq)} S_{01} + h_{rs}^{(pq)} A_{01} \right),
(52) j_{r,s+1}^{(pq)} = (1-(q+s)) \left( j_{rs}^{(pq)} S_{01} + h_{rs}^{(pq)} A_{01} \right),
(53) h_{r+1,s}^{(pq)} = (1-(p+r)) \left( j_{rs}^{(pq)} A_{01} + h_{rs}^{(pq)} S_{01} \right),
(54) h_{r,s+1}^{(pq)} = (1-(q+s)) \left( -j_{rs}^{(pq)} A_{01} + h_{rs}^{(pq)} S_{01} \right).

Let us note that for the important Poincaré subalgebra these results coincide with those found in [25]. One can also notice that the first lines in the formulas (36), (37), (41), (42) represent the coproduct deformed by the purely Jordanian twist J = \exp\left( -k_0 \otimes \sum_{j=0}^{\infty} \frac{1}{j+1} \left( \frac{-i S_{11}}{2\kappa} \right)^{j+1} \right), while the remaining lines come from the twist extension (cf. (20)).

For the antipodes it is in general only possible to write down implicit formulas like

(55) S(A_{pq}) = -A_{pq} S\!\left(\Pi_+^{-(1-(p+q))}\right) - \sum_{n=1}^{\infty} \frac{1}{n!} \sum_{\substack{r,s \\ n \geq r+s > 0}} \left( A_{p+r,q+s} S\!\left(g_{rs}^{(pq)}\right) S\!\left(\Pi_+^{-(1-(p+q))}\right) + S_{p+r,q+s} S\!\left(f_{rs}^{(pq)}\right) S\!\left(\Pi_+^{-(1-(p+q))}\right) \right),

(56) S(S_{pq}) = -S_{pq} S\!\left(\Pi_+^{-(1-(p+q))}\right) - \sum_{n=1}^{\infty} \frac{1}{n!} \sum_{\substack{r,s \\ n \geq r+s > 0}} \left( A_{p+r,q+s} S\!\left(h_{rs}^{(pq)}\right) S\!\left(\Pi_+^{-(1-(p+q))}\right) + S_{p+r,q+s} S\!\left(j_{rs}^{(pq)}\right) S\!\left(\Pi_+^{-(1-(p+q))}\right) \right),

and similarly for the superrotations.

To define the q-analog of the Hopf algebra at hand (see [25]) one also needs to make sure that all structures, including the coproduct, are at most polynomial in the generators. In the Poincaré sub-Hopf algebra this is achieved with the help of \Pi_+ and \Pi_+^{-1}. One has to define all the Hopf algebra structures including these auxiliary objects, which replace S_{11} = -2i P_+. The only nontrivial commutators are

(57) [k_m, \Pi_+] = \frac{m-1}{2\kappa} S_{1+m,1},
(58) [\bar{k}_m, \Pi_+] = \frac{m-1}{2\kappa} A_{1+m,1},

and

(59) 0 = [k_m, \Pi_+ \Pi_+^{-1}] = \Pi_+ [k_m, \Pi_+^{-1}] + [k_m, \Pi_+] \Pi_+^{-1}
(60) \Rightarrow [k_m, \Pi_+^{-1}] = -\frac{m-1}{2\kappa} S_{1+m,1} (\Pi_+^{-1})^2,
(61) [\bar{k}_m, \Pi_+^{-1}] = -\frac{m-1}{2\kappa} A_{1+m,1} (\Pi_+^{-1})^2.

The coproducts then take the form

(62) \Delta(\Pi_+) = 1 \otimes 1 + \frac{i}{2\kappa} \Delta(S_{11})
(63) = \Pi_+ \otimes \Pi_+,
(64) \Delta(\Pi_+^{-1}) = \Pi_+^{-1} \otimes \Pi_+^{-1},

and for the counits and antipodes one has

(65) \epsilon(\Pi_+) = \epsilon(\Pi_+^{-1}) = 1,
(66) S(\Pi_+) = \Pi_+^{-1}, \quad S(\Pi_+^{-1}) = \Pi_+.

Note that for general generators of the BMS algebra (apart from the purely Jordanian case) this is not fulfilled in the current notation; e.g., the coproducts for k_m, \bar{k}_m contain infinite power series (in 1/\kappa) for m > 1. Finding a reformulation to remedy the situation, if possible, will be the goal of future research. | [
"BONDI",
"SACHS",
"SACHS",
"WEINBERG",
"STROMINGER",
"HE",
"CHRISTODOULOU",
"STROMINGER",
"STROMINGER",
"COMPERE",
"HAWKING",
"HAWKING",
"LUKIERSKI",
"LUKIERSKI",
"LUKIERSKI",
"LUKIERSKI",
"MAJID",
"KOWALSKIGLIKMAN",
"CIANFRANI",
"BALLESTEROS",
"LUKIERSKI",
"MUDROV",
"BLA... |
96699dc1252546dc973524588848b176_A case of infectious thoracic aortic aneurysm after intravesical Bacillus Calmette-Guérin instillati_10.1016_j.eucr.2021.101574.xml | A case of infectious thoracic aortic aneurysm after intravesical Bacillus Calmette-Guérin instillation therapy for a superficial bladder cancer | [
"Koterazawa, Shigeki",
"Watanabe, Jun",
"Uemura, Yuichi",
"Uegaki, Masayuki",
"Shirahase, Toshiaki",
"Taki, Yoji"
] | Intravesical Bacillus Calmette-Guérin instillation therapy after transurethral resection of bladder tumor is considered the most effective treatment for prophylaxis against the recurrence of high-risk, non-muscle-invasive bladder cancer. However, intravesical Bacillus Calmette-Guérin instillation therapy has some characteristic complications. Here, we report a case of infectious thoracic aortic aneurysm related to prior intravesical Bacillus Calmette-Guérin instillation, which consequently allowed the spread of Mycobacterium bovis into the adjacent lung tissue and its secretion in sputum. | Introduction Intravesical Bacillus Calmette-Guérin (BCG) instillation therapy after transurethral resection of bladder tumor (TURBT) is considered the most effective treatment for prophylaxis against the recurrence of high-risk, non-muscle-invasive bladder cancer. However, BCG is a live attenuated strain of Mycobacterium bovis (M. bovis); therefore, the therapy has some characteristic complications, one of which is tuberculous infectious aneurysm affecting major arteries. Here, we report a case of infectious thoracic aortic aneurysm related to prior intravesical BCG instillation, which consequently allowed the spread of M. bovis into the adjacent lung tissue and its secretion in sputum. Case presentation An 80-year-old man presented to our hospital because of gross hematuria. Cystoscopy and TURBT revealed carcinoma in situ of the bladder. Three weeks after the TURBT, he was given 80 mg BCG intravesical instillations every week for eight weeks. Two weeks after the last treatment, he suffered from long-lasting fever and poor appetite, and he lost 10 kg in the three months after the last BCG intravesical instillation. His laboratory findings were as follows: CRP 4.31 mg/dL, AST 66 U/L, ALT 70 U/L. The urinary sediment analysis and computed tomography (CT) revealed no evidence of infection.
In addition, no evidence of recurrence or progression of bladder cancer was apparent on urine cytodiagnosis and cystoscopy. Furthermore, gastrointestinal endoscopy and echocardiographic inspection did not show any specific finding. Given the lack of definitive evidence of BCG dissemination and the spontaneous improvement of the symptoms, observation without treatment was adopted at that time. Ten months after the intravesical instillations, he developed bloody sputum. CT revealed an infiltrative consolidation in his upper lung ( Fig. 1 ), and M. bovis was detected from his sputum by polymerase chain reaction. Under the diagnosis of pulmonary infection with M. bovis, medical therapy for tuberculosis (isoniazid, rifampicin, and ethambutol) was started. His bloody sputum regressed soon after the introduction of this therapy. After ten months of medical therapy for tuberculosis, there was no improvement on chest X-ray examination. CT revealed an infectious thoracic aortic aneurysm with air in the vascular wall ( Fig. 2 ), and he was diagnosed with tuberculous infectious thoracic aortic aneurysm. He was treated with thoracic endovascular aortic repair (TEVAR), and antibiotic therapy was continued. There was no recurrence at 24 months after treatment with TEVAR. Discussion BCG immunotherapy after TUR is the gold standard treatment for non-muscle-invasive bladder cancer (stage pTa, pT1, pTis). Although its mechanism of action against bladder carcinoma is not fully understood, use of BCG induces a multifaceted inflammatory response that has antitumor effects. Owing to the presence of viable mycobacteria, several adverse reactions such as fever, hematuria, and lower urinary tract symptoms have been reported. 1 On the other hand, rare cases of serious complications, such as interstitial pneumonitis, miliary tuberculosis, hepatitis, sepsis, and infectious aortic aneurysm, have been reported.
2 Among these ectopic disseminations of BCG, infectious aneurysm is a relatively rare entity. To date, only 30 cases of tuberculous infectious aortic aneurysm after intravesical BCG therapy have been reported in the English-language literature. Infectious aortic aneurysm often has a long interval to onset after BCG therapy, with a mean time of approximately 25 months. 3 This long latency of Mycobacterium in the aortic wall might make the diagnosis difficult. Lamm et al. recommend that if fever greater than 38.5 °C persists for more than 24 hours after BCG instillation, initial treatment should be aggressive and utilize isoniazid. 4 3 It was reported that the mean growth rate of the aortic aneurysm was 0.12 cm/year. Therefore, in this case of rapid growth of the aneurysm, several factors such as infection of the aortic wall should be considered. Considering the negative blood culture results, the infiltrative consolidation in the lung adjacent to the aneurysm, and the detection of M. bovis from the sputum, this case may be an infectious aortic aneurysm caused by disseminated BCG infection. 5 In our case, bacteria were disseminated hematogenously and colonized the blood vessel wall during the period of continuous mild fever and anorexia after BCG therapy. Retrospectively, the lack of therapeutic intervention during this latent period might have facilitated subsequent aneurysm formation and invasion of the pathogen into the adjacent lung tissue. Therefore, in addition to preventing dissemination, urologists should be aware of the possibility of long-lasting latency of ectopic M. bovis infection, such as in the aortic wall, so that appropriate therapeutic intervention can be given. Conclusion Infectious aneurysm caused by M. bovis is a complication that is difficult to diagnose because of its long latency. Moreover, our case suggests that delayed therapeutic intervention could consequently lead to contiguous lung infection.
Every urologist should be aware of how to manage this rare complication, to prevent epidemiologic consequences. Funding sources This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Declaration of competing interest The authors have no conflicts of interest to declare. Acknowledgement The authors thank the patient, who participated in this case report, for his important contributions. | [
"BRAUSI",
"BRAUSI",
"LAMM",
"ORII",
"COADY"
] |
3d64886fbf38432f824ce4dfb4cf1100_Bilateral external auditory canal masses following repair of ruptured abdominal aortic aneurysm and _10.1016_j.radcr.2020.10.009.xml | Bilateral external auditory canal masses following repair of ruptured abdominal aortic aneurysm and open decompressive exploratory laparotomy for compartment syndrome: A rare case of spontaneous bilateral otorrhagia | [
"Teh, Richard Andrew",
"Hoshal, Steven",
"Hoshal, Gillian L.",
"Ozturk, Arzu",
"Chang, Jennifer",
"Assadsangabi, Reza",
"Ivanovic, Vladimir",
"Bobinski, Matthew",
"Raslan, Osama A."
] | Very few cases of spontaneous otorrhagia (SO) following nonotolaryngologic surgery have ever been reported in the surgical literature, and none in the radiographic literature. Of the surgical cases reported, SO occurred in the perioperative period following laparoscopic surgeries in the Trendelenburg position. We report the first case of spontaneous bilateral otorrhagia, which presented as bilateral external auditory canal masses following endovascular surgery and open decompressive laparotomy in a 60-year-old male with a prior history of hypertension and smoking. We seek to inform radiologists that SO can present on neck imaging as external auditory canal masses as a complication of nonotolaryngologic surgery away from the imaged field of view. | Case report A 60-year-old hypertensive male smoker was initially seen at an outside institution with a chief complaint of sudden-onset epigastric pain which radiated to his back. Computed tomography (CT) of the abdomen was performed, which showed an infrarenal abdominal aortic aneurysm with a mural hematoma. During his stay in the outside emergency department he became hypotensive, requiring transfusion; he received 8 units of packed red blood cells and 2 units of fresh frozen plasma. His blood pressure stabilized, and he was subsequently transferred to our institution for urgent surgery. The patient had no history of prior otorrhagia, ear infections, or ear surgery. At our institution, the patient underwent emergent endovascular repair of the ruptured abdominal aortic aneurysm. Following endovascular surgery, the patient underwent open decompressive laparotomy for intra-abdominal compartment syndrome secondary to an intra-abdominal hematoma. The patient received intraoperatively 10 units of packed red blood cells (pRBCs), 5 units of fresh frozen plasma, 2 units of platelets, and 4 L of crystalloids. During the laparotomy the patient developed unexpected bleeding within his bilateral ear canals and the procedure was terminated.
The patient was transferred to the SICU intubated and sedated. The following morning, ENT consultation revealed bilateral hemotympanum, tympanic membrane perforations, and otorrhagia ( Figs. 1 A and B). A CT examination of the temporal bones was requested to rule out alternative underlying middle ear lesions, vascular malformations, glomus tumors, or temporal bone fractures. Noncontrast CT of the temporal bones ( Fig. 1 C and D) demonstrated well-defined, broad-based soft tissue lesions in the bilateral external auditory canals consistent with hemorrhage. There was no aberrant vascular anatomy, soft tissue abnormality, or bony erosion. The patient was subsequently discharged from the hospital without further otolaryngologic complications. At a subsequent audiology clinic visit 2 months later, the patient had bilateral symmetric mild-to-moderately severe sensorineural hearing loss consistent with presbycusis and normal tympanometry. The patient denied any otalgia, recurrent otorrhagia, or subjective loss in hearing. Otoscopic examination revealed dried blood products in the external auditory canal. Discussion This is the first report of spontaneous otorrhagia (SO) during nonotolaryngologic surgery in the radiologic literature. Most reported cases occurred after laparoscopic surgery and in the Trendelenburg position, and are described in surgical, anesthesiology, and otology journals. Our case is the third reported to occur in the supine position and with nonlaparoscopic surgery, and the first to happen after endovascular surgery. The etiology of SO is unclear. Theories include venous hypertension, changes to arterial blood pressure, pneumoperitoneum, and increased middle and inner ear pressures. Contributing factors may include preexisting hypertension, anticoagulation, and advanced age, all of which were present in our case. In a review and case report published by Aloisi et al, advanced age is suggested as a potential contributing factor.
Seven of 10 patients in prior reports were over the age of 59; our patient was 60 years old [1] . Administration of drugs or factors that may increase the risk of bleeding complications was implicated as well. Our patient's intraoperative INR was 2.51 and may have contributed to the spontaneous otorrhagia [1] . The head-down tilt of Trendelenburg positioning results in an increase in venous return from the lower body because of gravity. There is an increase in central blood volume and mean arterial pressure, with a resultant increase in cardiac output. This may lead to increased venous congestion in the head and neck area, interstitial edema, and vascular congestion [2–7] . Complications linked to procedures performed in the Trendelenburg position vary, with the most commonly reported complication involving the head and neck area being otorrhagia [2] . Prolonged deep Trendelenburg positioning has also been associated with increased intraocular pressures, increased optic nerve sheath diameters, and, in several reported cases, ischemic optic neuropathy [2 , 3] . Increases in middle ear pressure may be an alternative mechanism. Vascular congestion in the middle ear canal may lead to desquamation and bleeding [4] . Increases in arterial and venous pressures can cause rupture of subcutaneous capillaries and may result in otorrhagia [5 , 6] . Impedance of the Eustachian tube's ability to maintain patency between the nasopharynx and middle ear may lead to tympanic membrane rupture. In a case review published by Maerz and Gainsburg [ 7 ], a 63-year-old man underwent robotic-assisted laparoscopic prostatectomy with bilateral lymph node dissection and developed right-sided otorrhagia and tympanic membrane rupture. The Eustachian tube is partially collapsed at rest and opens via contraction of the tensor veli palatini muscle. Venous congestion from Trendelenburg positioning can cause dysfunction in this mechanism and disturb the pressure equilibrium with the tympanic membrane [7] .
Radiologists should be aware that perioperative SO may occur following remote non-otolaryngologic surgery and that their main role is to exclude other causes of otorrhagia. Knowledge of this rare complication will aid in preventing undue investigations and management. Consent for publication All patient data has been removed. Images used have no identifying information. No informed consent is required to publish. | [
"ALOISI",
"ARVIZO",
"OWENS",
"MAERZ",
"CHAN",
"BASLER",
"MAERZ"
] |
ae2e7e8a492d4644a39584f7a43db86d_Predicting Glaucoma Progression to Surgery with Artificial Intelligence Survival Models_10.1016_j.xops.2023.100336.xml | Predicting Glaucoma Progression to Surgery with Artificial Intelligence Survival Models | [
"Tao, Shiqi",
"Ravindranath, Rohith",
"Wang, Sophia Y."
] | Purpose
Prior artificial intelligence (AI) models for predicting glaucoma progression have used traditional classifiers that do not consider the longitudinal nature of patients’ follow-up. In this study, we developed survival-based AI models for predicting glaucoma patients' progression to surgery, comparing performance of regression-, tree-, and deep learning–based approaches.
Design
Retrospective observational study.
Subjects
Patients with glaucoma seen at a single academic center from 2008 to 2020 identified from electronic health records (EHRs).
Methods
From the EHRs, we identified 361 baseline features, including demographics, eye examinations, diagnoses, and medications. We trained AI survival models to predict patients’ progression to glaucoma surgery using the following: (1) a penalized Cox proportional hazards (CPH) model with principal component analysis (PCA); (2) random survival forests (RSFs); (3) gradient-boosting survival (GBS); and (4) a deep learning model (DeepSurv). The concordance index (C-index) and mean cumulative/dynamic area under the curve (mean AUC) were used to evaluate model performance on a held-out test set. Explainability was investigated using Shapley values for feature importance and visualization of model-predicted cumulative hazard curves for patients with different treatment trajectories.
Main Outcome Measures
Progression to glaucoma surgery.
Results
Of the 4512 patients with glaucoma, 748 underwent glaucoma surgery, with a median follow-up of 1038 days. The DeepSurv model performed best overall (C-index, 0.775; mean AUC, 0.802) among the models studied in this article (CPH with PCA: C-index, 0.745; mean AUC, 0.780; RSF: C-index, 0.766; mean AUC, 0.804; GBS: C-index, 0.764; mean AUC, 0.791). Predicted cumulative hazard curves demonstrate how the models could distinguish between patients who underwent early surgery and patients who underwent surgery after > 3000 days of follow-up or no surgery.
Conclusions
Artificial intelligence survival models can predict progression to glaucoma surgery using structured data from EHRs. Tree-based and deep learning-based models performed better at predicting glaucoma progression to surgery than the CPH regression model, potentially because of their better suitability for high-dimensional data sets. Future work predicting ophthalmic outcomes should consider using tree-based and deep learning-based survival AI models. Additional research is needed to develop and evaluate more sophisticated deep learning survival models that can incorporate clinical notes or imaging.
Financial Disclosure(s)
Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article. | Glaucoma is a progressive disease of the optic nerve that causes vision loss and irreversible blindness. However, the clinical trajectory of glaucoma can vary dramatically between patients, with some patients progressing quickly to surgery and others remaining stable for many years. Although elevated intraocular pressure (IOP) is a major risk factor for glaucoma progression, many other ancillary factors crucially influence or are indicators of the clinical trajectories of patients with glaucoma (e.g., medication use, eye examination findings, or ancillary testing results). 1 Thus, identifying patients who are at high risk and predicting glaucoma progression is complex and requires multifactorial data inputs, rendering the task ripe for artificial intelligence (AI) prediction algorithms. 2–4 Some previous work has used AI models to predict glaucoma progression using electronic health records (EHRs). These include traditional machine learning classification models, such as logistic regression, random forest, and support vector machines, for structured data from EHRs, 5 , as well as deep learning models using natural language processing for free-text notes from EHRs. 6 However, most AI predictive models are classifiers that provide a binary outcome prediction and do not explicitly consider the longitudinal nature of follow-up with patients. 7 Survival analyses are longitudinal analyses commonly used in traditional inferential studies but are not as common for developing AI prediction models, especially for ophthalmology. Cox regression is the most widely used model for longitudinal analysis. Still, it operates under many restrictive assumptions, such as the assumption of proportional hazards and of uncorrelated features. 
These restrictions may be difficult to satisfy, especially for large data sets with many features, such as those typically used for AI predictive models. Alternative tree-based survival model approaches, such as random survival forest (RSF) and gradient-boosting survival (GBS), have shown superior performance in diagnosing several diseases, including breast cancer, lung cancer, and brain tumors, such as glioma. 8 Deep learning approaches to survival analyses, such as DeepSurv 11 and DeepHit, 12 have also achieved outstanding results in multiple studies. 8–11 The purpose of the present study was to predict glaucoma progression to surgery using survival-based AI models and to compare the performance of different approaches. In this article, we applied regression-based (Cox regression), tree-based (RSF and GBS), and deep learning–based (DeepSurv) survival AI models to our glaucoma data set to evaluate the performance of these 4 models and their associated analytic approaches. Methods Study Population and Cohort Construction We identified from EHRs 4512 patients with glaucoma seen by the Stanford Department of Ophthalmology from 2008 to 2020. These included patients who had undergone incisional glaucoma surgery (Current Procedural Terminology codes 66150, 66155, 66160, 66165, 66170, 66172, 66174, 66175, 66179, 66180, 66183, 66184, 66185, 67250, 67255, 0191T, 0376T, 0474T, 0253T, 0449T, 0450T, 0192T, 65820, 65850, 66700, 66710, 66711, 66720, 66740, 66625, and 66540) and patients who had ≥ 2 instances of a glaucoma diagnosis but did not undergo glaucoma surgery (International Classification of Diseases, Tenth Revision [ICD-10] codes H40- [excepting H40.0-], H42-, Q15.0-, and their ICD-9 equivalents). At least 120 days of baseline follow-up after the first visit (and before surgery for the surgical patients) was required to allow for adequate baseline testing to be gathered on new patients, a process which could take several visits at our center.
The cohort was split into training, validation, and test sets in a 6:3:1 ratio. All models were trained on the training set, with hyperparameters tuned on the validation set or by crossvalidation on the training set, and final results were reported on the test set. This study adheres to the tenets of the Declaration of Helsinki and was approved by the Stanford Institutional Review Board with a waiver of informed consent. Feature Engineering The structured features considered in the modeling included demographics, eye examination findings, diagnoses, and medication information from the baseline period, defined as the first 120 days after the initial ophthalmology visit. All baseline features were converted into either categorical variables or continuous numeric variables. Categorical variables included all diagnoses, medications, gender, race, and ethnicity. Race and ethnicity were included as defined in the EHR of the patient. Numeric variables included age at baseline, best visual acuity for both eyes during the baseline period, and maximum IOP for both eyes during the baseline period. For categorical variables, features with < 1% variance were removed; for numeric variables, missing values were filled in using column-mean imputation. A total of 361 features were included in the input data set. The follow-up time was defined as the number of days from the baseline date to either surgery or the last visit. Modeling We developed AI survival models using regression-, tree-, and deep learning–based approaches to predict the time of patients with glaucoma progression to surgery. Regression-based models predict outcomes by constructing linear combinations of multiple predictive factors, in contrast with tree-based and deep learning–based models that capture highly nonlinear relationships between predictive factors and predicted outcomes. We also sought to characterize the most important features contributing to the prediction. 
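The preprocessing steps described above can be sketched in a few lines. The snippet below (function names and toy values are illustrative, not from the study's code) shows column-mean imputation for numeric features and a shuffled 6:3:1 train/validation/test split:

```python
# Sketch of the described preprocessing: column-mean imputation for numeric
# features and a 6:3:1 train/validation/test split. Names are illustrative.
import random

def impute_column_mean(rows):
    """Fill None entries in each numeric column with that column's mean."""
    n_cols = len(rows[0])
    means = []
    for j in range(n_cols):
        vals = [r[j] for r in rows if r[j] is not None]
        means.append(sum(vals) / len(vals) if vals else 0.0)
    return [[r[j] if r[j] is not None else means[j] for j in range(n_cols)]
            for r in rows]

def split_6_3_1(patients, seed=0):
    """Shuffle and split into 60% train, 30% validation, 10% test."""
    rng = random.Random(seed)
    idx = list(range(len(patients)))
    rng.shuffle(idx)
    n = len(patients)
    n_train, n_val = n * 6 // 10, n * 3 // 10
    train = [patients[i] for i in idx[:n_train]]
    val = [patients[i] for i in idx[n_train:n_train + n_val]]
    test = [patients[i] for i in idx[n_train + n_val:]]
    return train, val, test

# Toy feature matrix: two numeric columns with missing entries
rows = [[21.0, None], [None, 0.5], [25.0, 0.7]]
imputed = impute_column_mean(rows)  # None replaced by column means
```

In practice this kind of imputation and splitting would be done with standard library tooling (e.g., a dataframe's column means), but the logic is exactly as above.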
Two Cox regression models were constructed, one of which incorporated principal component analysis (PCA). We built 2 tree-based survival models using RSF and GBS. A deep learning survival model was also developed and evaluated.

Cox Proportional Hazards Model

The Cox proportional hazards (CPH) model is a regression model that uses the hazard rate as the measure of risk, or probability of occurrence, of a certain event. The CPH model has several important assumptions, including independence of survival times, absence of correlation between features, a multiplicative relationship between the predictors and the hazard, and a constant hazard ratio. The following formula illustrates the associations between risk factors and the outcome:

ln{h(t)/h0(t)} = b1X1 + b2X2 + ... + bpXp

where h(t) is the expected hazard at time t; h0(t) is the baseline hazard; X1, X2, ..., Xp are the predictors or risk factors; and b1, b2, ..., bp are regression coefficients quantifying the associations between the predictors and the outcome.

1. CPH: the baseline model was a CPH with regularization, commonly known as penalized Cox regression. Hyperparameters, including the number of iterations and the penalty term weight (alpha), were optimized using threefold cross-validated grid search.

2. Cox proportional hazards model with principal component analysis (PCA_CPH): because there were numerous input features, to reduce the dimensionality of the input feature space, we built a machine learning pipeline with PCA added as the first step. The PCA-derived components were then input into the CPH model. Hyperparameters, including the number of principal components, the number of iterations, and alpha, were fine-tuned using threefold cross-validated grid search.

RSF Model

The RSF model, an extension of the random forest model, ensembles a number of survival trees and uses averaging to reduce predictive variance and control overfitting for time-to-event data.
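The threefold cross-validated grid search used throughout this study follows a common pattern. The skeleton below is a generic pure-Python sketch of that procedure, where the `fit` and `score` callables stand in for model training and evaluation (e.g., C-index); none of this is sksurv API:

```python
from itertools import product

def grid_search(train, fit, score, grid, k=3):
    """Minimal k-fold cross-validated grid search. `grid` maps each
    hyperparameter name to a list of candidate values; every combination
    is scored by mean validation performance over k folds."""
    folds = [train[i::k] for i in range(k)]
    best_params, best_score = None, float("-inf")
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        fold_scores = []
        for i in range(k):
            held_out = folds[i]
            fit_set = [x for j, f in enumerate(folds) if j != i for x in f]
            model = fit(fit_set, **params)
            fold_scores.append(score(model, held_out))
        mean_score = sum(fold_scores) / k
        if mean_score > best_score:
            best_params, best_score = params, mean_score
    return best_params, best_score
```

With toy callables (a "model" that is just its hyperparameter, scored by closeness to 2), the search correctly selects `{"c": 2}`.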
We used the RandomSurvivalForest method from the sksurv package (version 0.17.1) to build the RSF model. 13 Using a threefold cross-validated grid search, we fine-tuned the number of survival trees, the maximum depth of each tree, the minimum number of samples required to split an internal node, and the maximum number of features to consider when looking for the best split. 14

GBS Model

A GBS model is an extension of traditional gradient-boosting models. GBS implements gradient boosting with Cox proportional loss, using regression trees as the base learner; each regression tree is fit on the negative gradient of the loss function. We used the GradientBoostingSurvivalAnalysis method with partial likelihood loss from the sksurv package and tuned the learning rate (shrinkage of the contribution of each regression tree) as well as the abovementioned hyperparameters using threefold cross-validated grid search. 14

Deep Learning Survival Model

To investigate the performance of deep learning survival models compared with regression-based and tree-based models, we trained DeepSurv, 11 a deep feed-forward neural network that uses multiple fully connected layers to estimate the cumulative hazard of the outcome. Baseline input features from the data set are fed through multiple hidden layers to an output layer consisting of a single node with linear activation, whose value ĥθ(x) is the log-risk hazard estimate. In the present study, we trained the DeepSurv model with 2 hidden layers and dropout. Hyperparameters, including the number of nodes, dropout rate, training batch size, and learning rate, were optimized on the validation set.

Evaluation Metrics

Model performance was evaluated using the concordance index (C-index) and mean cumulative/dynamic area under the curve (AUC) score. The concordance index is the standard performance metric for survival models.
It measures the rank correlation between predicted risk scores and observed time points; in other words, it gives the probability of concordance between predicted and observed survival. The mean AUC score is the mean value of all time-dependent AUC scores across the study duration. Because, at any given time point in the study, the number of patients who have experienced the outcome and the number remaining at risk vary, the receiver operating characteristic (ROC) curve is expected to vary among different study time points. Thus, the ROC curve is time dependent. The time-dependent AUC score is the area under the time-dependent ROC curve, which is calculated using cumulative cases and dynamic controls at a given time point t: cumulative cases are all individuals who underwent glaucoma surgery before or at time t (ti ≤ t), whereas dynamic controls are those with ti > t. By computing the area under the cumulative/dynamic ROC at time t, we can determine how well a model can distinguish patients who require surgery by a given time point (ti ≤ t) from patients who do not require surgery at or before this time point (ti > t). 15,16 All models' hyperparameters were tuned for optimal C-index, with the final evaluation performed on the test set. A summary of tuned hyperparameters for each model is shown in Table S1 (available at www.ophthalmologyscience.org).

Explainability and Interpretability

To better explain the models' predictions, we investigated the most important features of the regression-, tree-, and deep learning–based models using SHapley Additive exPlanations, a model-agnostic method derived from coalitional game theory. Because the calculation of Shapley importance values does not depend on the underlying model architecture, this method enables a fair comparison of important features across different types of models.
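The C-index on which all models were tuned can be computed from scratch. This is a simplified sketch of Harrell's concordance index that handles censoring only through pair comparability; tied risk scores count as half-concordant:

```python
def concordance_index(times, events, risks):
    """C-index: among comparable pairs (the earlier patient experienced
    the event), the fraction where the earlier-event patient also
    received the higher predicted risk."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i had the event strictly earlier
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A perfect risk ordering yields 1.0, a fully reversed ordering 0.0, and uninformative (tied) scores 0.5.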
17,18 We also plotted cumulative hazard curves of different models for the same group of patients with glaucoma to investigate how the models predict risks for surgical and nonsurgical patients. We selected 3 patients from the test set to highlight the models' interpretability by plotting cumulative hazard curves: 1 patient who underwent early surgery at day 3 from the baseline time; 1 patient who had late surgery at day 3472; and 1 patient who did not undergo surgery during their follow-up period as of day 3330.

Results

Population Characteristics

Of the 4512 patients with glaucoma included in the study, 748 progressed to require glaucoma surgery. The median follow-up time was 601 days for surgical patients and 1139 days for nonsurgical patients. Population characteristics are summarized in Table 2. White and Asian racial groups constituted the predominant population in this cohort. The patients' mean age was approximately 65 years, the mean logarithm of the minimum angle of resolution visual acuity for both eyes at baseline was approximately 0.43, and the mean baseline IOP for both eyes was approximately 18.3 mmHg.

Model Performance

Model performance is summarized in Table 3. In general, tree-based and deep learning–based models performed better than regression-based models, achieving higher C-index and mean AUC scores. For the regression-based models, lowering the dimension of the input features via PCA increased the C-index and the time-dependent AUC scores (Table 3). Figure 1A illustrates that PCA_CPH outperformed the original CPH model at almost every time point in terms of AUC score. Figure 1B shows that the RSF and GBS models had similar time-dependent AUC scores. Although the RSF model had slightly higher C-index and mean AUC scores than the GBS model, the 2 tree-based models both outperformed the regression-based survival models. Figure 1B also shows more clearly that the RSF model had a higher AUC than the GBS model at most time points.
Figure 1 C shows that the DeepSurv model had similar performance to the RSF and GBS models. Among these 3 models, RSF had the best time-dependent AUC score, but DeepSurv had the highest C-index, as well as slightly better performance than GBS. Explainability Predicted cumulative hazard curves for patients from the test set with different outcomes (no surgery or surgery at different time points) generated by different survival AI models are shown in Figure 2 . All models appropriately predicted the steepest rise in cumulative hazard of surgery for the patient who actually underwent surgery early in the follow-up period. Most models were also able to discriminate between a patient who had not yet undergone glaucoma surgery as of their last follow-up time at day 3330 and a patient who underwent surgery at day 3472, with the patient who eventually underwent surgery having a higher predicted cumulative hazard. We further studied the most important features relied upon by the CPH and GBS models to predict the outcome by calculating the Shapley values of features across the test set. The most important features were similar across models, including features from the broad categories of demographics, medications, diagnoses, and examination components ( Table 4 ). Reassuringly, clinical features, such as IOP and visual acuity, were important predictive factors in these models, as was the usage of many common glaucoma medications. Discussion In the present study, we developed and compared the performance of different survival-based AI models to predict glaucoma progression to surgery using structured EHR data from patients with glaucoma. We compared regression- and tree-based survival models, as well as a deep learning AI survival model. According to our evaluation metrics, we found that the deep learning model DeepSurv had the best overall performance, followed by the tree-based RSF and GBS models. 
DeepSurv, RSF, and GBS models have the advantages of robustness against multicollinearity and the ability to discern highly nonlinear relationships among predictors without prior feature selection. Previous research has shown that deep learning survival models perform better, especially on high-dimensional data sets. 19,20 Although DeepSurv showed the best overall performance, explainability analyses revealed important features that were common to all our models' predictions, such as age, visual acuity, and the use of glaucoma medications. Our work developing and comparing survival-based predictive models is relatively novel in the ophthalmology AI literature. Most previous models for predicting glaucoma progression are structured as classification models, whether they operate on EHR data or imaging data. 5–7,21 The outputs of classification models can be interpreted as the probability of experiencing the predicted event, which is relatively simple for users to understand and potentially act upon. However, classification models do not account for the longitudinal nature of patient outcomes. The challenges of modeling longitudinal data with loss to follow-up and censoring are well known in statistical inference; thus, Kaplan-Meier survival analysis and CPH models are commonly employed for inferential studies that focus on quantifying the relationship between a predictor and an outcome. Survival-based prediction models have begun to be explored for predicting outcomes in other medical domains, including cancer survival prediction 19,22 and dementia prediction. 23 Overall, the performance range of our survival AI models was comparable to similar studies utilizing inputs from EHRs, including Kim et al 20 for oral cancer (C-index, 0.694–0.781) and Spooner et al 19 for dementia prediction (C-index up to 0.828). In addition, our results were similar to those from studies of survival-based AI models in other medical fields.
We also found that tree-based methods and deep learning models outperformed regression-based models, potentially because of their suitability for high-dimensional data sets. 19,20,23 Similar reasons can explain the improved performance after adding PCA to the CPH model, further illustrating that dimensionality reduction is crucial for prediction models using complex input features from EHRs. 24 A potential drawback of survival-based AI models is that their prediction outputs seem less interpretable to the user than the simple probabilities of experiencing the outcome that classification models provide. Thus, to provide better insight into model outputs, we showed the cumulative hazard predicted by the different models for patients who underwent early surgery, late surgery, or no surgery. These curves illustrate the stark differences in predicted cumulative hazard between a high-risk patient who underwent surgery early in their clinical trajectory (steeply rising cumulative hazard curve) and patients who did not undergo surgery even after long periods of follow-up (slowly rising cumulative hazard curves). Notably, most models were also able to distinguish between a patient who had surgery after approximately 10 years and a patient who did not have surgery after 10 years of follow-up, predicting a slightly higher cumulative hazard in the former. Thus, although it may seem simpler to interpret a predicted probability of glaucoma progression to surgery from a traditional classification model, this information does not provide an expected time horizon, and there may not be any inherent relationship between a predicted probability of surgery and its temporal nearness. One potential method of incorporating temporal information into a classification model could be a multiclassification approach that provides probabilities of glaucoma surgery occurring over discrete future time windows.
However, this approach may not naturally account for censoring and may produce probabilities for adjacent time windows that are not related, plausible, or easily interpretable. Future research could focus on developing classification models that address these limitations or on combining classification with survival models. A cumulative hazard output of a survival AI model may therefore be beneficial for clinical decision support tools that predict future events. In addition, although traditionally criticized as opaque and unexplainable, AI tree-based and even deep learning–based models can retain the explainability benefits of the more commonly favored Cox regression models. It is important to note that, although we can shed light on which features exert a stronger influence on prediction in different models, this does not necessarily suggest a true biologic relationship between the features and the outcome, as would be the goal in a hypothesis-driven inference study. Nevertheless, it is striking that among the most important predictors of glaucoma surgery are factors that clinicians themselves would consider important, such as visual acuity, age, and the use of various glaucoma medications, including second- and third-line medications such as dorzolamide and brimonidine. These reassuring explainability studies serve to increase the apparent trustworthiness of these AI prediction models. Despite the above advantages of this study, there are several limitations. The data we used are from a single clinical center, and the models may not generalize well to data from other sites. However, in service of personalized medicine, a single fully generalized algorithm that applies universally is not likely to be the goal; rather, these approaches can and should be fine-tuned to each population they may be deployed upon. Another limitation of single-center data is that patients may seek care at other institutions.
In our study, the models were designed to predict the first glaucoma surgery performed at our institution for new patients. To address this limitation, future research could explore the use of natural language processing to extract external surgery information from clinical notes. Single-center longitudinal studies may also be limited by censoring events, such as death or patient departure from our clinical center. An additional challenge in this study was the imbalance in the ratio of surgical to nonsurgical patients with glaucoma, which in our data set was approximately 1:5 and posed challenges to our models. We also did not incorporate time-varying features in this analysis, which contained hundreds of inputs. In addition to the resultant challenges in cohort construction with multiple time-varying features, this approach would reduce the ability to perform dimensionality reduction using PCA or elimination of near-zero-variance features and would introduce assumptions during inference that may not be tenable. In addition, our analysis only included structured input data from EHRs. Although these included important measures such as visual acuity and IOP, other measures, such as corneal thickness and refractive error, had a high degree of missingness. Furthermore, unstructured data, such as images and clinical notes, contain a wealth of information about a patient's prognosis. Further studies combining features from these 2 additional modalities of data can be undertaken, using approaches such as embedding data extracted from images or text into the baseline features. In conclusion, identifying which patients with glaucoma are at high risk of progressing is an important aspect of clinical care. In our study, observational clinical data were collected from a single academic center, and multiple survival AI models were developed to predict which patients progress to glaucoma surgery.
After comparing evaluation results across the different models, we concluded that the neural network model DeepSurv and the tree-based survival AI models outperformed the regression-based models. Future research can explore larger and more diverse data sets from multiple clinics and integrate multiple modalities of input data, such as text or imaging.

Supplementary Data

Table S1
"CHAUHAN",
"FRIEDMAN",
"NEWMANCASEY",
"RIVERA",
"BAXTER",
"BAXTER",
"WANG",
"QIU",
"WIECZORKOWSKA",
"MA",
"KATZMAN",
"LEE",
"ISHWARAN",
"POLSTERL",
"STECK",
"SCHMID",
"LUNDBERG",
"KIM",
"SPOONER",
"LI",
"THAKUR",
"MOHAMMED",
"KVAMME"
] |
Performance evaluation of XY all-male hybrids derived from XX female Channa argus and YY super-male Channa maculata

Mi Ou, Kun-Ci Chen, Qing Luo, Hai-Yang Liu, Ya-Ping Wang, Bai-Xiang Chen, Xin-Qiu Liang, Jian Zhao
Snakeheads are economically important freshwater fish in China, in which males grow much faster than females and individual size determines the trading price. All-male NBS (northern snakehead (Channa argus, NS) ♀ × blotched snakehead (Channa maculata, BS) ♂) were produced by combining sex control, a sex-specific molecular marker, and hybridization of YY super-male BS with normal XX female NS. In this study, the culture performance of all-male NBS was evaluated, including yield, survival rate, growth rate, sex ratio, size uniformity, feed conversion ratio, and low-temperature resistance. The results showed that all-male NBS exhibited superior productivity traits compared with the existing cultured snakehead varieties. The average body weight of all-male NBS was 23.1–38.8 % heavier than that of mixed-sex NBS and 16.8–34.0 % heavier than that of inbred NS; the average daily gain of all-male NBS was 28.4–39.3 % faster than that of mixed-sex NBS and 18.6–34.1 % faster than that of inbred NS. The male ratio of all-male NBS was more than 93.0 %, with only a small proportion of female or bisexual individuals. The proportion of all-male NBS with body weight above 1 kg was 91.3–96.4 %, far higher than that of mixed-sex NBS or inbred NS (only 62.0–75.8 %). Therefore, all-male NBS individuals are larger and more uniform. The feed conversion ratios of all-male NBS were 12.5–17.6 % lower than those of mixed-sex NBS fed artificial compound feed and 72.3 % lower than those of inbred NS fed iced fresh fish. In addition, the low-temperature resistance of all-male NBS was high, with only 4.1 % mortality vs. 28.5–37.4 % in other hybrids. In snakehead trading, large individuals are more favored, and the price of snakeheads above 1 kg is approximately 40 % higher than that of individuals in the range of 0.5–2 kg. Therefore, the large and uniform all-male NBS population could bring tremendous economic benefits, because input costs fall while output profits rise.
The large and uniform all-male NBS population is thus quite promising for snakehead market applications.

1 Introduction

Sexual dimorphism, in which males and females exhibit different characteristics, is a common phenomenon in many sexually reproducing animals. In aquaculture practice, sexual dimorphism is widespread in fish species and involves many aspects of biology, including body shape (Rosenthal and Evans, 1998), color (Casalini et al., 2009; Liang et al., 2020), size (Mei and Gui, 2015), and physiological behavior (Ventura, 2018; Zupanc, 2020). Growth is one of the most valuable economic traits for genetic improvement in fish. Because some fish species display sex-specific differences in growth rate and body size, monosex culture of either all-male or all-female stocks has significant economic implications in aquaculture. Monosex stocks have many advantages, such as the high economic value brought by rapid growth, uniform size, and low waste of energy on gonadal development (Ng and Wang, 2011). Additionally, monosex culture reduces the impact of phenotypic sex on product quality due to a lack of spawning (Budd et al., 2015). Furthermore, when culturing one sex, the culture conditions can be tailored to better suit the energetic demands of the cultured gender (Ventura, 2018). Monosex culture is practiced in many fish species by manual segregation, interspecific hybridization, environmental manipulation, pulse-electric field induction, chromosome ploidy manipulation, direct hormone-induced sex reversal to the desired sex, and sex reversal to the undesired sex, which will produce the desired sex in its progeny (Budd et al., 2015; Bunthawin et al., 2015; Abo-Al-Ela, 2018; Ventura, 2018).
Interspecific hybridization produces monosex offspring, such as all-female striped bass (Morone saxatilis) × yellow bass (Morone mississippiensis) hybrids (Wolters and DeMay, 1996) and all-male Nile tilapia (Oreochromis niloticus) × blue tilapia (Oreochromis aureus) hybrids (Marengoni et al., 1998). Chromosome ploidy manipulation is a useful tool for monosex production in fish species. All-female triploid Atlantic salmon (Salmo salar) and O. mossambicus were produced using sperm from sex-reversed gynogenetic males and triploidy induced by heat shock (Benfey and Sutterlin, 1984; Varadaraj and Pandian, 1990). Similarly, all-female triploid crucian carp (Carassius auratus) were produced by combining artificial gynogenesis, sex reversal, and diploid-tetraploid hybridization (Luo et al., 2011). The sex of fish is easily influenced by environmental manipulation involving social factors, temperature, pH, and density. Rubin (1985) reported that acidic waters caused 100 % male populations in Xiphophorus helleri and 90 % male populations in Poecilia melanogaster. Azuma et al. (2004) found that all-female sockeye salmon (Oncorhynchus nerka) were produced when the temperature was raised from 9 °C to 18 °C for long periods during the embryonic and alevin stages. Xiong et al. (2020) indicated that all-female yellow catfish (Pseudobagrus fulvidraco) could be obtained by aromatase inhibitor treatment combined with 34 °C high-temperature treatment. For decades, 90–100 % all-female and all-male populations have been obtained by direct hormone-induced sex reversal in medaka (Oryzias latipes), rainbow trout (Salmo gairdneri), European sea bass (Dicentrarchus labrax L.), black crappie (Pomoxis nigromaculatus), Nile tilapia, etc. (Pandian and Kirankumar, 2003; Budd et al., 2015).
Although direct hormone-induced sex reversal has been successfully applied in numerous aquaculture species, it can have adverse effects on the farmers themselves, on consumers, or on the environment. To better promote monosex culture, genetic manipulation combined with artificial gynogenesis, sex reversal, cross-testing, and interspecific hybridization has been developed. All-male tilapia (Mair et al., 1997) and all-female and all-male populations of rainbow trout (Guiguen et al., 1999) were obtained by combining hormone-induced sex reversal and progeny testing. Kitano et al. (1999) adopted gynogenesis and androgen-induced sex reversal to obtain all-female Japanese flounder (Paralichthys olivaceus). Liu et al. (2007, 2013) utilized estrogen-induced sex reversal and gynogenesis to produce YY super-males and finally obtained all-male yellow catfish by mating YY super-males with normal XX females. Tan et al. (2019) combined screening, artificial gynogenesis, hormonal sex reversal, and test crossing to establish all-male yellow catfish with pure germplasm. Monosex breeding has made important progress in diverse fishes and brought about great economic benefits; it therefore has practical significance for cultivating all-female or all-male populations of other fishes with sexual dimorphism. Because of its tender flesh, high protein content, delicious taste, and few intermuscular spines, snakehead is quite popular with consumers in China (Ou et al., 2017). The main species in China are the northern snakehead (Channa argus, NS), the blotched snakehead (Channa maculata, BS), and their hybrids (BNS (BS ♀ × NS ♂) and NBS (NS ♀ × BS ♂)). The annual production of snakeheads in China has been approximately 500,000 tons since 2012 (China's Ministry of Agriculture, 2013-2020).
However, significant sexual dimorphism in growth rate and body size between males and females has been observed in field surveys and aquaculture practice of snakeheads (Ou et al., 2018). In particular, male hybrid snakeheads are twice as large as females, and large individuals are more favored in snakehead trading. The unit price of fish above 1 kg is approximately 40 % higher than that of fish in the range of 0.5–2 kg, and large, uniform individuals mean high economic benefits (Zhao et al., 2021). In addition, unplanned reproduction occurring in grow-out ponds generates crowding and higher densities than intended and leads to energy waste due to sexual activity at the expense of growth, resulting in a slower growth rate and a higher feed conversion ratio (Budd et al., 2015; Ventura, 2018). Therefore, all-male populations can dramatically increase the quality, output, and economic value of snakeheads through genetic manipulation. For this reason, all-male NBS were produced by combining hormone-induced sex reversal, a sex-specific molecular marker, and hybridization of YY super-male (YY-M) BS with normal XX female (XX-F) NS (Zhao et al., 2021). In brief, XY sex-reversed female (XY-F) BS were produced by the 17 alpha-ethinylestradiol (EE2)-induced sex reversal technique, and XY-F BS were then mated with normal XY male (XY-M) BS to produce YY-M BS. EE2 induction was performed as described by Zhao et al. (2021), with minor modifications. Fry at 5 days post-hatching (dph) were reared in aerated freshwater containing 100 μg/L EE2 for 7 days. The larvae were then fed commercial feed into which EE2 was mixed at a dose of 50 mg/kg for hormonal induction. EE2 induction lasted until 40 dph, after which normal culture was carried out. The sex-specific molecular marker was used to accurately identify XY-F and YY-M individuals during this process.
Finally, all-male NBS were obtained by fertilizing eggs from XX-F NS with sperm from YY-M BS. This study aimed to evaluate the culture performance (in terms of yield, survival rate, growth rate, sex ratio, size uniformity, feed conversion ratio, and low-temperature resistance) of all-male NBS derived from YY-M BS and XX-F NS compared with that of the existing cultured snakehead varieties.

2 Materials and methods

2.1 Broodstock maintenance and artificial reproduction

NS and BS founder stocks were collected from the wild and used to raise purely bred progeny. More specifically, NS were raised to sexual maturity on iced fresh fish in an aquaculture fishery in Hubei Province. The body weight of female stocks was 0.5–1.2 kilograms (kg), and that of male stocks was 0.6–1.5 kg. BS were raised to sexual maturity on commercially available floating snakehead feed (Rongchuan, China) in an aquaculture fishery in Guangdong Province. The body weight of female stocks was 0.4–0.8 kg, and that of male stocks was 0.5–1.0 kg. In the breeding season, all broodstock were transported to Nanhai Bairong Improved Aquatic Seed Co., Ltd. (Foshan City, Guangdong Province, China) and reared for one week in indoor conditions at temperatures from 26 °C to 28 °C. YY-M BS were identified from the offspring of XY-M and XY-F fish by the sex-specific molecular marker (Ou et al., 2017). One hundred mature XX-F NS, 100 mature XY-M NS, 100 mature XY-M BS, and 100 mature YY-M BS were chosen for breeding. Female stocks were selected with well-rounded, soft, and distended abdomens, while male stocks showed elongated urogenital papillae. The breeding methods were those of our previous study (Ou et al., 2018), with some minor modifications. Eggs from each XX-F NS individual were divided into three equal parts and examined under the microscope to ensure that no uncontrolled fertilization had occurred, as indicated by the absence of polar bodies.
Subsequently, the three pooled egg batches from 20 XX-F NS individuals were fertilized with mixed sperm from 20 YY-M BS, 20 XY-M BS, and 20 XY-M NS, respectively. All-male NBS were obtained by mating YY-M BS with XX-F NS, mixed-sex NBS were obtained by mating XY-M BS with XX-F NS, and inbred NS were produced by mating XY-M NS with XX-F NS. There were five parallel groups in artificial reproduction. The fertilized eggs were incubated in hatching barrels (1 m × 1 m × 1.5 m) at a water temperature of 26 ± 1 °C and a water flow of approximately 20 L/min. Approximately 2000 embryos were taken at random from all-male NBS, mixed-sex NBS, and inbred NS for examination of the fertilization rate and hatching rate, respectively. Five replicate analyses were performed per sample. Fertilization rate (%) = number of embryos at the gastrula stage/total number of eggs taken for fertilization × 100 %; hatching rate (%) = number of hatched fry/number of fertilized eggs × 100 %.

2.2 Fry culture and fingerling rearing

Hatching occurred at 48 h post-fertilization. At three days post-hatching (dph), the fry began to swim flat, and the three types of fry, all-male NBS, mixed-sex NBS, and inbred NS, were cultured in three separate cement ponds (4 m × 4 m × 2 m) at a stocking density of 100,000–150,000 fry/667 m². At first, the fry fed on the plankton in the water without supplementary feeding. Three days later, an appropriate amount of soymilk or surimi was sprinkled into the ponds to maintain the abundance of plankton. If the plankton were insufficient, food was supplemented in time, with every 10,000 fry fed 0.5 kg of large zooplankton or earthworms per day. Water exchange was performed every three days at a 20 % exchange rate. When the fingerlings were 3 cm long, the stocking density was adjusted to 40,000–50,000 fingerlings/667 m². At this time, all-male NBS and mixed-sex NBS fingerlings began to be trained to change their feeding habits; the specific rearing process is described below.
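The two rate formulas given above translate directly into code; the function names are ours:

```python
def fertilization_rate(n_gastrula, n_eggs):
    """Fertilization rate (%) = embryos at the gastrula stage /
    total eggs taken for fertilization x 100 %."""
    return n_gastrula / n_eggs * 100

def hatching_rate(n_hatched, n_fertilized):
    """Hatching rate (%) = hatched fry / fertilized eggs x 100 %."""
    return n_hatched / n_fertilized * 100
```

For example, 1800 gastrula-stage embryos out of 2000 sampled eggs gives a fertilization rate of 90 %.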
First, the fingerlings were fed a mixture of zooplankton and surimi; the proportion of zooplankton was reduced and the portion of surimi gradually increased in the diet until the fingerlings were fed entirely on surimi. Once the fingerlings were feeding on surimi, a small amount of artificial compound feed was added to the feed. Later, the proportion of artificial compound feed was gradually increased and the consumption of surimi reduced until artificial compound feed completely replaced surimi. The whole weaning process lasted approximately 8–12 days. The fingerlings were fed 3–5 times daily to satiation with artificial compound feed containing 45 % crude protein, and the daily feeding amount was 8–10 % of the fish weight. Subsequently, all-male NBS and mixed-sex NBS were fed twice daily with artificial compound feed, and the protein content of the feed and the feeding amount were adjusted according to the size of the fish. The feeding rates were adjusted to minimize size differences at the initiation of the growth experiments. During the whole feeding process, inbred NS were fed iced fresh fish.

2.3 Performance evaluation

2.3.1 Culture in the same earthen pond

From July 2019 to August 2020, all-male NBS (n = 1000; weight, 20.2 ± 1.1 g; length, 11.6 ± 0.9 cm) and mixed-sex NBS (n = 1000; weight, 22.4 ± 4.8 g; length, 12.7 ± 1.0 cm) were tagged with radio frequency identification (RFID) tags and reared together in the A302 earthen pond (approximately 667 m²), located at the Danzao base of Nanhai Bairong Improved Aquatic Seed Co., Ltd. Fish were sampled periodically over a 390-day period. The body length (± 0.1 cm) and body weight (± 0.1 g) of 100 randomly selected individuals were measured every month from July to December 2019. From February to August 2020, 100 fish were randomly selected for measurements every two months.
Every sampled fish was scanned to confirm its group according to the tag code, ensuring that at least 30 fish from each group were measured. During the winter (November 2019 through March 2020), fish were fed once daily if the water temperature was warm enough to stimulate feeding activity (above ~15 °C). In August 2020, fish were harvested by seining, all-male NBS and mixed-sex NBS were distinguished according to their RFID tags, and the total yield of each type of fish was recorded. The growth performance experiment in the same earthen pond was repeated from June to December 2020 at the Danzao base. One thousand all-male NBS (weight, 1.9 ± 0.3 g; length, 5.1 ± 0.4 cm) and 1000 mixed-sex NBS (weight, 1.8 ± 0.4 g; length, 5.1 ± 0.4 cm) were reared in separate 4 m × 4 m × 2 m cement tanks. One month later, all-male NBS and mixed-sex NBS were tagged with RFID tags and cultured together in the A303 earthen pond (approximately 667 m 2 ), located at the Danzao base of Nanhai Bairong Improved Aquatic Seed Co., Ltd. All other operations were the same as above. 2.3.2 Culture in different earthen ponds Growth performance experiments in different earthen ponds were conducted in four aquaculture facilities in 2019–2020. Three sites were in Guangdong Province: Zhuhai (ZH), Sanshui (SS) and Zhongshan (ZS), and one site was in Zhejiang Province: Deqing (DQ). 
The stocking densities were as follows: (1) 48,000 all-male NBS (weight, 45.0 ± 3.4 g) and 48,000 mixed-sex NBS (weight, 48.2 ± 7.4 g) were separately reared in 7 × 667 m 2 earthen ponds at the ZH test site, (2) 70,000 all-male NBS (weight, 31.9 ± 2.1 g) and 70,000 mixed-sex NBS (weight, 36.3 ± 4.8 g) were separately reared in 9 × 667 m 2 earthen ponds at the SS test site, (3) 39,000 all-male NBS (weight, 33.1 ± 2.4 g) and 39,000 mixed-sex NBS (weight, 70.0 ± 10.4 g) were separately reared in 7 × 667 m 2 earthen ponds at the ZS test site, (4) 12,000 all-male NBS (weight, 89.3 ± 7.3 g) and 12,000 inbred NS (weight, 95.6 ± 13. g) were separately reared in 4 × 667 m 2 earthen ponds at the DQ test site. All-male NBS and mixed-sex NBS were fed twice daily (at 9:00 and 16:00) with artificial compound feed containing 38–40 % crude protein, and inbred NS were fed iced fresh fish. At each test site, each type of fish was reared under similar environmental conditions (i.e., feeding regimes, dissolved oxygen, ammonia nitrogen, pH value). In 2020, fish were harvested by seining, and the total yields of each pond were counted. The growth performance experiments in different earthen ponds were repeated in July 2020 in the ZH, SS, ZS and DQ fisheries. Body length (± 0.1 cm) and body weight (± 0.1 g) of 50 randomly selected individuals from each group were measured and used as the initial data. All tested fish were fed twice daily with artificial compound feed containing 40 % crude protein during the fingerling period and 38 % crude protein during the adult period. All tested fish were fed under similar conditions as mentioned above. Fifty individuals from each group were randomly selected for body length (± 0.1 cm) and body weight (± 0.1 g) measurements every month from August to November in 2020. 
2.3.3 Statistical analyses All weight and length data were presented as mean ± standard deviation and analyzed with Student's t-test using the SPSS Version 20.0 (SPSS Inc., Chicago, USA) statistical package to determine the difference between the parameters of the selected group and the control group; differences were considered significant at P < 0.05. The weight coefficient of variation (WCV) and length coefficient of variation (LCV) of each type of fish at each sampling point were calculated with the following formulae: WCV (%) = 100 × SD_w / A_w, where SD_w is the standard deviation of body weight and A_w is the average body weight; LCV (%) = 100 × SD_l / A_l, where SD_l is the standard deviation of body length and A_l is the average body length. One hundred fish of each type were randomly selected to calculate the final average size. The survival rate (SR), feed conversion ratio (FCR), mean length gain ratio (LGR), mean weight gain ratio (WGR), mean length-specific growth rate (SGR_L), mean weight-specific growth rate (SGR_W) and average daily gain (ADG) for each type of fish were calculated with the following formulae: SR (%) = 100 × N_h / N_i, where N_h is the harvest survival number and N_i is the initial fish number; FCR = feed intake/weight gain; LGR (%) = 100 × (L_h − L_i)/L_i, where L_h is the average harvest body length and L_i is the average initial body length; WGR (%) = 100 × (W_h − W_i)/W_i, where W_h is the average harvest body weight and W_i is the average initial body weight; SGR_L (%) = [100 × (ln L_h − ln L_i)]/(D_h − D_i); SGR_W (%) = [100 × (ln W_h − ln W_i)]/(D_h − D_i); ADG = (W_h − W_i)/(D_h − D_i), where D_h is the harvest day and D_i is the initial day. 2.4 Low-temperature tolerance test To determine low-temperature tolerance, all-male NBS, mixed-sex NBS and mixed-sex BNS, whose body weight was approximately 50.0 g, were tagged with RFID tags and reared together in a 4 m × 4 m × 2 m cement pond. 
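The growth indexes defined in Section 2.3.3 above can be sketched as small Python helpers. The function names and structure below are ours, not the paper's; each formula simply restates the definitions given in that section:

```python
import math

def cv(values):
    """Coefficient of variation (%) = 100 * SD / mean (used for both WCV and LCV)."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return 100 * sd / mean

def survival_rate(n_harvest, n_initial):
    """SR (%) = 100 * harvest survival number / initial fish number."""
    return 100 * n_harvest / n_initial

def fcr(feed_intake, weight_gain):
    """Feed conversion ratio = feed intake / weight gain."""
    return feed_intake / weight_gain

def gain_ratio(final, initial):
    """LGR or WGR (%), depending on whether lengths or weights are passed."""
    return 100 * (final - initial) / initial

def sgr(final, initial, days):
    """Specific growth rate (%/day) for length (SGR_L) or weight (SGR_W)."""
    return 100 * (math.log(final) - math.log(initial)) / days

def adg(w_final, w_initial, days):
    """Average daily gain (g/day)."""
    return (w_final - w_initial) / days
```

For example, with the all-male NBS figures reported for the A302 pond (initial weight 20.2 g, harvest weight 1491.1 g, 390 days), `adg(1491.1, 20.2, 390)` gives approximately 3.8 g/d, matching the reported value.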
After a period of rearing, approximately 120 individuals were randomly selected from each group for testing, weighed and placed into the same cement pond. To reduce the stress caused by drastic temperature changes, after 7 days of acclimation the water temperature was lowered at a rate of 2 °C/d by an air circulation cooling system until it reached approximately 4 °C. During the test, the fish were checked every 12 h, and dead fish were removed. Once a fish lost balance, turned belly-up and no longer responded to touch, it was taken out immediately and its tag scanned to confirm its group. When mortality ceased and remained stable for 5 days, the water temperature was returned to room temperature at a rate of 2 °C/d, and the low-temperature tolerance test was complete. The fish were not fed throughout the test, and water quality was monitored with a dissolved oxygen meter and a water quality kit twice a day to ensure that the water quality indexes met the requirements for aquaculture water: dissolved oxygen above 5 mg/L, ammonia nitrogen below 0.5 mg/L, nitrite below 0.01 mg/L, and pH 7.0–7.5. 3 Results 3.1 Fertilization rate, hatching rate and fingerling survival rate The fertilization, hatching and fingerling survival rates of all-male NBS, mixed-sex NBS and inbred NS are shown in Table 1 . The fertilization and hatching rates of inbred NS were higher than those of all-male NBS and mixed-sex NBS ( P < 0.05), while its fingerling survival rate was the lowest of the three types of fish, at only 43.5 % ± 7.4 %. There were no significant differences in fertilization and hatching rates between all-male NBS and mixed-sex NBS ( P > 0.05). However, the fingerling survival rate of all-male NBS was the highest of the three types of fish, at 54.8 % ± 4.5 %, much higher than that of mixed-sex NBS ( P < 0.05). 
3.2 Growth performance and yields 3.2.1 Results of growth performance in the same earthen pond After 390 days of feeding, 1778 fish were harvested from the A302 earthen pond at the Danzao base (July 2019 to August 2020). According to the RFID tags, 917 all-male NBS and 861 mixed-sex NBS were identified, so the SRs of all-male NBS and mixed-sex NBS were 91.7 % and 86.1 %, respectively. A total of 884 all-male NBS and 612 mixed-sex NBS weighed above 1 kg; thus, the rates of body weight above 1 kg were 96.4 % for all-male NBS and 71.1 % for mixed-sex NBS. Table 2 shows the growth performance results in the A302 earthen pond. At harvest, the average body length of all-male NBS was 44.7 cm, significantly longer than that of mixed-sex NBS (40.7 cm) by 9.8 % ( P < 0.01) ( Fig. 1 A). The average body weight of all-male NBS was 1491.1 g, significantly heavier than that of mixed-sex NBS (1132.1 g) by 31.7 % ( P < 0.01) ( Fig. 1 B). All-male NBS exhibited a higher growth rate than mixed-sex NBS over the entire period; the ADGs of all-male NBS and mixed-sex NBS were 3.8 g/d and 2.8 g/d, respectively. Table 3 shows the LCVs and WCVs of all-male NBS and mixed-sex NBS in the same earthen pond from July 2019 to August 2020. The LCVs of all-male NBS were smaller than those of mixed-sex NBS. Similarly, the WCVs of all-male NBS were 5.6–16.4 %, much lower than those of mixed-sex NBS (12.9–30.7 %). The sex of the hybrid snakeheads was identified by dissection. Ninety-four of 100 all-male NBS individuals were male and 6 were female or bisexual, while 49 of 100 mixed-sex NBS individuals were male and 51 were female. Both testes and ovaries were present in the gonads of the unanticipated individuals among the all-male NBS (Fig. S1), which were defined as ovotestes. The testes were divided into several segments by the ovaries (Fig. S1A, S1B), or were more or less adjacent to the ovaries (Fig. S1C, S1D). 
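As a quick arithmetic check of the harvest comparisons above (our own illustration, not code from the paper), the reported 9.8 % and 31.7 % differences follow directly from the harvest means:

```python
def pct_diff(a, b):
    """Relative difference of a over b, in percent."""
    return 100 * (a - b) / b

# Harvest means for all-male vs. mixed-sex NBS in the A302 pond
length_gap = pct_diff(44.7, 40.7)       # body length, about 9.8 %
weight_gap = pct_diff(1491.1, 1132.1)   # body weight, about 31.7 %
```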
The paraffin sections of ovotestes showed that various stages of oocytes and spermatocytes typically coexisted in the same microscopic field ( Fig. 2 ), and massive numbers of developing oocytes (oogonia, primary oocytes and growing oocytes) and spermatogenic cells (spermatogonia, primary spermatocytes and secondary spermatocytes) were observed in the ovotestes. In contrast, mature spermatogenic cells (spermatids and spermatozoa) were observed in the gonads of normal male NBS (Fig. S2A), and large numbers of mature oocytes were observed in the gonads of normal female NBS (Fig. S2B). This indicates that gonadal maturation in bisexual individuals occurred later than in normal male and female NBS. The growth performance test in the same earthen pond was repeated in the A303 pond at the Danzao base from June to December 2020, and the results, consistent with the previous findings, are shown in Tables S1, S2 and Fig. S3. The ADG of all-male NBS was 4.5 g/d, 32.4 % faster than that of mixed-sex NBS (3.4 g/d). The average body weight of all-male NBS was 949.3 g after 180 days of feeding, significantly heavier than that of mixed-sex NBS by 33.9 % ( P < 0.01). In addition, the LCVs of all-male NBS were only 5.0–8.4 %, much lower than those of mixed-sex NBS (8.2–13.5 %). Likewise, the WCVs of all-male NBS were much smaller than those of mixed-sex NBS (13.2–19.7 %). Overall, the growth rate of all-male NBS was faster than that of mixed-sex NBS, and the body size of all-male NBS was more uniform in the same earthen pond. 3.2.2 Results of growth performance in different earthen ponds During 2019–2020, growth performance experiments in different earthen ponds were conducted at four sites. Tables 4 and 5 show the information and growth indexes of all-male NBS, mixed-sex NBS and inbred NS in different earthen ponds at the different test sites. 
At the ZH site, the average body weight of commercial all-male NBS was 1260.0 g after 192 days of feeding, while that of commercial mixed-sex NBS was only 952.0 g after 195 days of feeding. The ADG of all-male NBS (6.3 g/d) was 36.5 % higher than that of mixed-sex NBS (4.6 g/d). In addition, the size of all-male NBS was more uniform than that of mixed-sex NBS, and the rate of body weight above 1 kg was 91.3 %, much higher than that of mixed-sex NBS (only 62.0 %). Of 200 commercial all-male NBS, 188 individuals were male, a male ratio of 94.0 %, whereas the male ratio of commercial mixed-sex NBS was only 52.0 %. Similar results were observed at the SS and ZS sites. The ADGs of all-male NBS were higher than those of mixed-sex NBS by 37.7 % at the SS site and 28.4 % at the ZS site. Furthermore, the rates of body weight above 1 kg in commercial all-male NBS at the SS and ZS sites were 96.4 % and 93.1 %, respectively, much higher than those of commercial mixed-sex NBS (only 71.2 % at the SS site and 74.5 % at the ZS site). The male ratios of commercial all-male NBS were above 90.0 %, whereas those of commercial mixed-sex NBS were below 60.0 % at these two test sites. Moreover, the FCRs of all-male NBS were 0.97, 1.19 and 1.16 at the ZH, SS and ZS sites, respectively, lower than those of mixed-sex NBS (1.11 at the ZH site, 1.37 at the SS site, and 1.41 at the ZS site) by 12.6 %, 12.8 % and 17.6 %, respectively. At the DQ site, the average size of commercial all-male NBS fed artificial compound feed was 18.6 % greater than that of inbred NS fed iced fresh fish after 250 days of feeding. The ADG of all-male NBS (5.6 g/d) was 20.0 % faster than that of inbred NS (4.7 g/d). The FCR of all-male NBS was only 1.25, which was 72.3 % lower than that of inbred NS (4.25). The rate of body weight above 1 kg of all-male NBS was 95.1 %, much higher than that of inbred NS (only 75.8 %). 
In addition, the male ratio of commercial all-male NBS was 93.0 %, much higher than the 54.0 % of inbred NS. Therefore, the individuals of all-male NBS were more uniform and larger than those of inbred NS. In July 2020, the growth performance experiments in different earthen ponds were repeated in the ZH, SS, ZS and DQ fisheries. Despite intensive efforts to bring the fish to the same size at the initiation of the tests in June, the initial average body weight of the three types of fish ranged from 5.0–7.4 g at 30 dph and the initial average body length from 7.0–8.4 cm, although the differences were not significant ( P > 0.05). The results were consistent with those of the previous year. After four months of feeding, the average body weight of all-male NBS was heavier than that of mixed-sex NBS or inbred NS by 38.4 % at the ZH site, 35.5 % at the SS site, 38.8 % at the ZS site, and 34.0 % at the DQ site ( Fig. 3 and Table S3). The average body length of all-male NBS was longer than that of mixed-sex NBS or inbred NS by 14.1 %, 13.2 %, 14.3 % and 12.7 % at the ZH, SS, ZS and DQ sites, respectively (Fig. S4 and Table S3). The WCVs of all-male NBS were smaller than those of mixed-sex NBS or inbred NS, at 4.2–16.3 % at the ZH site, 3.5–20.6 % at the SS site, 2.7–18.8 % at the ZS site, and 8.5–13.4 % at the DQ site, whereas the WCVs of mixed-sex NBS or inbred NS were 12.1–25.5 % at the ZH site, 7.5–25.6 % at the SS site, 13.2–24.5 % at the ZS site, and 4.2–26.0 % at the DQ site. Similarly, the LCVs of all-male NBS were much lower than those of mixed-sex NBS or inbred NS (Table S4). 3.3 Low-temperature tolerance of the three fish groups After 7 days of acclimation, the water was cooled at a rate of 2 °C/d. On the 6th day, the lowest water temperature reached approximately 4 °C, which was maintained by the air circulation cooling system. The fish began to die on the 18th day. As the experiment progressed, mortality peaked on the 24th and 32nd days ( Fig. 4 ). 
There was essentially no mortality by the 40th day. After 5 days of stabilization, the water temperature was returned to room temperature, and the low-temperature tolerance test ended. The activity of the hybrid snakeheads decreased markedly at low water temperatures (4–5 °C): the fish mainly gathered at the bottom of the pond, moved slowly, and then died gradually. The main symptoms of the dead fish were ascites, exophthalmos, edema and frostbite. A total of 86 fish died in the test, an overall mortality rate of 23.4 %. Table S5 shows that the body weight of the dead individuals was significantly lower than that of the surviving individuals ( P < 0.01). The mortality rates of mixed-sex BNS, mixed-sex NBS, and all-male NBS were 37.4 %, 28.5 % and 4.1 %, respectively ( Fig. 5 ). A chi-square test showed significant differences in mortality rates among the hybrid snakehead varieties (P < 0.001). The low-temperature resistance of all-male NBS was the strongest, followed by mixed-sex NBS, with mixed-sex BNS the weakest. 4 Discussion Monosex stocks created by sex control offer many advantages, such as the large economic value of sexually dimorphic growth rates, prevention of the energy waste caused by precocity and uncontrolled reproduction, and stable mating systems ( Budd et al., 2015 ; Ventura, 2018 ). Hybridization is a useful tool in aquaculture, and hybrids exhibit heterosis or transgressive inheritance such as fast growth, superior flesh quality, strong disease resistance and environmental tolerance ( Hu et al., 2012 ; Ou et al., 2018 ). All-male NBS, which have remarkable growth characteristics, were produced by combining sex control and a sex-specific molecular marker with hybridization of YY-M BS and XX-F NS ( Zhao et al., 2021 ). 
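The chi-square comparison of mortality rates reported above can be reproduced approximately. The dead/survived counts below are our own reconstruction from the reported rates (37.4 %, 28.5 %, 4.1 %; 86 deaths in total among roughly 120 fish per group); the exact group sizes are not given in the text, so the counts are illustrative only:

```python
def chi2_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# [dead, survived] per group -- illustrative counts only
table = [
    [46, 77],   # mixed-sex BNS
    [35, 88],   # mixed-sex NBS
    [5, 117],   # all-male NBS
]
stat = chi2_statistic(table)
# With dof = (3-1)*(2-1) = 2, the chi-square critical value at
# alpha = 0.001 is 13.82, so a statistic this large is highly significant.
```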
In production practice, the fertilized eggs of all-male NBS and mixed-sex NBS were found to have a relatively high proportion of dead or malformed embryos during embryonic development compared with inbred NS. Some fertilized eggs showed yolk sac autolysis and notochord deformity, and some embryos only partially emerged from the egg membrane. As a result, these abnormal embryos could not survive and hatch. In our results, the fertilization and hatching rates of inbred NS were higher than those of all-male NBS and mixed-sex NBS ( P < 0.05). The dead or malformed hybrid snakehead embryos during hatching may be caused by the incompatibility inherent to this distant cross. The chromosome numbers and karyotypes of the parents differ: the karyotype formula of NS is 2n = 48 = 4sm + 22st + 22t, and that of BS is 2n = 42 = 4m + 2sm + 16st + 20t. As a result, the chromosome number and karyotype of their F1 hybrid (NBS hybrid) is 2n = 45 = 2m + 4sm + 9st + 30t ( Ou et al., 2018 ). There may be compatibility and coordination problems in their nucleocytoplasmic genetic materials, which may lead to gene silencing, delayed gene expression, asymmetric gene expression, etc. ( Whitt et al., 1977 ; Parker et al., 1985 ; Ren et al., 2016 ; Li et al., 2020 ), resulting in a low hatching rate of F1 hybrid embryos. The fingerling survival rate of all-male NBS was much higher than that of mixed-sex NBS and inbred NS ( P < 0.05). All-male NBS are presumed to be more uniform in size, so cannibalism caused by size differences is less severe, and they are easier to wean than mixed-sex NBS and inbred NS; hence the improved fingerling survival rate. Monosex stocks have many merits in aquaculture, such as fast growth rate, low feed conversion ratio and uniform body size, giving them excellent market prospects. Mair et al. 
(1995) reported that genetically male tilapia (GMT) produced from YY males could improve marketable yields compared with sex-reversed male tilapia (SRT) and mixed-sex tilapia (MST). Stejskal et al. (2009) found that all-female Eurasian perch ( Perca fluviatilis L.) produced by mating masculinized females (neomales) with common females gained 20 % more in total body weight than bisexual stock on 126 feeding days. Liu et al. (2007 , 2013) indicated that the average yield of all-male yellow catfish was approximately 35 % higher than that of the normal catfish. Wang et al. (2019) stated that the average daily gain of “Yuemin No. 1” tilapia ( O. niloticus ♀ × O. Spp. YY♂) bred from the original populations of O. niloticus and O. aureus was 28.6 % higher than that of GIFT O. niloticus in the same earthen pond. In our results, multiple growth parameters strongly indicated that all-male NBS exhibited superior productivity traits compared to mixed-sex NBS or inbred NS. In the same earthen pond culture, all-male NBS yielded higher ADGs (3.8 g/d for 390 days feeding from July 2019 to August 2020, 4.5 g/d for 210 days feeding from June to December 2020) than mixed-sex NBS (2.8 g/d for 390 days feeding, 3.4 g/d for 210 days feeding). Similar results were also found in different earthen pond cultures in the four aquaculture fisheries. During 2019–2020, the ADGs of all-male NBS were 6.3 g/d for 192 days feeding at the ZH site, 5.8 g/d for 275 days feeding at the SS site, 5.7 g/d for 225 days feeding at the ZS site, and 5.6 g/d for 255 days feeding at the DQ site. However, the ADGs of mixed-sex NBS or inbred NS in this same period were much lower than those of all-male NBS, which were 4.6 g/d for 195 days feeding at the ZH site, 4.2 g/d for 281 days feeding at the SS site, 4.4 g/d for 225 days feeding at the ZS site, and 4.7 g/d for 258 days feeding at the DQ site. 
From July to November 2020, repeated comparative culture in different ponds also confirmed that all-male NBS grew faster than mixed-sex NBS or inbred NS; the ADGs were 38.4 %, 35.5 %, 38.8 %, and 34.0 % higher than those of mixed-sex NBS or inbred NS at the ZH, SS, ZS, and DQ sites, respectively. The rates of body weight above 1 kg of all-male NBS were 91.3–96.4 %, far higher than those of mixed-sex NBS or inbred NS (only 62.0–75.8 %). In snakehead trading, large individuals are preferred, and the price of fish above 1 kg is approximately 40 % higher than that of fish in the range of 0.5–2 kg ( Zhao et al., 2021 ). Therefore, uniform and large individuals mean more economic benefit for farmers, and all-male NBS, with fast growth and uniform size, will be more popular among them. In terms of feed conversion efficiency, fish can achieve higher growth rates by: (i) decreasing the amount of food consumed in relation to weight gain; (ii) increasing the quantity of feed consumed as a result of increased appetite; or (iii) a combination of effective feed utilization and higher feed consumption ( Gomelsky, 2011 ). In our research, when the feeding days and initial size were equivalent, the feed consumption of all-male NBS was 1.20 times that of mixed-sex NBS at the ZH site and 1.23 times that at the SS site. The average size of all-male NBS was larger than that of mixed-sex NBS by 34.4 % and 34.7 % at the ZH and SS sites, respectively. As a result, the FCRs of all-male NBS decreased by 12.6 % at the ZH site and 12.8 % at the SS site compared to mixed-sex NBS. Consequently, it can be deduced that the fast growth of all-male NBS at the ZH and SS sites was due to superior appetite and feed consumption. Compared with inbred NS, the advantages of all-male NBS are even more obvious. First, all-male NBS can feed on artificial compound feed, which avoids the water pollution and diseases caused by iced fresh fish feeding in NS culture ( Ou et al., 2018 ). 
Second, the FCR of all-male NBS was 72.3 % lower than that of inbred NS at the DQ site, while the total yield of all-male NBS was 40.5 % higher than that of inbred NS, and at the same culture time the average size was larger and the rate of body weight above 1 kg was higher than those of inbred NS. Thus, input costs fall while output profits rise, yielding great economic benefit for farmers. All-male NBS are an excellent substitute for inbred NS in snakehead aquaculture. According to the internal morphological characteristics of the gonads, the physiological sex of all-male NBS was 93.0–95.0 % male, with a low proportion of female or bisexual individuals. This phenomenon also exists in other monosex populations. Mair et al. (1997) reported that genotypic ‘YY’ male Nile tilapia tested in crosses with normal females produced 95.6 % males. Liu et al. (2007) indicated that approximately 10 % of the progeny of YY supermale yellow catfish with females were phenotypic females. Novelo et al. (2020) pointed out that male proportions were 79–100 % in four cross-combinations of different strains of XX female and YY male Nile tilapia, and late maturation was found in the unexpected females. Visible eggs could be seen in the gonads of the unexpected female or bisexual individuals in the all-male NBS group, but their maturity was later than that of normal female NBS individuals (Fig. S2), with ovotestes showing massive numbers of developing oocytes and spermatocytes instead of mature oocytes. The occurrence of these rare female or bisexual individuals may be attributed to paternal and maternal effects and to a series of autosomal sex-modifying genes, as shown in several fish species ( Mair et al., 1997 ; Liu et al., 2007 ; Baroiller and D’Cotta, 2019 ; Xiong et al., 2020 ). 
As lower vertebrates, fish have a physiological sex that is easily affected by exogenous factors during sex differentiation, such as feed, temperature and pH ( Devlin and Nagahama, 2002 ; Goikoetxea et al., 2017 ). These factors point to ways of increasing the proportion of males in the progeny of YY super-males in the future. Environmental temperature is a major factor in fish growth and breeding because body temperature is governed by the ambient water temperature ( Brett, 1979 ; Coggan, 1997 ). Low temperature is a key factor affecting the health and survival of fish in overwintering culture: fish become minimally active and reduce food intake and metabolism at low temperatures ( Hurst, 2007 ; Takegaki and Takeshita, 2020 ). Under the dual pressure of low temperature and starvation, snakeheads may be predisposed to generalized stress, immune suppression and/or opportunistic pathogen infections, resulting in outbreaks of winter diseases such as water mould and epizootic ulcerative syndrome, which lead to mass mortality during the winter season and significant economic losses for snakehead aquaculture production ( Zhu et al., 2011 ; Li et al., 2019 ). Therefore, low-temperature tolerance is an important characteristic for a new snakehead variety. In our study, the low-temperature tolerance test indicated that the size of the dead fish was significantly smaller than that of the surviving fish, consistent with studies in other fish species, such as largemouth bass ( Micropterus salmoides ), sand smelt ( Atherina boyeri ), Atlantic silversides ( Menidia menidia ), and mudskipper fish ( Boleophthalmus pectinirostris ) ( Garvey et al., 1998 ; Hurst, 2007 ; Takegaki and Takeshita, 2020 ). 
Since smaller fish tend to have lower energy reserves than larger fish and use up those reserves more rapidly owing to the allometry of metabolic rate at low temperature, when food is scarce large fish have abundant energy reserves relative to their smaller counterparts, improving their survival ( Soyano and Mushirobira, 2018 ; Takegaki and Takeshita, 2020 ). All-male NBS showed low mortality under low-temperature stress, probably because their average weight is higher than that of mixed-sex NBS and mixed-sex BNS, and the higher average weight stores more energy against low temperature. Of course, more experimental data and aquaculture practice are needed to confirm this point in the future. 5 Conclusions In conclusion, this study evaluated the growth performance of all-male NBS produced by combining sex control, a sex-specific molecular marker and hybridization of YY-M BS and XX-F NS. All-male NBS exhibited superior productivity traits compared to mixed-sex NBS or inbred NS. The average body weight and average daily gain of all-male NBS were much higher than those of mixed-sex NBS or inbred NS. The weight and length coefficients of variation of all-male NBS were smaller, and the rates of body weight above 1 kg were far higher than those of mixed-sex NBS or inbred NS; therefore, the sizes of all-male NBS were more uniform. The feed conversion ratio of all-male NBS was lower than that of mixed-sex NBS or inbred NS. The low-temperature tolerance of all-male NBS was the strongest of the three types of hybrid snakehead, which will reduce losses associated with overwintering culture. In snakehead trading, large individuals are more favored, and the price of snakeheads above 1 kg is approximately 40 % higher than that of individuals in the range of 0.5–2 kg. Therefore, large and uniform all-male NBS can bring tremendous economic benefits, because input costs fall while output profits rise. 
All-male NBS exhibit better production performance and are expected to increase production yield per unit water body as well as economic benefit, and are thus quite promising in snakehead market applications. CRediT authorship contribution statement Mi Ou: Conceptualization, Methodology, Investigation, Formal analysis, Resources, Writing - original draft, Visualization. Kun-Ci Chen: Conceptualization, Methodology, Resources, Validation, Writing - original draft, Writing - review & editing. Qing Luo: Methodology, Investigation, Visualization. Hai-Yang Liu: Methodology, Investigation, Formal analysis. Ya-Ping Wang: Conceptualization, Writing - original draft, Writing - review & editing. Bai-Xiang Chen: Methodology, Resources, Validation. Xin-Qiu Liang: Methodology, Validation. Jian Zhao: Conceptualization, Methodology, Resources, Validation, Formal analysis, Visualization, Writing - original draft, Writing - review & editing. Declaration of Competing Interests The authors declare that they have no competing interests. Acknowledgements This work was supported by the National Key R&D Program of China [ 2018YFD0901201 ], the China Agriculture Research System of MOF and MARA [ CARS-46 ], the National Natural Science Foundation of China [ 31902351 ], the Central Public-interest Scientific Institution Basal Research Fund, CAFS [ 2020TD34 ], the Guangdong Agricultural Research System [ 2020KJ150 ] and the National Freshwater Genetic Resource Center [ NFGR-2020 ]. We would like to thank all colleagues in our lab for sample collection and laboratory technical assistance. Appendix A Supplementary data Supplementary material related to this article can be found, in the online version, at doi: https://doi.org/10.1016/j.aqrep.2021.100768 . | [
"ABOALELA",
"AZUMA",
"BAROILLER",
"BENFEY",
"BRETT",
"BUDD",
"BUNTHAWIN",
"CASALINI",
"CHINASMINISTRYOFAGRICULTURE",
"COGGAN",
"DEVLIN",
"GARVEY",
"GOIKOETXEA",
"GOMELSKY",
"GUIGUEN",
"HU",
"HURST",
"KITANO",
"LI",
"LI",
"LIANG",
"LIU",
"LIU",
"LUO",
"MAIR",
"MAIR",... |
ef67c20dd63546a8a80c914ff7752873_IMPACTO NA FARMACOECONOMIA DE UM HOSPITAL PÚBLICO ONCOLÓGICO COM A IMPLANTAÇÃO DE UM PROGRAMA DE GER_10.1016_j.bjid.2023.102850.xml | Impact on the pharmacoeconomics of a public oncology hospital from the implementation of an antimicrobial stewardship program | [
"Sejas, Odeli Nicole Encinas",
"Katayose, Jéssica Toshie",
"Pontes, Patrícia Rodrigues Bonazzi",
"Ibrahim, Karim Yaqub",
"Magri, Adriana Satie Goncalves Kono",
"Neves, Tamara Regina Vitale Ferretti",
"de Siqueira, Rejane Sousa",
"Sabanai, Alberto Hideyoshi",
"Abdala, Edson"
] | Introdução/Objetivo
Os antimicrobianos amplamente utilizados no âmbito hospitalar representam grande impacto financeiro à instituição, e seu uso inadequado pode propiciar o desenvolvimento de bactérias multirresistentes. A partir disso, foi implantado o Programa de Gerenciamento do Uso de Antimicrobianos (ASP) que envolve conjunto de ações destinadas ao controle e uso racional dos antimicrobianos nos serviços de saúde, sendo um dos objetivos secundários a redução de custos financeiros com medicamentos (farmacoeconomia), contribuindo para a otimização de cuidados, tomada de decisão e melhor uso dos recursos financeiros. O objetivo é avaliar o impacto da implantação de um Programa de Gerenciamento do Uso de Antimicrobianos na farmacoeconomia hospitalar.
Métodos
Estudo retrospectivo quase-experimental, com intervenção, realizado em um hospital público oncológico, universitário, quaternário. Os períodos do estudo foram: pré-implantação–2018, pós-implantação–2022. A implantação do ASP ocorreu em 2019, com o início do gerenciamento do consumo de 18 antimicrobianos, sendo ampliado o escopo para 35 antimicrobianos em 2020. Esta seleção foi baseada em maior valor financeiro, medicamentos de amplo espectro e/ou uso restrito, e aqueles gerenciados pela COVISA. O ASP implantado avalia o consumo de antimicrobianos a partir de 5 indicadores: 1. Densidade de Prescrição (DP); 2. Dose Diária Definida (DDD); 3. Dias de Terapia (DOT); 4. Duração da Terapia (LOT); 5. Razão DOT/LOT. Além de realizar auditorias prospectivas beira-leito, avaliação de prescrição, e reunião mensal entre a equipe do Serviço de Controle de Infecção Hospitalar, membros do ASP e equipe multidisciplinar das Unidades de Terapia Intensiva (UTI), setor de maior consumo de antimicrobianos, a fim de fornecer devolutivas. Compararam-se os custos financeiros com os 35 antimicrobianos monitorados entre os dois períodos.
Results
The following sectors were analyzed: Inpatient Units, Hematology, and ICU. In 2022 (post-implementation), the Inpatient Unit showed a 52.54% reduction in cost, Onco-hematology 25.19%, and the ICU 59.87%, compared with 2018 (pre-implementation).
Conclusion
The study showed a reduction in antimicrobial costs in all sectors over the 4 years since ASP implementation, reinforcing its importance, especially in a public healthcare facility where financial resources are limited. | null | []
eea43e88a0cc4dea979e7b23a74d3519_Neutrophil-to-lymphocyte ratio trend at admission predicts adverse outcome in hospitalized respirato_10.1016_j.heliyon.2023.e16482.xml | Neutrophil-to-lymphocyte ratio trend at admission predicts adverse outcome in hospitalized respiratory syncytial virus patients | [
"Shusterman, Eden",
"Prozan, Lior",
"Ablin, Jacob Nadav",
"Weiss-Meilik, Ahuva",
"Adler, Amos",
"Choshen, Guy",
"Kehat, Orli"
] | Background and aims
Severe cases of respiratory syncytial virus (RSV) infection are relatively rare but may lead to serious clinical outcomes, including respiratory failure and death. These infections were shown to be accompanied by immune dysregulation. We aimed to test whether the admission neutrophil-to-lymphocyte ratio, a marker of an aberrant immune response, can predict adverse outcome.
Methods
We retrospectively analyzed a cohort of RSV patients admitted to the Tel Aviv Medical Center from January 2010 to October 2021. Laboratory, demographic and clinical parameters were collected. Two-way analysis of variance was used to test the association between neutrophil-lymphocyte ratio (NLR) values and poor outcomes. Receiver operating characteristic (ROC) curve analysis was applied to test the discrimination ability of NLR.
Results
In total, 482 RSV patients (median age 79 years, 248 [51%] females) were enrolled. There was a significant interaction between a poor clinical outcome and a sequential rise in NLR levels (positive delta NLR). The ROC curve analysis revealed an area under the curve (AUC) of 0.58 for delta NLR in predicting poor outcomes. Using a cut-off of delta = 0 (the second NLR value is equal to the first), multivariate logistic regression identified a rise in NLR (delta NLR > 0) as a prognostic factor for poor clinical outcome, after adjusting for age, sex and Charlson comorbidity score, with an odds ratio of 1.914 (P = 0.014) and a total AUC of 0.63.
Conclusions
A rise in NLR levels within the first 48 h of hospital admission can serve as a prognostic marker for adverse outcome. | 1 Introduction Respiratory syncytial virus (RSV), a single-stranded RNA virus of the Paramyxoviridae family, is recognized as being an important cause of acute respiratory infection among adults [ 1–3 ]. Although most cases of RSV infection follow a mild clinical course, certain patient populations are prone to suffer a severe, often lethal, disease course when it presents in the form of an upper respiratory tract infection. Adults with comorbidities, long-term care facility residents and immunocompromised hosts appear to be at increased risk for severe disease, complications and mortality [ 4–6 ]. The adverse outcome of RSV infection is partly due to an aberrant immune response. An impaired immune activation may facilitate extensive viral replication and invasion. On the other hand, a dysregulated, over-activated immune response may lead to local and systemic injury [ 7 , 8 ]. Early viral invasion is characterised by interleukin-8-mediated neutrophilic activation and suppression of lymphocytes. The inability to mount an adaptive lymphocytic response has been shown to underlie a more severe disease [ 9 ]. The neutrophil-to-lymphocyte ratio (NLR) is a ratio calculated from a complete blood count, used to evaluate the inflammatory status of a patient [ 10 ]. NLR levels can serve as a prognostic factor in rheumatic and cardiovascular diseases, as well as in different types of malignancies, such as lung and breast cancer [ 11–16 ]. Moreover, recent studies have shown that NLR can serve as a useful marker in predicting outcomes of common infectious diseases [ 17–19 ]. NLR has been studied in the context of viral infections. Zhang et al. demonstrated that NLR can be a predictive prognostic marker in patients infected with avian influenza [ 20 ].
Several very small-scale studies have suggested NLR's application as a diagnostic tool for identifying influenza virus-infected patients (serotypes A and B) among elderly and pediatric populations [ 21–23 ]. In contrast, numerous reports have investigated the application of NLR in Coronavirus Disease 2019 (COVID-19), including four large-scale meta-analyses. An increased NLR was found to be an early marker for severe COVID-19 and a prognostic factor for endotracheal intubation and mortality during hospitalization [ 22 , 24–29 ]. RSV is one of the two major viral pathogens associated with acute lower respiratory infection, and it represents an epidemiologic concern, especially during the winter season [ 30 ]. To the best of our knowledge, a possible relation between RSV infection and the NLR parameter has not been previously reported. In the current study, we aim to assess the levels and prognostic value of NLR in adult patients infected with RSV. 2 Methods 2.1 Study design This retrospective observational study was conducted at the Tel Aviv Sourasky Medical Center, a tertiary academic hospital. We searched the electronic medical records in the microbiology laboratory database for RSV-positive reverse transcription polymerase chain reaction (RT-PCR) nasal swabs obtained from 2010 through 2021. The STROBE checklist was followed for all study procedures [ 31 ]. The study was reviewed and approved by the Tel Aviv Sourasky Medical Center institutional review board (ethics approval number 056-20-TLV). The requirement for informed consent was waived for this retrospective anonymised study. 2.2 Patients The study participants were selected according to the following criteria: 1. Positive viral PCR for RSV; 2. Hospital admission from January 1, 2010 to October 1, 2021; 3. Documented complete blood count (CBC) taken at admission; 4. Hospital admission duration over 24 h; 5. At least two consecutive CBCs available within 48 h from admission.
Only patients over 18 years of age were included. 2.3 Data collection Data were extracted from the patients' electronic medical records. Background patient information included sex, age, and comorbidities, ranked by the Charlson comorbidity index [ 32 ]. Laboratory findings on white blood cell and differential blood counts, estimated glomerular filtration rate, liver function tests, bilirubin, international normalized ratio and C-reactive protein were retrieved for each patient. Blood counts were performed automatically with a Beckmann Coulter LH750 or Beckmann Coulter DxH800. NLR was calculated as follows: the neutrophil count at admission divided by the lymphocyte count at admission. Delta NLR values were then calculated by subtracting the NLR value of the first test from that of the second test. The primary outcome measure was defined as a composite of mortality within 30 days of admission and mechanical ventilation. 2.4 Microbiology Viral respiratory infection was diagnosed by PCR from combined pharyngeal and nasopharyngeal swabs introduced into UTM tubes and transported to the laboratory. RNA extraction was performed with the easyMAG® system (BioMérieux, Marcy-l'Étoile, France). RSV was diagnosed by the Simplexa™ Flu A/B & RSV kit (DiaSorin) or the Seeplex® RV7 kit (Seegene). All positive cultures (blood, sputum, urine and others) from the first seven days of hospitalization were extracted for identification of bacterial or fungal super-infections. The list was reviewed independently by two internal medicine physicians to exclude positive cultures that were considered to be contaminants. 2.5 Statistical analyses The characteristics of patients with RSV are presented as counts and percentages for categorical variables and as medians and interquartile ranges (IQR) for continuous variables. A repeated measures analysis of variance (ANOVA) was used to compare the initial trend of NLR values (negative or positive delta NLR) with adverse outcomes.
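The NLR and delta-NLR definitions above amount to a two-line calculation. The sketch below uses hypothetical CBC values (10^3 cells/µL), not study data, and follows the convention that a rising NLR yields a positive delta.

```python
# NLR and delta NLR from two CBCs; counts here are hypothetical.
def nlr(neutrophils, lymphocytes):
    """Neutrophil-to-lymphocyte ratio from one CBC."""
    return neutrophils / lymphocytes

def delta_nlr(first_cbc, second_cbc):
    """Second NLR minus first NLR; a rising trend gives a positive delta."""
    return nlr(*second_cbc) - nlr(*first_cbc)

# Admission CBC: 8.4 / 1.2 -> NLR 7.0; second CBC within 48 h: 9.0 / 1.0 -> NLR 9.0
d = delta_nlr((8.4, 1.2), (9.0, 1.0))
print(round(d, 2))  # → 2.0 (positive trend, the adverse-outcome marker)
```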
The NLR and delta NLR predictive values for poor clinical outcome were assessed by means of receiver operating characteristic (ROC) curve analysis. The optimal cutoff value of delta NLR was determined by Youden's index. In order to obtain the odds ratio (OR) of delta NLR and additional factors, multivariate logistic regression was performed. The Chi-square test was applied to compare NLR levels of patients with a superinfection to those without. Statistical calculations were performed using the SPSS 25.0 software (SPSS Inc, Chicago, USA). 3 Results In total, 469 confirmed cases of RSV infection were registered in the database during the study period. They had all been identified by nasal swab RT-PCR findings. Their baseline characteristics, including age, sex and Charlson Comorbidity Index (CCI), are presented in Table 1 . The median age of the cohort was 79 years, and 248 of the patients were female (51%). Comorbidities were represented by the Charlson score, with a median score of 5. The mortality rate was 13% (61 cases) among all of the registered RSV patients, and 8% (38 cases) of the patients required mechanical ventilation. For the purposes of analysis, a poor outcome was defined as the presence of either of the two above-mentioned endpoints. As expected, patients with an adverse outcome were significantly older (median 81.5 [range 71–88] vs. 78 [range 67–85], p < 0.05) and had more comorbidities (a median CCI of 6 [range 4–8] vs. 5 [range 4–7], p < 0.05). Patients in the adverse outcome group had significantly higher rates of readmission within 7 days of discharge from the index hospitalization (9.3% vs. 6.3%, p < 0.01). Table 2 details the patients' laboratory data at presentation, including white blood cell and differential blood counts, liver function tests, bilirubin, international normalized ratio and C-reactive protein.
The median NLR at presentation was 6.82 (range 3.9–11.8), and there was no significant difference in NLR levels at admission between patients with and without an adverse outcome (median 6.6 [range 3.8–11.3] vs. 7.2 [range 4.4–16.6], p = 0.77). All RSV patients in our study had more than two CBCs taken during their hospital stay, yielding more than two NLR values. We calculated the delta NLR by subtracting the NLR value of the first test from that of the second. The median time between the first and second lab tests for NLR was 19 h (IQR 12–28 h). We examined the association between the initial trend of NLR values (delta NLR) and adverse outcomes with a two-way ANOVA. A poor clinical outcome was associated with a sequential rise in NLR levels (positive delta NLR). A repeated measures ANOVA showed a significant test-outcome interaction ( p = 0.008): patients with adverse clinical outcomes had higher NLR levels in their second test compared to patients with non-adverse clinical outcomes ( Fig. 1 ). Having found the delta NLR to be associated with poor clinical outcome, we further tested the discrimination ability of delta NLR by means of ROC curve analysis ( Fig. 2 ). The area under the curve (AUC) of poor outcomes was 0.56 ( p = 0.08, 95% CI [0.49–0.63]) for NLR and 0.629 ( p < 0.001, 95% CI [0.55–0.69]) for delta NLR. The optimal cutoff of delta NLR obtained from Youden's index was a delta of 0.18. Since this cutoff is not a practical number for clinicians, we decided to further analyze our data by using a delta = 0 cutoff (meaning the second NLR value is equal to the first NLR value). We applied multivariate logistic regression in order to test the discrimination ability of this cutoff as a prognostic factor of poor clinical outcome adjusted for age, sex and CCI score. The results showed an OR of 2.031 for a delta NLR > 0 ( p = 0.004, 95% CI [1.248–3.304]) and a total AUC of 0.64 (95% CI [0.57–0.72]).
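The ROC/Youden procedure used above can be sketched in pure Python (the study itself used SPSS); the scores and labels below are hypothetical delta-NLR values and outcomes, not study data.

```python
# Sketch of ROC points and Youden's optimal cutoff on toy data.
def roc_points(scores, labels):
    """(FPR, TPR, threshold) at every observed threshold; labels 1 = poor outcome."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for thr in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        pts.append((fp / neg, tp / pos, thr))
    return pts

def youden_cutoff(scores, labels):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1 = TPR - FPR."""
    return max(roc_points(scores, labels), key=lambda p: p[1] - p[0])[2]

scores = [-1.0, -0.5, 0.2, 0.4, 1.3, 2.1]  # hypothetical delta-NLR values
labels = [0, 0, 0, 1, 1, 1]                # 1 = death or mechanical ventilation
print(youden_cutoff(scores, labels))       # → 0.4
```

In practice a statistically optimal cutoff (0.18 here) may be rounded to a clinically convenient one (0), at some cost in discrimination.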
In this model, age, sex and the CCI score emerged as being non-significant in predicting a poor outcome among the RSV patients ( p = 0.091 [95% CI 0.99–1.03], p = 0.817 [95% CI 0.58–1.53], p = 0.483 [95% CI 0.28–1.17], respectively). Fig. 3 demonstrates similar results when using delta NLR = 0.18 as a cutoff, with an OR of 2.209 for a delta NLR > 0.18 ( p = 0.001, 95% CI [1.359–3.592]) and an AUC of 0.645 (95% CI [0.58–0.7]). In order to exclude the possibility that the NLR trend was due to a concurrent bacterial or fungal superinfection, we compared the rates of superinfections among patients who had a positive trend with those who had a negative trend and observed that the presence of a superinfection was not associated with a higher NLR trend at admission (χ 2 (1, N = 469) = 0.45, p = 0.6) ( Fig. 4 ). Table 3 lists all superinfections among the study participants within the first 7 days from admission. Gram-negative infections (n = 67, 14%), and specifically gram-negative urinary infections (n = 34, 7%), were the most common. E. coli was the most frequently isolated pathogen (n = 29, 6%). 4 Discussion Severe cases of RSV, although relatively rare, can have severe outcomes, such as respiratory failure and death. These cases have been shown to be accompanied by immune dysregulation. The current study findings demonstrated that a rise in the neutrophil-to-lymphocyte ratio, a well-known marker of an aberrant immune response, during the first 48 h of hospitalization is associated with an adverse outcome. Hospitalizations of patients aged 5–50 years due to RSV are rare and generally involve patients who have underlying comorbidities. However, RSV is an important cause of mortality and carries the same hospitalization burden as Influenza A in a vaccinated population of older adults [ 33–35 ]. We retrospectively retrieved a cohort of nearly 500 RSV cases admitted to our institute over a decade, between 2010 and 2021.
The mean age of our patients was 79 years, which is higher than in other studies, in which an age range of 65–76 years was reported [ 2–4 ]. Interestingly, in our study, age was not an independent risk factor when adjusted for comorbidities, sex and NLR trend. This finding may be explained by the initially advanced age of the admitted patients in our study. The mortality rate of our study patients was 13% and the mechanical ventilation rate was 8%, both of which are relatively high but in accordance with the reports of others, in which mortality and mechanical ventilation rates ranged between 8–19% and 3–17%, respectively [ 2–4 , 33 , 34 ]. Early identification of high-risk patients emerges as being crucial given such substantial morbidity. A recent study by our group comparing the prognostic value of NLR in COVID-19, influenza and RSV patients showed that NLR at admission was not a useful prognostic marker in RSV patients, despite pathophysiological reasoning [ 36 ]. In our current work, we tested the independent prognostic value of the NLR trend within the first 48 h, which was defined as a delta NLR (second NLR minus first NLR) larger than zero. A positive NLR trend (delta NLR > 0) emerged as an independent risk factor for death or need for mechanical ventilation, adjusted for sex, age and comorbidities. Indeed, after using delta NLR and including sex, age and comorbidities, the AUC of this model was only 0.63. This low (albeit significant) AUC is not surprising, considering the diverse backgrounds and presentations of these patients. Compiling a model with a higher discriminative ability is possible, but it would include many more variables and be cumbersome to use clinically. An NLR trend has been shown to be associated with adverse outcome in various clinical scenarios [ 37 , 38 ]. A temporal rise of the NLR during the first five days was associated with death rather than survival in abdominal septic shock [ 39 ].
Interestingly, those authors observed that a higher admission NLR was associated with a better prognosis in the first five days, suggesting that the NLR at admission and the NLR trend do not always bear the same prognostic value. In our study, a positive NLR trend within the first 48 h, but not the NLR at admission, was associated with a poor prognosis. This finding may derive in part from the temporal pattern of neutrophil-based inflammation. Neutrophil recruitment in inflammation in general, and in lung injury specifically, consists of two phases. The initial stage is the influx of circulating neutrophils to the lung tissue (rapid phase), and it is followed by a late persistent phase, which includes recruitment of the large mass of neutrophils produced in the bone marrow. This process may be driven by SDF-1 signalling [ 40 ]. The flip side of neutrophilia with a high NLR is lymphopenia, which also follows a temporal pattern. In influenza pneumonia, for example, the CD-8 lymphocyte response is biphasic, with rapid expansion of the lymphocyte population only after 6 days from the start of infection. This late phase correlates with viral clearing from the lungs [ 41 ]. In RSV, different cytokine patterns result in different inflammatory responses. A Th-1 response characterized by IFN-γ and IL-2 production can lead to viral clearance, while proinflammatory cytokines, such as interleukin (IL)-4, IL-6, IL-10 and IL-13, lead to an inflammatory response associated with lung damage [ 42 ]. Moreover, acute RSV infection in infants is associated with lymphopenia, possibly induced by lymphocyte apoptosis and a loss of CD-8 T cells [ 43 ]. Thus, the NLR trend may represent the temporal evolvement of an aberrant response to the pathogen and persistent inflammation [ 44 ], with a time-dependent elevation in neutrophils and a time-dependent depletion of lymphocytes.
The use of longitudinal blood counts for prognostic input has been reported for other respiratory viruses and bacterial infections, and has been proposed as a tool to guide therapy [ 45 ]. In our model, a positive NLR trend was associated with a twofold risk for death and the need of mechanical ventilation. There are several inherent limitations to our study. First, its observational retrospective design limited control for unmeasured confounders. Second, our study was carried out in a single medical center. Third, our study population was limited to patients with severe respiratory illness which required hospitalization, thereby excluding RSV patients with mild illness and warranting further study in order to determine the implication of NLR in that group as well. Finally, due to the COVID-19 pandemic, new infection protective measures had been employed and there was heightened awareness of respiratory symptoms and infections in the public. One can postulate that the epidemiology of patients with respiratory symptoms seeking medical attention may differ from the one represented in our cohort. To conclude, based upon a large cohort of RSV patients, we developed a practical prognostic tool for clinicians in the management of adults with RSV infections. To the best of our knowledge, this is the first large-scale report of a prognostic model based upon NLR levels for RSV. A patient whose NLR levels within the first 48 h of admission do not decrease compared to those measured at admission has a twofold risk for an adverse outcome. In an era of respiratory pandemics, such a prognostic tool can help guide the therapeutic management and placement of RSV patients. Ethical approval The study was reviewed and approved by the Tel Aviv Sourasky Medical Center's institutional review board. Author contribution statement Eden Shusterman: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper.
Lior Prozan: Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Guy Choshen: Analyzed and interpreted the data. Ahuva Weiss-Meilik and Amos Adler: Contributed reagents, materials, analysis tools or data. Jacob Nadav Ablin: Analyzed and interpreted the data; Wrote the paper. Orly Kehat: Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data. Data availability statement The authors do not have permission to share data. Additional information No additional information is available for this paper. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | [
"FALSEY",
"FALSEY",
"DUNCAN",
"CHUAYCHOO",
"SHAH",
"FALSEY",
"NURIEV",
"HABIBI",
"RUSSELL",
"FORGET",
"TOMITA",
"KIM",
"AZAB",
"PROCTOR",
"KOZA",
"HAO",
"DEJAGER",
"TERRADAS",
"BOZBAY",
"ZHANG",
"INDAVARAPU",
"LIU",
"XIAOHONG",
"CHAN",
"LIU",
"FENG",
"TATUM",
"L... |
25caccb7afc24140821a334e2c46d17a_Comprehensive analysis of Eimeria necatrix infection From intestinal lesions to gut microbiota and m_10.1016_j.psj.2025.105412.xml | Comprehensive analysis of Eimeria necatrix infection: From intestinal lesions to gut microbiota and metabolic disturbances | [
"Chen, Ya-Mei",
"Wei, Peng",
"Liao, Hsing-Yu",
"Tsai, Yu-wei",
"Cheng, Ming-Chu",
"Lien, Yi-Yang"
] | The coccidian Eimeria necatrix infects the mid-intestine of chickens, causing hemorrhage and resulting in significant economic losses. However, there is a lack of a clear method for evaluating the tissue lesions caused by E. necatrix infection. Moreover, the impact of E. necatrix infection on gut microbiota and metabolites remains to be explored. Therefore, this study was conducted to investigate the effects of E. necatrix infection on the intestinal tissues of chickens and establish a novel histopathological scoring system for evaluating lesion severity. In addition, changes in gut microbiota and metabolites after E. necatrix infection were evaluated. Chickens aged 3 weeks were divided into 5 groups (4 experimental groups and a control group), with 6 chickens in each group. The experimental groups were orally inoculated with different concentrations of E. necatrix oocysts. Intestinal and fecal samples were collected on 7 days post-infection (DPI) and analyzed. Chickens infected with a high dose of E. necatrix exhibited diarrhea, bloody stools, and partial mortality within 6 DPI. Pathological analysis revealed a remarkable reduction in villous height, along with severe hemorrhage, necrosis, and inflammation. The histopathological scoring system revealed a strong correlation with other disease-related indicators, such as weight loss and oocyst shedding, demonstrating its stability and accuracy. Furthermore, the severity of villous lesions was closely associated with alterations in gut microbiota composition. Microbiota analysis showed a considerable reduction in the abundance of Lactobacillus in the high-dose group, whereas the abundance of potential pathogenic bacteria, including Shigella and Escherichia coli, increased, causing gut dysbiosis. Finally, metabolomic analysis indicated that E. necatrix infection disrupted energy and amino acid metabolism, particularly affecting glycolysis, the tricarboxylic acid cycle, and pyruvate metabolism. 
Overall, this study establishes a reliable histopathological scoring method and confirms that E. necatrix infection causes gut dysbiosis and metabolic abnormalities through tissue damage. These data provide novel insights into the diagnosis and treatment of coccidiosis in chickens. | Introduction Seven recognized species of Eimeria affect chickens, viz., E. acervulina, E. brunetti, E. maxima, E. mitis, E. necatrix, E. praecox , and E. tenella ( Chapman, 2014 ), among which E. necatrix is one of the most pathogenic, causing chronic disease and significant mortality ( Tyzzer et al., 1932 ). E. necatrix infects the midgut, resulting in hemorrhage and dilatation of the intestine ( Johnson, 1930 ). This species is of particular concern for laying hens, because infections tend to peak around the onset of egg production ( Shirley et al., 2005 ). The asexual stages of E. necatrix are present in the lamina propria of the villi, and infection may cause severe mucosal hemorrhage. According to Johnson and Reid (1970) , the lesion caused by E. necatrix infection is scored on a scale of 0–4. Briefly, score 1 indicates small petechiae and white spots, score 2 represents numerous petechiae on the serosal surface, score 3 indicates extensive hemorrhage into the lumen with thickening of the serosal surface, and score 4 characterizes extensive hemorrhage and ballooning, which may extend through the intestine. Dead birds are assigned a score of 4. Nonetheless, the scoring system is subjective and highly dependent on examiner’s experience. Moreover, the relationship between lesion scoring and histological changes in E. necatrix infection has not been investigated in detail. Accordingly, one of the objectives of the present study was to compare the scoring system with other parameters, including body weight, oocyst count, and histological changes. 
Accumulating evidence indicates that the commensal microbiota in the digestive tract plays a vital role in maintaining animal health ( Salzman et al., 2007 ). The gut microbiota regulates pathogens and pathobionts by strengthening the epithelial barrier, competing for receptors and nutrients, stimulating the intestinal innate immune response, and producing antimicrobial compounds ( Kamada et al., 2013 ; Garcia-Gutierrez et al., 2019 ). A dynamic response of the intestinal microbiome has been detected during E. maxima infection ( Liu et al., 2024 ). Therefore, the present study aims to use a novel histopathological scoring system for evaluating the intestinal histological lesions induced by E. necatrix infection and investigating the correlation between lesion severity and other disease-related factors. Moreover, the composition and diversity of intestinal microbiota, as well as the changes in intestinal metabolites, were analyzed, with the goal of comprehensively elucidating the impact of E. necatrix on intestinal function. Materials and methods Animal experiment The animal experiment was approved by the Institutional Animal Care and Use Committee (IACUC) of the National Pingtung University of Science and Technology (#NPUST-113-020). An a priori power analysis was performed with G*Power v3.1 (Heinrich-Heine-Universität Düsseldorf, Germany). Based on lesion-score and villus-height-to-crypt-depth (VH:CD) differences reported in previous Eimeria challenge studies ( Johnson and Reid, 1970 ; Teng et al., 2020 ), an effect size of Cohen’s d = 0.9 was selected. With α = 0.05 and statistical power = 0.80 in two-tailed t -test, the required sample size was n = 6 chickens per group. Hendrix male chicks aged 1 day were obtained from a commercial hatchery and reared in coccidian-free conditions with feeding on a standard ration ad libitum . 
The chickens were randomly divided into 5 groups, viz., the control group and Eimeria -challenged groups 1–4, with 6 chickens in each group. On Day 0 (the day of inoculation), the chickens were aged 3 weeks. The control group received 1 mL of PBS orally, whereas the Eimeria -challenged groups 1–4 were orally inoculated with 2,000, 5,000, 10,000, and 30,000 oocysts, respectively. All chickens were euthanized by cervical dislocation on 7 days postinfection ( DPI ), followed by dissection and evaluation of weight gain, oocyst output, and intestinal gross lesions. The number of oocysts per gram ( OPG ) of feces was measured to quantify oocyst output. The rate of body weight gain was calculated by expressing the weight gain of challenged birds as a percentage of that of uninfected controls. The survival proportions of each Eimeria -challenged group were expressed as the percentage of surviving chickens at the end of the experiment compared with the initial number of chickens. Gross lesion scoring was performed based on the criteria described by Johnson and Reid (1970) . The segment of the jejunum and ileum was collected for histopathology. Fecal samples from the jejunum were collected, stored at −80°C, and transferred to the laboratory of Biotools Co., Ltd. (Taipei, Taiwan) for further analysis. Histopathology Fixed tissues were routinely processed, embedded in paraffin, and cut into 4-μm-thick sections, followed by hematoxylin and eosin ( HE ) staining. For each chicken, the heights of 10 villi and their corresponding crypt depths were measured separately in the jejunum and ileum to calculate the villous height to crypt depth ( VH:CD ) ratio. Furthermore, this study designed a histological scoring system with two categories. The evaluation criteria included the following 9 subcategories: hyperemia, hemorrhage, submucosal necrosis, loss of mucosa, inflammation, edema, eosinophil count, schizont count, and gamete count ( Table 1 ). 
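Two of the endpoints described above, the VH:CD ratio (averaged over the measured villus/crypt pairs) and the rate of body weight gain, reduce to simple arithmetic. The measurements in this sketch are hypothetical, not values from the experiment.

```python
# VH:CD ratio and relative weight gain; measurements are hypothetical.
def vh_cd_ratio(villus_heights_um, crypt_depths_um):
    """Mean villus-height-to-crypt-depth ratio over paired measurements (µm)."""
    ratios = [v / c for v, c in zip(villus_heights_um, crypt_depths_um)]
    return sum(ratios) / len(ratios)

def relative_weight_gain(challenged_gain_g, control_gain_g):
    """Weight gain of challenged birds as a percentage of uninfected controls."""
    return challenged_gain_g / control_gain_g * 100

print(vh_cd_ratio([900, 950], [100, 95]))  # → 9.5
print(relative_weight_gain(180, 240))      # → 75.0
```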
Processing and analysis of fecal microbiota and metabolites Microbial genomic DNA was extracted using the QIAamp PowerFecal DNA Kit (Qiagen, Austin, TX, USA), and its concentration was measured using a Qubit 4.0 fluorometer (Thermo Scientific, Wilmington, DE, USA). Full-length 16S rRNA genes, covering V1–V9 regions, were amplified for library preparation using the SMRTbell Express Template Prep Kit (Pacific Biosciences, Menlo Park, CA, USA). Taxonomic classification of representative sequences was conducted using the feature-classifier and classify-consensus-search algorithms in QIIME2 (v2022.11) ( Bolyen et al., 2019 ). Alpha diversity was evaluated using the Shannon index and phylogenetic diversity index, with group comparisons performed using the Kruskal–Wallis test. Constrained principal coordinate analysis ( cPCoA ) based on Bray–Curtis dissimilarity was applied to examine microbial community differences, and statistical significance was evaluated using permutational multivariate analysis of variance (PERMANOVA). To identify key taxa differentiating groups, linear discriminant analysis ( LDA ) effect size ( LEfSe ) was performed with an α threshold of 0.05 (Kruskal–Wallis rank-sum test) and an LDA score threshold of 4. Functional metagenomic predictions were conducted using phylogenetic investigation of communities by reconstruction of unobserved states (PICRUSt, v2.5.1) and functional annotation of prokaryotic taxa (FAPROTAX, v1.2.6) ( Segata et al., 2011 ; Noval Rivas et al., 2013 ). Kyoto Encyclopedia of Genes and Genomes ( KEGG ) ortholog abundances were used to map metabolic pathways through the KEGG database ( http://www.genome.jp/kegg/ ). To gain deeper insights into metabolic alterations, untargeted metabolomics profiling was performed using ultrahigh-performance liquid chromatography–tandem mass spectrometry (UHPLC–MS/MS) ( Cai et al., 2015 ; Wang et al., 2016 ). Statistical analysis Each chicken was considered an independent experimental unit. 
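The Shannon index used above for alpha diversity can be computed directly from taxon read counts. The two toy communities below (an even one versus one dominated by a single taxon, mimicking dysbiosis) are illustrative assumptions, not study data.

```python
# Shannon diversity index from raw read counts; toy communities only.
import math

def shannon_index(counts):
    """H = -sum(p * ln p) over taxa with nonzero counts."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

even = shannon_index([25, 25, 25, 25])  # maximal for 4 taxa: ln(4) ≈ 1.386
skewed = shannon_index([97, 1, 1, 1])   # one dominant taxon -> low diversity
print(round(even, 3), round(skewed, 3))
```

A drop in the index between groups is the kind of signal the Kruskal-Wallis comparison above tests for.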
P ≤ 0.05 was considered statistically significant. Differences in body weight, OPG, gross lesion scores, VH:CD ratio, histological lesion scores, and the relative abundance of specific bacterial taxa were analyzed using the nonparametric Kruskal–Wallis test. Spearman’s rank correlation was used to analyze the associations among body weight, gross lesion scores, histological lesion scores, VH:CD ratio, and OPG. Cronbach’s α test was used to evaluate the internal consistency of the nine subcategories, determining whether these histopathological changes collectively measure disease severity. Spearman’s rank correlation was also applied to evaluate the relationships between microbial taxa and metabolites. Moreover, VH:CD ratio and body weight were defined as environmental factors. The Mantel test was performed to investigate the relationships between environmental factors and microbial community. All statistical analyses were conducted using the Statistical Analysis System v9.4 (SAS Institute Inc., Cary, NC, USA) and R software, where * indicates P < 0.05, ** signifies P < 0.01, and *** represents P < 0.001. Results Clinical signs, gross lesions, and histological lesions During the study period, chickens in the control group exhibited no clinical signs associated with E. necatrix infection. In the Eimeria -challenged groups, group 3 exhibited diarrhea on 6 DPI, whereas group 4 developed hematochezia on 5 DPI, with two chickens succumbing on 6 DPI ( Table 2 ). Regarding mean weight gain, there was a significant difference between the control group and groups 3 and 4. For OPG, a significant difference was observed between groups 1 and 4. Furthermore, mean weight gain exhibited a strong negative correlation with OPG ( r = −0.933, P < 0.0001), indicating that higher OPG values were associated with lower weight gain. For gross lesions, the control group showed an average lesion score of 0.67 in both the jejunum and ileum ( Table 3 , Supplemental Table S1). 
In group 4, the average gross lesion score was 4.0 in the jejunum and 3.5 in the ileum. A score of 4 showed severe intestinal swelling and numerous mucosal petechiae ( Fig. 1 ). In the jejunum, the gross lesion scores of groups 3 and 4 were significantly different from those of the control group. Similarly, in the ileum, the gross lesion scores of groups 3 and 4 also exhibited significant differences compared with those of the control group. Regarding the VH:CD ratio, in the jejunum, the control group showed an average VH:CD ratio of 9.69, whereas group 4 showed a significantly lower ratio of 0.93 ( Table 3 , Supplemental Table S2). In the ileum, the average VH:CD ratio of the control group was 5.47, whereas group 4 showed a ratio of 1.58. In both the jejunum and ileum, the VH:CD ratios of groups 3 and 4 were significantly different from those of the control group. For histological lesions, the control group showed an average lesion score of 0 in both the jejunum and ileum. In the jejunum, the histological scores of groups 2–4 were significantly different from those of the control group. Among the infected groups, a significant difference in scores was detected between groups 1 and 4. In group 4, the jejunum exhibited marked mucosal loss, abundant schizonts and gametes, edema, and prominent eosinophilic infiltration ( Fig. 1 ). In the ileum, only group 4 showed a significant difference in histopathological scores compared with the control group. Similarly, a significant difference was also observed between groups 1 and 4 among the infected groups. We performed correlation analyses to evaluate whether the histopathological lesion scores exhibited a dose-response relationship with oocyst dosage. Results demonstrated a positive correlation between oocyst count and gamete count ( r = 0.77, P < 0.0001) as well as between oocyst count and schizont count ( r = 0.82, P < 0.0001) ( Fig. 2 ). 
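The VH:CD ratio reported above is the mean of villus height divided by crypt depth over paired histological measurements; a healthy mucosa (tall villi, shallow crypts) yields high ratios, whereas severe atrophy drives the ratio toward 1. A minimal sketch with invented measurements in micrometers (not the study's data):

```python
def mean_vh_cd(villus_heights_um, crypt_depths_um):
    """Mean villus-height:crypt-depth ratio over paired measurement sites."""
    ratios = [vh / cd for vh, cd in zip(villus_heights_um, crypt_depths_um)]
    return sum(ratios) / len(ratios)

healthy = mean_vh_cd([970, 950, 990], [100, 98, 102])    # tall villi, shallow crypts
atrophied = mean_vh_cd([120, 95, 110], [115, 105, 120])  # flattened villi, deepened crypts
print(round(healthy, 2), round(atrophied, 2))
```

The invented numbers were chosen so the two outputs bracket the jejunal values reported above (9.69 for the control group vs. 0.93 for group 4).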
The oocyst count positively correlated with all histological lesions ( P < 0.001), with the strongest correlation observed with hemorrhage ( r = 0.77) and the weakest with necrosis ( r = 0.52). To further explore the clinical significance of the histological scoring system, we analyzed the correlations between histological parameters and other disease indicators, including weight loss, gross lesions, and the VH:CD ratio. Mean weight gain demonstrated a negative correlation with all histological variables, ranging from −0.43 to −0.78 ( P < 0.0001), indicating that a greater lesion severity was associated with reduced weight gain. The strongest negative correlations were observed between mean weight gain and schizont count, gamete count, hemorrhage, and inflammation, suggesting that these factors play a vital role in weight loss. In contrast, mean weight gain showed the weakest correlation with necrosis, suggesting that necrosis is not a primary factor. For gross lesions, the correlation coefficients ranged from 0.42 to 0.70 ( P < 0.001), indicating moderate to strong positive correlations with all histopathological parameters. Remarkably, gross lesion scores strongly correlated with gamete count ( r = 0.70), schizont count ( r = 0.69), and inflammation severity ( r = 0.67), suggesting that these variables contribute significantly to the observed histological lesions. The VH:CD ratio also exhibited a strong negative correlation with all histological variables ( P < 0.0001), further confirming that villous atrophy was associated with increased histological damage ( Fig. 2 ). The strongest negative correlations were observed between the VH:CD ratio and schizont count ( r = −0.82), inflammation ( r = −0.77), and gamete count ( r = −0.74), whereas correlations with necrosis ( r = −0.60) and eosinophil infiltration ( r = −0.62) were relatively weaker. Cronbach’s α > 0.7 indicates good internal consistency among the 9 subcategories in the histopathological scoring system. 
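Cronbach's α, used here to check internal consistency of the nine-subcategory scoring system, is computed from the variances of the individual subcategory scores and the variance of their sums: α = k/(k−1) · (1 − Σvar(item)/var(total)). A self-contained sketch with made-up scores (not the study's data):

```python
def cronbach_alpha(items):
    """items: k lists (one per subcategory), each holding n scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of summed scores)."""
    k, n = len(items), len(items[0])
    def svar(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(svar(it) for it in items) / svar(totals))

# three subcategories scoring four tissue sections in lockstep -> alpha ~ 1
perfect = [[0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3]]
print(round(cronbach_alpha(perfect), 3))  # 1.0
```

Subcategories that disagree pull α down; values above 0.7, as in the jejunal (0.933) and ileal (0.873) results below, are conventionally read as acceptable internal consistency.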
The Cronbach’s α for the jejunum was 0.933, representing high reliability, and for the ileum, it was 0.873, indicating good consistency. These data suggest that the scoring system is stable and can reliably evaluate lesion severity. Spearman’s correlation analysis revealed a strong correlation between the histological scores of the jejunum and ileum ( r = 0.80, P < 0.0001), indicating a similar trend in both intestinal segments. The Wilcoxon signed-rank test yielded a highly significant result ( P < 0.0001), demonstrating a significant difference in histological scores between the jejunum and ileum, suggesting a potential systematic variation in lesion scoring across different intestinal segments. The intraclass correlation coefficient of 0.626 indicates moderate to good agreement between jejunal and ileal histological scores, although some degree of variation remains. Altered diversity of intestinal microflora We characterized the fecal microbiomes of 30 samples. After applying strict trimming criteria to exclude low-quality reads, 317,449 high-quality sequences were obtained, with an average of 10,581 reads per sample (range: 8,197–15,045 reads per sample) (Supplemental Table S3). The rarefaction curve was measured, and adequate sequence coverage was confirmed for all samples. The rank–abundance curve demonstrated that the overall microbial diversity and relative abundance patterns were similar across all groups; however, the experimental groups, particularly group 4, exhibited slightly lower diversity at the tail end, suggesting a potential reduction in the abundance of rare species due to E. necatrix infection ( Fig. 3 A). The alpha diversity indices revealed a significant difference among groups (Kruskal–Wallis, P = 0.019), with group 4 exhibiting significantly higher diversity than other groups, suggesting that E. necatrix infection at higher doses is associated with increased microbial diversity ( Fig. 3 B). 
Group 4 exhibited significantly higher diversity than the control and all other experimental groups. Groups 3 and 4 demonstrated the most remarkable differences, suggesting that microbial diversity increases significantly in higher dose infection groups. Beta diversity was calculated by cPCoA to measure the extent of the distinction between microbial communities. Beta diversity indices revealed a significant separation of microbial communities among groups ( P = 0.001), indicating that E. necatrix infection induces distinct shifts in the composition of gut microbiota, with the most pronounced differences observed in group 4 ( Fig. 3 C). Disturbed microflora composition To further explore the microbiota composition and distribution in the control and Eimeria -challenged groups, we performed phylum-, class-, family-, and species-level analyses (Supplemental Tables S4-S7). Fig. 4 depicts the bacterial distribution, characterized by the top 10 relative abundances of each taxon. Firmicutes was the dominant phylum in all groups ( Fig. 4 A), whereas group 4 exhibited a remarkable increase in the abundance of Proteobacteria, suggesting that E. necatrix infection at higher doses disrupts the gut microbiota balance, resulting in an overrepresentation of Proteobacteria, which is often associated with dysbiosis and inflammation. Bacilli was the dominant bacterial class across all groups ( Fig. 4 B), whereas group 4 exhibited a remarkable increase in the abundance of Gammaproteobacteria and Clostridia, suggesting that E. necatrix infection at higher doses alters the gut microbiota composition by promoting the growth of specific bacterial taxa. Lactobacillaceae was the predominant family in all groups ( Fig. 4 C), whereas group 4 exhibited a significant increase in the abundances of Enterobacteriaceae, Enterococcaceae, Clostridiaceae, and other minor families, suggesting that E. 
necatrix infection at higher doses results in a shift in the gut microbiota composition, favoring the growth of opportunistic or potentially pathogenic bacteria. At the species level ( Fig. 4 D), beneficial Lactobacillus species, including Ligilactobacillus salivarius, Limosilactobacillus reuteri , and Lactobacillus crispatus , dominated in the control group. Nevertheless, group 4 showed a significant increase in the abundance of pathogenic or opportunistic bacteria, such as Shigella flexneri, Shigella dysenteriae, Enterococcus cecorum, and Escherichia fergusonii ATCC 35469 , suggesting that E. necatrix infection disrupts the gut microbiota, reducing the abundance of beneficial Lactobacillus species and promoting that of potential pathogens associated with intestinal inflammation and dysbiosis. We next performed LEfSe modeling with a logarithmic LDA score cutoff of ≥4.0 to confirm both the statistical and taxonomic differences among the groups ( Fig. 5 ). A higher relative abundance of Limosilactobacillus urinaemulieris was detected in group 1 than in the control group, suggesting that E. necatrix infection influences its proliferation in the gut microbiota. Group 2 exhibited a higher abundance of Limosilactobacillus urinaemulieris but a lower abundance of Ligilactobacillus salivarius and Ligilactobacillus , suggesting that E. necatrix infection results in a decline in the abundance of beneficial Lactobacillus species, potentially disrupting the gut microbiota homeostasis. Group 3 showed a lower abundance of Ligilactobacillus salivarius but a higher abundance of Lactobacillus crispatus and Lactobacillus . Group 4 exhibited a significant increase in the abundance of pathogenic or opportunistic bacteria, including Shigella dysenteriae, Shigella flexneri, Escherichia fergusonii ATCC 35469 , and other Proteobacteria-associated taxa such as Enterobacteriaceae , Enterobacterales, and Gammaproteobacteria, suggesting that E. 
necatrix infection in group 4 promotes dysbiosis by favoring the growth of potentially harmful bacteria. Conversely, the abundances of beneficial bacteria such as Limosilactobacillus reuteri subsp. reuteri and taxa from Firmicutes, Bacilli, and Lactobacillaceae were reduced in group 4 compared with those in the control group, indicating a disruption of the gut microbiota balance. Overall, these findings suggest that E. necatrix infection, particularly at higher doses, induces a shift from beneficial Lactobacillus -dominated microbiota to an opportunistic pathogen-enriched microbiome, which may contribute to intestinal inflammation and disease progression. Fig. 5 E illustrates the differential abundance of bacterial taxa across groups 1–4. Pathogenic or opportunistic bacteria, including Shigella flexneri, Shigella dysenteriae, Escherichia fergusonii ATCC 35469, Enterococcus cecorum , and other Proteobacteria-associated taxa (Enterobacteriaceae , Gammaproteobacteria , and Enterobacterales ), were more abundant in group 4, suggesting that higher doses of E. necatrix infection promote the growth of potential pathogens. Conversely, beneficial bacteria such as Limosilactobacillus reuteri subsp. reuteri, Ligilactobacillus, Lactobacillus crispatus , and taxa from Firmicutes, Bacilli, and Lactobacillaceae were more abundant in groups 1 and 2, but their abundance decreased in group 4, indicating a loss of beneficial gut microbiota in more severe infection. The progressive shift from Lactobacillus -dominated microbiota to a Proteobacteria-enriched community in higher dose experimental groups suggests a dose-dependent microbial dysbiosis induced by E. necatrix infection. Body weight and villous changes correlated with taxonomic composition The relationships between environmental factors and microbial community were further investigated using the Mantel test ( Fig. 6 ), in which the VH:CD ratio and body weight were defined as environmental factors. 
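The Mantel test relates two symmetric distance matrices (here, distances in the environmental factors vs. distances in microbial composition) by correlating their off-diagonal entries and deriving a p-value from permutations of one matrix's rows and columns. A simplified pure-Python sketch on toy matrices (real analyses would use, e.g., `vegan::mantel` in R or scikit-bio):

```python
import random

def _pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def mantel(d1, d2, n_perm=999, seed=0):
    """Correlate off-diagonal entries of two symmetric n x n distance
    matrices; p-value from permuting rows/columns of d1."""
    n = len(d1)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    flat = lambda d, order: [d[order[i]][order[j]] for i, j in pairs]
    ident = list(range(n))
    r_obs = _pearson(flat(d1, ident), flat(d2, ident))
    rng = random.Random(seed)
    hits = 1  # the observed statistic counts as one permutation
    for _ in range(n_perm):
        perm = ident[:]
        rng.shuffle(perm)
        if _pearson(flat(d1, perm), flat(d2, ident)) >= r_obs:
            hits += 1
    return r_obs, hits / (n_perm + 1)

# toy matrices: d2 is exactly 2 * d1, so the observed correlation is 1
pts = [0, 1, 3, 6]
d1 = [[abs(a - b) for b in pts] for a in pts]
d2 = [[2 * v for v in row] for row in d1]
r, p = mantel(d1, d2, n_perm=199)
print(round(r, 3))
```

With only four objects the permutation space is tiny, so the toy p-value is coarse; the study's n = 30 samples give a far finer permutation distribution.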
Results revealed a significant association between body weight and taxonomic composition ( P < 0.01), suggesting that changes in microbial communities are related to variations in body weight. The VH:CD ratio and taxonomic composition also exhibited an association ( P < 0.05), indicating that microbial shifts correspond to alterations in intestinal morphology. These findings suggest that body weight and gut morphology (VH:CD ratio) significantly correlated with microbial composition, implying that E. necatrix infection-induced dysbiosis directly affects intestinal integrity and growth performance in chickens. E. necatrix infection altered the jejunal and ileal metabolome The negative-ion mode metabolomic analysis revealed that E. necatrix infection significantly disrupted the jejunal metabolic profiles. Compared with those in the control group, 345 dysregulated metabolites were found in group 1 ( P < 0.05), of which 134 metabolites were upregulated and 211 metabolites were downregulated (Supplemental Fig. S1). In group 2, 351 metabolites were dysregulated, of which 168 metabolites were upregulated and 183 metabolites were downregulated (Supplemental Fig. S2). In group 3, 1,030 metabolites were dysregulated, of which 226 metabolites were upregulated and 804 metabolites were downregulated (Supplemental Fig. S3). In group 4, 2,523 metabolites were dysregulated, of which 591 metabolites were upregulated and 1,932 metabolites were downregulated ( Fig. 7 A). All the different metabolites were matched against the KEGG database to obtain information on higher metabolite enrichment pathways. In group 4, the specific metabolites were primarily associated with glycolysis/glucogenesis; citrate cycle (tricarboxylic acid (TCA) cycle); alanine, aspartate, and glutamate metabolism; d-amino acid metabolism; pyruvate metabolism; and glyoxylate and dicarboxylate metabolism ( Fig. 7 B). 
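The tallies of up- and downregulated metabolites above follow a simple filtering rule: a significance threshold plus the direction of change relative to control. A hypothetical sketch of that bookkeeping — the data structure, metabolite names, and numbers are illustrative, not the study's data:

```python
def classify_metabolites(results, alpha=0.05):
    """results: dict metabolite -> (log2 fold change vs. control, p-value).
    Returns sorted (upregulated, downregulated) metabolite-name lists."""
    up = sorted(m for m, (fc, p) in results.items() if p < alpha and fc > 0)
    down = sorted(m for m, (fc, p) in results.items() if p < alpha and fc < 0)
    return up, down

toy = {
    "oxaloacetic acid":     (1.8, 0.004),   # elevated, significant
    "gamma-linolenic acid": (1.2, 0.030),   # elevated, significant
    "genipin":              (-2.1, 0.001),  # depleted, significant
    "citrate":              (0.4, 0.210),   # fold change but not significant
}
up, down = classify_metabolites(toy)
print(up, down)  # ['gamma-linolenic acid', 'oxaloacetic acid'] ['genipin']
```

The toy names echo metabolites discussed later (OAA, gamma-linolenic acid, genipin), but the fold changes and p-values here are invented for illustration.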
Spearman’s correlation analysis showed that certain microorganisms correlated with metabolites ( P < 0.05) ( Fig. 7 C). Some bacteria exhibited a positive correlation with specific metabolites, suggesting their involvement in the production or regulation of these metabolites. For instance, Shigella flexneri exhibited a strong positive correlation with certain metabolites, such as oxaloacetic acid (OAA). Similarly, Escherichia fergusonii highly positively correlated with gamma-linolenic acid. In contrast, other bacteria exhibited a negative correlation, potentially inhibiting the accumulation of metabolites or promoting their degradation. For instance, Limosilactobacillus reuteri showed a strong negative correlation with genipin. These results suggest that microbial composition influences the gut metabolic environment. Discussion This study investigated the effects of E. necatrix infection on intestinal tissues and established a novel histopathological scoring system for quantifying the severity of lesions. Results demonstrated that this scoring system significantly correlated with other disease-related factors and that intestinal lesions also significantly correlated with microbial community composition. Hence, this study provides a multilayered perspective by integrating pathological damage, gut microbiota, and host metabolic changes for a comprehensive analysis. E. necatrix primarily parasitizes the epithelium and lamina propria of the mid-intestine in chickens, where its extensive replication causes severe mucosal damage and hemorrhage ( Tyzzer et al., 1932 ). In this study, chickens in the high-dose infection group exhibited diarrhea and bloody stools, with mortality occurring within 6 DPI, indicating that the acute pathogenicity of E. necatrix is strongly dose-dependent. 
In our experimental design, the lower doses (10² and 10³ oocysts) reflect exposure doses commonly encountered in floor-reared flocks ( Hein, 1971 ), whereas the higher doses (10⁴ and 10⁵ oocysts) correspond to moderate challenge levels widely used in recent pathogenesis and vaccine evaluation studies ( Gao et al., 2021 ; Xu et al., 2022 ). These higher doses consistently induce quantifiable histopathological lesions and measurable shifts in the gut microbiota. Furthermore, the intestinal lesion scores and histological findings confirmed the dose-dependent nature of tissue destruction. In the highest-dose group, the intestinal villi were almost completely destroyed, and the VH:CD ratio significantly decreased to approximately 1, indicating severe villous atrophy and loss. Histological sections revealed widespread epithelial loss, villous necrosis, and hemorrhage in the lamina propria, accompanied by extensive infiltration of inflammatory cells. These histopathological changes emphasize the breakdown of the intestinal epithelial barrier, directly resulting in villous structural damage and atrophy. Previous studies have similarly reported that coccidial infections can cause villous atrophy and a reduction in the number of functional epithelial cells, ultimately decreasing the intestinal absorptive surface area ( Tyzzer et al., 1932 ). Severe coccidial lesions could damage the brush border of the villous epithelium, resulting in villous shortening or even fusion, significantly impairing the absorption capacity of the intestine ( Fernando et al., 1983 ; Allen, 1987 ). Studies also suggest that severe mucosal damage results in reduced growth performance and increased energy demands ( Turk, 1972 ; Adams et al., 1996 ). The nearly flattened villi observed in this study are consistent with these previous reports. Overall, the severity of histopathological lesions determines the extent of villous damage.
The parasite-induced cellular rupture and inflammatory response in the intestine are direct causes of villous atrophy, establishing a clear causal relationship between infection severity and epithelial damage. The histopathological scoring system used in this study demonstrated excellent stability and accuracy in evaluating intestinal lesions. Our histopathological scoring system is based on microscopic changes such as villous atrophy, crypt hyperplasia, and mucosal epithelial damage. This approach provides continuous and semiquantitative indicators to characterize lesion severity. Our results demonstrated that the scoring system effectively distinguished different levels of tissue damage. The low-dose group exhibited no significant difference compared with that in the control group, whereas the high-dose group displayed a marked decrease in the VH:CD ratio and extensive mucosal destruction. The strong dose-dependent relationship between histological scores and infection dose indicates that the scoring results are highly stable and reproducible. Moreover, the histological scores in this study correlated well with other disease indicators, such as body weight loss and oocyst shedding, confirming that this scoring system accurately reflects the actual impact of the disease on the host. Severe intestinal damage is accompanied by drastic changes in the gut microbiota. Our study demonstrated that as the infection dose increases, there is a significant shift in the gut microbial diversity and composition, with the most severe infection group exhibiting microbial restructuring. In the normal control group, commensal bacteria such as Lactobacillaceae dominated the gut microbiota. However, E. necatrix infection at a high dose resulted in a substantial decline in the abundance of these beneficial bacteria, whereas opportunistic pathogenic bacteria proliferated. 
For instance, we observed a significant reduction in the abundance of Lactobacillus species (e.g., Ligilactobacillus and Lactobacillus ) postinfection, whereas that of pathogenic bacteria such as Shigella, Escherichia , and Enterococcus increased substantially. The relative abundance of Proteobacteria also increased significantly, indicating gut dysbiosis. This imbalance is also supported by literature reports, as coccidial infections frequently deplete commensal bacteria and enrich opportunistic pathogens in the gut ( Lu et al., 2021 ). For instance, infection with E. tenella significantly reduces the abundance of beneficial anaerobic bacteria, such as Lactobacillus and Ruminococcus , and promotes the proliferation of Enterobacteriaceae (e.g., Escherichia coli ) in the intestine ( Kimura et al., 1976 ; Cui et al., 2017 ). In chickens, E. tenella is the most pathogenic Eimeria species that parasitizes the cecum, causing severe lesions and a marked reduction in the abundance of beneficial anaerobes. After E. tenella infection, there is a significant reduction in bacterial diversity in cecal contents ( Cui et al., 2017 ). In contrast, the infection with E. necatrix in this study resulted in a relatively increased alpha diversity in intestinal microbiota in the most severely infected group, because of the simultaneous proliferation of various opportunistic pathogenic bacteria. Although alpha diversity is generally considered a marker of a healthy microbiome, it can paradoxically increase under certain pathological conditions due to the expansion of opportunistic pathogens, as previously demonstrated in haem-induced dysbiosis and disease-related microbiota alterations ( Litvak et al., 2017 ; Monteagudo-Mera et al., 2023 ). It is proposed that severe E. necatrix infection causes extensive mucosal destruction and hemorrhage, allowing oxygen along with blood-derived nutrients such as heme, iron, and amino acids to diffuse into the inflamed microenvironment. 
This altered milieu preferentially supports the outgrowth of facultative opportunistic gut bacteria—particularly members of the Enterobacteriaceae family—thereby driving a pathological bloom and an apparent, yet artifactual, increase in alpha diversity. ( Stanley et al., 2014a ) Simultaneous infection with high doses of E. acervulina, E. maxima , and E. brunetti also causes a significant reduction in the abundance of Lactobacillus , accompanied by a substantial increase in the abundance of pathogenic bacteria such as Shigella, Escherichia , and Enterococcus ( Stanley et al., 2014b ). Similarly, Kim et al. observed that E. maxima infection alone reduced Lactobacillus populations in the ileal contents and altered the gut microbial structure ( Kim et al., 2015 ). Collectively, these data indicate that, irrespective of the site of infection, different Eimeria infections disrupt the balance of normal gut microbiota, shifting the composition from a beneficial bacterial dominance to a pathogenic bacterial overgrowth pattern. The changes in gut microbiota are causally linked to tissue pathology. On the one hand, parasite-induced damage to the intestinal mucosa directly alters the physiological environment and barrier function of the gut, such as increasing the epithelial permeability and modifying the secretion of antimicrobial molecules, thereby influencing microbial composition both directly and indirectly ( Zaiss and Harris, 2016 ). When widespread epithelial exfoliation occurs, normal commensal bacteria lose their habitat and nutrient sources, causing a decline in their population. Conversely, an impaired barrier provides an opportunity for pathogenic bacteria to adhere, colonize, and rapidly proliferate ( Williams, 2005 ). On the other hand, dysbiosis further exacerbates host inflammation and tissue damage, creating a vicious cycle ( Zhang et al., 2015 ). 
The correlation analysis in this study revealed a significant association between villous height reduction and microbial composition changes, indicating that severe tissue pathology is often accompanied by substantial microbial shifts. The gut microbiota and host metabolism are closely interconnected. When infection disrupts microbial homeostasis, the intestinal metabolic environment is also affected. The metabolomics analysis in our study showed that E. necatrix infection induces widespread changes in metabolite concentrations, with the number and types of affected metabolites increasing exponentially with the infection dose. In the severe infection group, thousands of metabolites exhibited significant alterations, the majority displaying a downward trend, indicating severe disruptions in host energy metabolism and nutrient utilization. Pathway enrichment analysis further identified the most affected metabolic pathways, including carbohydrate catabolism and utilization (glycolysis, the TCA cycle, and pyruvate metabolism) as well as multiple amino acid metabolic pathways. This finding suggests that localized intestinal infection and damage ultimately exert systemic effects, resulting in energy deficiency and protein metabolism disorders. This observation is consistent with findings from other studies on the metabolic effects of Eimeria infections. For instance, mixed Eimeria infections can cause significant alterations in host amino acid and fatty acid metabolic pathways ( Su et al., 2024 ). Multiple mechanisms contribute to these metabolic abnormalities. First, severe villous atrophy causes impaired intestinal digestion and absorption, restricting nutrient uptake and naturally altering host metabolite levels. Previous research has demonstrated that infections with E. necatrix and E. maxima significantly impair protein digestion and absorption ( Turk, 1972 ), and infection with E. 
acervulina can inhibit small intestinal disaccharidase activity and glucose transporter expression, reducing the ability of the host to break down and absorb carbohydrates ( Enigk and Dey-Hazra, 1976 ). When nutrient absorption and utilization are compromised, the host may mobilize its own reserves or break down tissues to compensate for the energy deficit, further disrupting amino acid and energy metabolism. Moreover, dysbiosis directly affects metabolic reactions within the intestinal lumen. Under normal conditions, commensal gut bacteria ferment undigested dietary fibers to produce beneficial metabolites such as short-chain fatty acids (SCFAs), which help maintain an acidic gut environment and provide an additional energy source. However, coccidial infections often disrupt this process. Eimeria infections result in a sharp decline in the abundance of acid-producing anaerobic bacteria, reducing SCFA production and increasing the pH of cecal contents ( Bjerrum et al., 2006 ). A higher intestinal pH creates an unfavorable environment for beneficial bacteria and promotes the growth of certain pathogenic bacteria, further exacerbating dysbiosis. Simultaneously, reduced SCFA levels imply that epithelial cells lose a crucial energy source and signaling molecules, potentially impairing epithelial renewal and immune regulation ( Leung et al., 2019 ). The proliferation of pathogenic bacteria may also result in the accumulation of certain toxic metabolites. For instance, our study showed that increased Shigella abundance was associated with elevated OAA levels in the gut. Because OAA is an essential intermediate in the TCA cycle ( Arnold and Finley, 2023 ), its increased levels suggest alterations in the metabolism of host cells or other gut microbes, creating a nutritional environment favorable for the growth of Shigella . Similarly, the proliferation of Escherichia species was accompanied by the accumulation of proinflammatory fatty acids, such as gamma-linolenic acid. 
In contrast, the depletion of beneficial commensal bacteria may prevent the metabolism of certain compounds that are typically processed by these microbes, resulting in their accumulation. These findings are consistent with previous reports linking Eimeria infections to oxidative stress ( Lu et al., 2021 ). After infection, intestinal epithelial cells produce excessive reactive oxygen species (ROS), which not only cause direct damage to intracellular lipids and proteins but also suppress appetite and reduce energy metabolism efficiency. Altogether, the gut microbiota imbalance and tissue damage induced by E. necatrix infection contribute to multifaceted metabolic disturbances in the host, including insufficient energy supply, impaired digestion and absorption, and a deficiency of beneficial metabolites. These metabolic alterations, in turn, affect both gut health and systemic conditions. Reports on coccidiosis have documented outcomes such as hypoglycemia, reduced blood lipid levels, and growth retardation ( Allen, 1988 ; Su et al., 2014 ). Therefore, the impact of Eimeria infection on the host can be viewed as a chain reaction, starting from tissue pathology, resulting in microbial dysbiosis, followed by metabolic disruption—each step reinforcing the next—ultimately causing clinical manifestations such as malnutrition and poor growth performance. This study has some limitations that should be acknowledged. First, the limited sample size may have reduced statistical power. Second, the experiment was performed under controlled laboratory conditions using a single chicken breed and a single infection model, which may differ from the complex, repeated low-dose exposure scenarios commonly observed in field conditions. Moreover, E. necatrix primarily infects laying hens, whereas the animals used in this study were younger. Although Hendrix broiler chicks provide a well-controlled experimental model, E. 
necatrix outbreaks in commercial settings predominantly afflict older laying hens. Broilers are selected for rapid growth and reach market weight before full immunological maturation, whereas layers exhibit a slower growth trajectory accompanied by a more developed adaptive immune repertoire and longer gastrointestinal transit time. These physiological contrasts influence gut physicochemical parameters—luminal oxygen tension, mucin composition, and digesta retention—that in turn shape microbial niches and fermentative outputs ( Stanley et al., 2014b ). Additionally, the metabolite profiles in layers are modulated by peak lay–associated endocrine changes and higher dietary calcium demands ( Khan et al., 2020 ). Consequently, the observed infection-induced shifts in alpha-diversity and bile-acid–linked metabolites in broilers may not be directly transferrable to laying hens. In light of these limitations, we caution against overgeneralisation of our findings. Further studies are warranted to validate whether the pathology-microbiome-metabolome relationships identified in broilers are conserved in adult layers. Future research should also incorporate larger sample sizes and assess the applicability of this histopathological scoring system to infections caused by other Eimeria species. Careful evaluation of result consistency across different rearing conditions is essential before broader application. Conclusion This study integrates pathological, microbial, and metabolic data to propose a hypothetical model linking parasitic infection to metabolic dysregulation ( Fig. 8 ). First, E. necatrix undergoes extensive replication in the intestine, directly causing severe mucosal damage and hemorrhagic lesions, ultimately resulting in near-complete villous destruction. As the disease progresses, villous height significantly decreases, crypt depth increases, and the VH:CD ratio decreases sharply. 
A reduced VH:CD ratio indicates a drastic reduction in intestinal absorptive surface area, forcing the host to mobilize additional energy and nutrients to accelerate mucosal cell renewal in an attempt to compensate for the damage. Simultaneously, severe tissue lesions compromise intestinal barrier function, making it easier for pathogenic bacteria to invade. These histopathological injuries trigger changes in the intestinal microenvironment, causing a sharp decline in the abundance of normal commensal bacteria due to alterations in nutrient availability and habitat conditions, whereas opportunistic pathogens proliferate excessively. This microbial imbalance further exacerbates intestinal inflammation, creating a vicious cycle between tissue damage and microbial disruption. Ultimately, the collapse of the gut microbiota profoundly affects host metabolism. Mucosal damage-induced nutrient malabsorption, coupled with alterations in microbial metabolic byproducts, causes widespread metabolic abnormalities in the host. Funding This project was supported by the National Science and Technology Council, Taiwan , through research funding ( 111-2313-B-020-009-MY3 ). Declaration of competing interest The author(s) declare(s) no conflicts of interest. Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.psj.2025.105412 . | [
"ADAMS",
"ALLEN",
"ALLEN",
"ARNOLD",
"BJERRUM",
"BOLYEN",
"CAI",
"CHAPMAN",
"CUI",
"ENIGK",
"FERNANDO",
"GARCIAGUTIERREZ",
"HEIN",
"JOHNSON",
"KAMADA",
"KIM",
"KIMURA",
"LEUNG",
"LITVAK",
"LIU",
"LU",
"NOVALRIVAS",
"SALZMAN",
"SEGATA",
"SHIRLEY",
"STANLEY",
"SU",
... |
05ca3c75fde14c6c977e16095efd2d73_Insights on the molecular mechanism of anti-inflammatory effect of formula from Islamic traditional _10.1016_j.jtcme.2018.09.004.xml | Insights on the molecular mechanism of anti-inflammatory effect of formula from Islamic traditional medicine: An in-silico study | [
"Elgazar, Abdullah A.",
"Knany, Hamada Ramadan",
"Ali, Mohammed Soliman"
] | Background and aim
Traditional medicine is an important source for drug discovery. However, many challenges face the scientific community in developing novel drugs from it. To investigate the rationale behind this medical legacy of centuries of precious knowledge, we aimed to perform virtual screening to identify potential leads from the medieval textbook The Canon of Medicine.
Experimental procedure
A database of chemical constituents of plants mentioned within the book was built and docked against different molecular targets associated with inflammation, such as phospholipase A2, p38 alpha mitogen activated protein kinase, cyclooxygenase-2 and leukotriene B4 dehydrogenase. A literature survey was then carried out to determine the consistency of the traditional uses and molecular docking results with current knowledge obtained from previous studies and reports.
Results and conclusion
The in-silico study revealed the ability of several chemical constituents of the plants under investigation to bind effectively to different targets associated with inflammation, consistent with previous reports, indicating that Islamic traditional medicine can be considered a reliable and promising source for developing new anti-inflammatory agents with low toxicity and minimal side effects. | Abbreviations PLA2 Phospholipase A2 p38 MAPK p38 Mitogen Activated Protein Kinase Cox-2 Cyclooxygenase-2 LTB4DHR Leukotriene B4 Dehydrogenase AP-1 Activator Protein-1 ATF-2 Activating Transcription Factor 2 TM Traditional Medicine VS Virtual Screening SBVS Structure Based Virtual Screening PDB Protein Data Bank DL Human Drug-Likeness OB Oral Bioavailability ADME Absorption, Distribution, Metabolism, and Excretion iNOS Inducible Nitric Oxide Synthase LPS Lipopolysaccharide PGE2 Prostaglandin E2 TNF-α Tumor Necrosis Factor Alpha IL-1β Interleukin-1 Beta FCA Freund's Complete Adjuvant IL-12 Interleukin-12 NO Nitric Oxide NF-κB Nuclear Factor-Kappa B JNK c-Jun N-Terminal Kinase ERK1/2 Extracellular Signal-Regulated Kinase 1/2 1 Introduction Inflammation is the body's attempt at removing harmful stimuli, including damaged cells, irritants or pathogens, and at starting the healing process. It leads to the release of pro-inflammatory mediators into the blood or affected tissues, causing increased blood flow to the site of injury or infection, which may result in redness and warmth; chemicals such as prostaglandins also cause fluid to leak into the tissues, resulting in swelling. This protective process may stimulate nerves and cause pain. 1 However, a continuous inflammatory state is believed to be responsible for the pathogenesis of several diseases such as metabolic disorders, several types of cancer and Alzheimer's disease.
Consequently, targeting the chemical mediators that control inflammation is a promising approach for the prevention and management of several diseases. 2–4 Chemical mediators of the inflammatory process include a variety of substances originating in the plasma and in the cells of uninjured tissue, and possibly from the damaged tissue itself. Moreover, several enzymes, such as phospholipase A2, p38 mitogen activated protein kinase (p38 MAPK), cyclooxygenase-2 (Cox-2) and leukotriene B4 dehydrogenase (LTB4DHR), are associated with the inflammatory process. Phospholipase A2 is a member of a family of esterases involved in a wide array of physiological and pathological processes. It catalyzes the hydrolysis of phospholipids, producing free fatty acids and lysophospholipids that are converted to arachidonic acid. Subsequently, arachidonic acid is converted to prostaglandin H2, the main precursor for the production of prostaglandins and pain-producing substances. Since it plays an important role in the generation of pro-inflammatory lipid mediators, it is considered one of the therapeutic targets of interest for developing new anti-inflammatory agents. 5–8 During inflammation, multiple intracellular signaling cascades activate the p38 MAPK pathway, which controls the recruitment of leukocytes to sites of inflammation. 9 Furthermore, p38 MAPK can regulate several inflammatory pathways: by activating transcription factors such as activator protein-1 (AP-1) and activating transcription factor-2 (ATF-2), 10 and by direct phosphorylation of phospholipase A2 to initiate the arachidonic acid pathway; 11 it was also found that the expression of COX-2 and PGE2 is sensitive to p38 MAPK blockade, 12 , 13 and recent work has shown that regulation of COX-2 activity may depend on MAPK activation of the Nuclear Factor-Kappa B (NF-κB) pathway. 14 , 15 The expression of COX-2 is induced selectively by pro-inflammatory cytokines at the site of inflammation.
It is involved in the conversion of arachidonic acid to prostanoids. Since COX-2 has been localized primarily to inflammatory cells and tissues, many drugs were developed to inhibit this enzyme selectively and achieved excellent clinical outcomes. 16 , 17 After the initiation of the inflammatory response, intrinsic mechanisms resolve the inflammatory action through the production of anti-inflammatory lipoxins, which act as a stop signal and stimulate the release of other resolving factors; however, eicosanoid-inactivating enzymes such as LTB4DHR can limit the anti-inflammatory effect of such mediators, leading to extended inflammatory conditions. Interestingly, clinically used drugs such as indomethacin and diclofenac were found to exert their actions not only by inhibiting the cyclooxygenase enzyme but also by preventing the degradation of anti-inflammatory eicosanoids through inhibition of LTB4DHR, which suggests the importance of regulating this target in the treatment of inflammatory diseases. 18–20 Traditional medicine (TM) is the total of the knowledge, skills, and practices, based on the theories, beliefs, and experiences indigenous to diverse cultures, used in the maintenance of health as well as in the prevention, diagnosis, improvement or treatment of physical and mental illness. 21 The Canon of Medicine, written by the medieval physician Ibn-Sina (Avicenna), was the main medical reference until the 18th century; 22 , 23 however, advances in medical knowledge and the introduction of synthesized medications led to the neglect of this invaluable legacy. 24 While TM has proved its efficacy over a long period of experimentation and observation, the lack of tools and technology for the identification and standardization of the components of TM still poses a great challenge for the development of new therapeutics from traditional medicine.
Moreover, TM is based on an empirical philosophy that defines the states of health and disease by considering the body as a whole unit and the balance between the different forces controlling its physiological functions. This is contrary to the western medicine point of view, which relies more on understanding the etiology of diseases by investigating the molecular cascades that induce and govern the pathogenesis of different ailments. 25 , 26 Despite the accumulation of a huge amount of phytochemical information addressing the isolation of chemical constituents of medicinal plants, it is almost impossible to evaluate all of their biological activities, especially with the increasing number of therapeutic targets; this is where virtual screening (VS) can make an important contribution. 27–29 VS is a specialized discipline that uses computational methods to simulate drug-receptor interactions virtually, an approach known as structure based virtual screening (SBVS); such a tool provides a chance to identify the phytochemicals responsible for the effectiveness of traditional medicine. 30 This work aims to employ SBVS to investigate the molecular mechanisms behind the anti-inflammatory effect of compounds derived from a formula mentioned in The Canon of Medicine, by assessing their ability to bind the aforementioned molecular targets; we will also shed light on evidence from the literature that supports this claim. 2 Materials and methods 2.1 Choosing herbal formula The formula was chosen from The Canon of Medicine, 1593 manuscript digitized by the American University of Beirut ( http://ddc.aub.edu.lb/projects/saab/avicenna/ ), under treatise 3, on general management of bites (stings) and driving away (repelling) insects, and on signs of snake bites and their types. Avicenna claimed that these formulae were used for the treatment of scorpion bites, which are associated with the incidence of inflammation.
To translate the plant names from Arabic to Latin, the appendix accompanying the electronic version ( http://ddc.aub.edu.lb/projects/saab/avicenna/appendix_1.html ), in addition to other Arabic-Latin dictionaries, was used. 31 2.2 Retrieving chemical constituents of the plants The active constituents of the plants under investigation were retrieved mainly from the Dictionary of Natural Products ( http://dnp.chemnetbase.com ), the KNApSAcK Core System metabolomics database ( http://kanaya.naist.jp/knapsack_jsp/top.html ), and a review of the available phytochemical literature. 2.3 Building database of the chemical constituents of the plants The compounds were drawn using ChemBioDraw, CambridgeSoft Corporation (version 14), as neutral species with the correct stereochemistry and saved in SDF format. The files were exported to MOE software 2015 and converted to 3D structures using the QuickPrep module without changing the stereochemical aspects of the compounds; energy minimization was done with the MMFF94x force field to an RMS gradient of 0.1 kcal/mol/Å². The compounds were then exported to MONA software 32 ( http://www.biosolveit.de/Mona/ ) and saved as one file in mol2 format. 33 2.4 Applying Lipinski's rule of five The database was subjected to filtration using the Lipinski's rule of five filter in MONA, a guideline for choosing compounds with a greater chance of yielding successful drugs with good bioavailability. 34 2.5 Choosing the molecular targets The Therapeutic Target Database (TTD) ( http://bidd.nus.edu.sg/group/cjttd/ ) was consulted for target selection and validation. Targets representing different steps of inflammation were chosen, and their X-ray crystal structures were retrieved from the Protein Data Bank ( www.pdb.org ) with the following PDB IDs: phospholipase A2 (PLA2): 1DB4; p38 α mitogen activated protein kinase (p38 MAPK): 1OUK; leukotriene B4 dehydrogenase (LTB4DHR): 2DM6; and cyclooxygenase-2 (Cox-2): 3NT1.
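The rule-of-five filtration step described in section 2.4 can be illustrated with a minimal sketch. This is not the MONA implementation; the function and the approximate property values for emodin (one of the hits discussed later) are supplied by hand here, whereas a real workflow would compute them with a cheminformatics toolkit. The reading of the rule used below (at most one violation allowed) is one common convention and is an assumption.

```python
# Illustrative stand-in for a Lipinski "rule of five" filter; not the MONA tool.

def passes_lipinski(mw, logp, h_donors, h_acceptors):
    """Return True if a compound violates at most one of Lipinski's criteria."""
    violations = sum([
        mw > 500,          # molecular weight over 500 Da
        logp > 5,          # octanol-water partition coefficient over 5
        h_donors > 5,      # more than 5 hydrogen-bond donors
        h_acceptors > 10,  # more than 10 hydrogen-bond acceptors
    ])
    return violations <= 1

# Approximate values for emodin (C15H10O5): MW 270.24, logP ~2.7, 3 donors, 5 acceptors.
print(passes_lipinski(270.24, 2.7, 3, 5))  # True
```

Screening a whole compound database then reduces to applying this predicate to each entry, which is essentially what the filtration step in section 2.4 does in bulk.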
2.6 Preparation of receptor for virtual screening The essential amino acids required for good binding affinity were determined from the PoseView-generated interaction diagrams provided by the PDB, and the bound reference ligand was used to define the binding site, using the default options in the receptor preparation wizard of LeadIT software ( https://www.biosolveit.de/LeadIT/ ). The binding site was defined as 6.5 Å around the ligand in the active site; water molecules were removed if they had no role in the interaction of the ligand with the active site. 2.7 Molecular docking studies The molecular docking was performed using the FlexX docking engine in LeadIT, which uses a robust incremental construction algorithm. The software was validated by redocking the co-crystallized ligand into the active site of each target, requiring a root-mean-square deviation (RMSD) value of at most one and the ability of the software to reproduce the same interactions observed experimentally by X-ray crystallography. The database of compounds was loaded and docked into the active site using the default options; the maximum numbers of solutions per iteration and per fragmentation were set to 200, and the top 3 poses for each compound were kept for visualization. The best 10 compounds in terms of binding affinity and the ability to bind the essential amino acids in the active site were selected for post-docking analysis; ADME parameters, namely human drug-likeness (DL), oral bioavailability (OB) and Caco-2 permeability (Caco-2), were also employed to estimate their pharmacokinetics using the TCMSP database.
35 3 Results and discussion To investigate the claims of an anti-inflammatory effect in Islamic traditional medicine, we chose a formula from the Canon of Medicine, based on reports indicating that plants used for the treatment of scorpion bites usually possess anti-inflammatory effects. 28 The plants were translated to their corresponding Latin names ( Supplementary Table 1 ) to facilitate access to the phytochemical information found in the literature. One hundred and fifty-seven compounds derived from the plants under investigation were collected from different phytochemical databases and the available literature. ADME filtration revealed that 153 compounds obeyed Lipinski's rule of five. SBVS was validated by redocking the co-crystallized ligands; in all cases, the software reproduced the experimental binding mode with an RMSD of one or less, and HYDE assessment predicted binding affinities similar to those reported in the Protein Data Bank ( Table 1 ). The 3D structures in the compound database were docked into the active sites of the selected targets, and the best 10 compounds in terms of binding affinity and the ability to interact with the essential amino acids in the binding site were selected for post-docking analysis. Twenty-two compounds could bind effectively to at least one target; eleven of them were able to bind more than one putative target, which suggests that the formula exerts its action by targeting different steps in the inflammation pathway. Remarkably, of the nine plants under investigation, twelve compounds ( Supplementary Fig. 1 ) were found in just three plants, Mentha pulegium, Rumex patientia and Taraxacum officinale ( Table 2 ); their pharmacokinetic properties can be found in Table 3 .
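The post-docking triage described above (keep compounds that both score well and contact essential residues) can be sketched as follows. The compound names, scores and residue sets below are invented for illustration and are not results from the study.

```python
# Illustrative hit selection: filter poses by essential-residue contact,
# then rank by binding score (more negative = better binding).

def select_hits(poses, essential, top_n=10):
    """poses: list of (compound, score_kcal_mol, contacted_residues)."""
    hits = [p for p in poses if set(p[2]) & essential]   # must touch an essential residue
    hits.sort(key=lambda p: p[1])                        # ascending: best energy first
    return [p[0] for p in hits[:top_n]]

poses = [
    ("cpd5",  -28.1, {"GLY29", "HIS47"}),
    ("cpd8",  -26.4, {"ASP48"}),
    ("cpd12", -25.0, {"GLY31"}),
    ("cpd99", -30.0, {"ALA1"}),   # good score but no essential contact: rejected
]
print(select_hits(poses, {"GLY29", "GLY31", "HIS47", "ASP48"}, top_n=3))
# ['cpd5', 'cpd8', 'cpd12']
```

The two-criterion design mirrors the text: a favourable docking score alone is not enough, since a pose that misses the catalytically important residues is unlikely to inhibit the enzyme.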
When docking the compounds into the binding site of PLA2, the best 3 compounds in terms of binding energy ( 5, 8 and 12 ) interacted with the Gly29, Gly31, His47 or Asp48 residues ( Fig. 1 ), consistent with other reports indicating the importance of these interactions for achieving an inhibitory effect on this enzyme. 36 , 37 For p38 MAPK, post-docking analysis of the 3 best compounds by binding energy ( 4, 5 and 7 ) showed interactions with amino acids such as Met109 and Gly110 ( Fig. 2 ), which is indicative of the ability to inhibit this type of kinase. 38 Analyzing the interactions between compounds ( 7 , 8 and 10 ) and the active site of the COX-2 crystal structure revealed that they could interact with two essential amino acids, Ala120 and Tyr355, by hydrogen bonding, and form hydrophobic interactions with Val349, Ala527 and Leu352, residues shown to be important for the proper positioning required for enzyme activity ( Fig. 3 ). 39 Finally, the interactions between compounds ( 8 , 9 and 10 ) and the binding site of LTB4DHR demonstrate their ability to bind Arg56, Tyr262 and Val272 ( Fig. 4 ), which prevents activation of the enzyme by restricting access of its normal substrate, 15-oxo-PGE2. 40 The best three compounds by binding affinity were almost superimposed in the active sites of the 4 targets, as presented in Fig. 5 , indicating that they share the same binding mode. Collectively, our in-silico study showed that the active compounds in the formula could inhibit the different molecular targets under investigation; the proposed mechanism of action summarized in Fig.
6 suggests that some of the compounds could inactivate PLA2 either by direct inhibition or by preventing its phosphorylation through blockade of the MAPK pathway, which would also prevent the release of the cytokines responsible for chemotaxis and further progression of inflammation; the compounds might additionally stop the production of prostaglandins and the hydrolysis of anti-inflammatory lipoxins by inhibiting COX-2 and LTB4DHR, respectively. It is worth noting that the literature contains many reports addressing the anti-inflammatory effects of the plants in the formula prescribed by Avicenna. Ferula szowitsiana and its sesquiterpene coumarins significantly decreased inflammation in carrageenan-induced paw edema; 41 in addition, methyl galbanate, an active compound found in the plant, inhibited nitric oxide (NO) production and inducible nitric oxide synthase (iNOS) expression in lipopolysaccharide (LPS)-stimulated RAW264.7 cells at a concentration of 10 μM. 42–44 Ferula asafoetida, a closely related species, was reported to have analgesic activity through inhibition of prostaglandin production. 45 Aqueous and alcoholic extracts of Rumex patientia are reported to possess anti-inflammatory effects against carrageenan-induced paw edema; they also inhibited capillary permeability induced by xylol and hyaluronidase and were found to be as effective as indomethacin. 46 Phytochemical investigation of this plant revealed the presence of anthraquinones such as emodin and chrysophanol, which are known to have anti-inflammatory effects. 47–49 Taraxacum officinale was found to exert its anti-inflammatory effect through inhibition of NO production, COX-2 expression and/or its antioxidative activity.
In agreement with our in-silico study, 50 an extract of the leaves of T. officinale was able to down-regulate nitric oxide, PGE2 and pro-inflammatory cytokines and reduced the expression of iNOS and COX-2 via inactivation of the MAPK signaling pathway in LPS-stimulated RAW264.7 cells. 51 Taraxasterol, a pentacyclic triterpene isolated from T. officinale, significantly inhibited the overproduction of serum TNF-α, IL-1β and PGE2 in Freund's Complete Adjuvant (FCA)-induced arthritis in rats. 52 Mentha pulegium is commonly used traditionally for treating snake bites; although there are no reports directly addressing the anti-inflammatory effect of this plant, some studies have proposed that its anti-inflammatory activity is due to its phenolic content, which acts as a strong antioxidant. 53 , 54 On the other hand, pedalitin, a flavonoid isolated from M. pulegium, showed anti-inflammatory properties by decreasing the production of NO and pro-inflammatory cytokines such as TNF-α and IL-12; 55 again, this is in concordance with our molecular docking investigation, which showed the ability of pedalitin to inhibit PLA2 and p38 MAPK, both of which play critical roles in the production of inflammatory cytokines. 56–58 Roots of Cichorium intybus, a variety of C. endivia, demonstrated a significant dose-dependent decrease in paw edema, which can be explained by the observed decrease in serum TNF-α, IL-6 and IL-1β levels in comparison with the control group; 59 also, 8-deoxylactucin, a major sesquiterpene found in chicory extract, was reported to be an inhibitor of COX-2 induction. 60 Bryonia alba is a medicinal plant rich in cucurbitacins and their glycosides; while there are no studies discussing its anti-inflammatory effect directly, several studies have indicated the anti-inflammatory effect of this class of compounds.
61–64 The anti-inflammatory effect of Myrtus communis has been investigated in several reports; its extract exhibited strong inhibitory activity against IL-8 secretion, 65 and the essential oil of Myrtus communis reduces leukocyte migration to damaged tissue and exhibits anti-inflammatory activity. 66 Another study attributed the anti-inflammatory effect of oligomeric nonprenylated acylphloroglucinols isolated from M. communis to inhibition of eicosanoid biosynthesis. 67 Cyperus longus is one of the most common plants in traditional medicine; although there are no studies discussing its anti-inflammatory effect, biological evaluation of its extract and of compounds isolated from it showed antioxidant, immunomodulatory and cytotoxic effects. 68–70 The chemical classes of the hits suggested by the molecular docking study were flavonoids, phenolic acids and anthraquinones. The available literature shows that pedalitin, a known inhibitor of the 5-lipoxygenase enzyme, 71 decreased nitrite production in LPS-treated RAW 264.7 cells at a concentration of 10 μg/ml. 72 Jaceosidin showed significant anti-inflammatory activity at 40 μg by inhibiting NF-κB activity and NO production and by suppressing the expression of inducible nitric oxide synthase (iNOS) in LPS-induced RAW264.7 cells; it also inhibited COX-2 expression and NF-κB activation and markedly reduced TNF-α, IL-1β and prostaglandin E2 (PGE2) levels in a carrageenan-induced model in mice.
73 , 74 Luteolin is a common bioflavonoid with prominent biological activities; several reports have indicated that it exerts its anti-inflammatory effect through inhibition of MAPKs, COX-2, NF-κB and leukotriene B4, and through suppression of the release of several other cytokines. 75 While there are no reports addressing the anti-inflammatory effect of thymonin, an extract of Zataria multiflora seeds, which contain this flavonoid, was reported to decrease the serum levels of nitric oxide, nitrite, PLA2 and histamine in sensitized guinea pigs. 76 There were no specific studies addressing the anti-inflammatory effect of caftaric acid, or cis-caffeoyl tartaric acid; however, phenolic acids are well known to be good inhibitors of COX and to interfere with inflammasome pathways. 77 Emodin and rhabarberone are anthraquinones reported to possess anti-inflammatory effects by reducing the expression of several cytokines such as TNF-α and IL-2, as well as COX-2 expression and NF-κB transcription. 78–80 In another study, physcion and emodin caused a 65–68% reduction of edema volume at 40 mg/kg in carrageenan-induced rat paw inflammation and decreased iNOS production in LPS-stimulated macrophages in a dose-dependent manner. 81 The glycoside of physcion (physcion 8-O-β-glucopyranoside) was able to inhibit a wide array of inflammatory markers in different pathways, including c-Jun N-terminal kinase (JNK), p38 MAPK and extracellular signal-regulated kinase (ERK)1/2, in a collagen-induced arthritis model. In the same context, chrysophanol protected against pro-inflammatory cytokine expression and release in LPS-induced inflammation in RAW264.7 cells and showed a protective effect in dextran sulphate-induced colitis in rats by modulating the activity of NF-κB/caspase-1.
82–84 4 Conclusion In this work, we investigated the ability of several compounds, derived from a formula found in the Canon of Medicine, to bind distinct molecular targets associated with inflammation, namely phospholipase A2, p38 alpha mitogen activated protein kinase, cyclooxygenase-2 and leukotriene B4 dehydrogenase, using structure based virtual screening. Eleven compounds from three plants could interact with more than one target, suggesting that the formula exerts its action through synergism; moreover, consulting the literature revealed the consistency of our in-silico results with several previous in-vitro and in-vivo studies, indicating that there is a rationale behind choosing these plants to treat inflammatory conditions. However, more clinical trials, standardization and safety studies are required before they can be employed in clinical practice. Conflicts of interest The authors declare no conflict of interest. Appendix A Supplementary data The following is the Supplementary data to this article: supplementary Chemical structures of the compounds achieving the best binding energy with the selected targets: 1) Pedalitin 2) Jaceosidin 3) Thymonin 4) Emodin 5) Physcion 6) Rhabarberone 7) Chrysophanol 8) Cis-caffeoyl tartaric acid 9) Caftaric acid 10) Luteolin 11) Hydroxycinnamic acid 12) Methylemodin. Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.jtcme.2018.09.004 .
"MEDZHITOV",
"HUNTER",
"HOTAMISLIGIL",
"LIBBY",
"MUKHERJEE",
"FUNK",
"SHRIDAS",
"MURAKAMI",
"COULTHARD",
"HERLAAR",
"THWIN",
"KRAMER",
"PYPE",
"LAPORTE",
"LIN",
"SEIBERT",
"FURST",
"LAWRENCE",
"CLISH",
"HORI",
"FOKUNANG",
"GALE",
"STRATHERN",
"ALNAQIB",
"JIANG",
"WA... |
03bd2de844d842c5bcede79d442dff6f_Computational insights of phytochemical-driven disruption of RNA-dependent RNA polymerase-mediated r_10.1016_j.nmni.2021.100878.xml | Computational insights of phytochemical-driven disruption of RNA-dependent RNA polymerase-mediated replication of coronavirus: a strategic treatment plan against coronavirus disease 2019 | [
"Balkrishna, A.",
"Mittal, R.",
"Sharma, G.",
"Arya, V."
] | The current pandemic of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has raised global health concerns. RNA-dependent RNA polymerase (RdRp) is the prime component of the viral replication/proliferation machinery and is considered a potential drug target against SARS-CoV-2. The present study investigated the anti-RdRp activity of phytochemicals against SARS-CoV-2 infection. Virtual ligand screening was carried out to determine potent compounds against RdRp. Molecular docking and a molecular dynamics (MD) simulation study were employed to evaluate the spatial affinity of the selected phytochemicals for the active sites of RdRp. The structural stability of the target compounds was determined using root mean square deviation computational analysis, and drug-like abilities were investigated using ADMET. Bond distances between ligand and receptor were measured to predict the strength of the interaction. Aloe, azadirachtin, columbin, cirsilineol, nimbiol, nimbocinol and sage exhibited the highest binding affinities and interacted with the active sites of RdRp, surpassing the ability of chloroquine, lamivudine, favipiravir and remdesivir to target the same. All the natural metabolites exhibited stable conformations during MD simulation of 101 ns at 310 K. Kinetic, potential and electrostatic energies were lowest for the natural metabolites in comparison with the synthetic analogues. Structural deviations and fluctuations were also lowest for the target phytochemicals. The physicochemical and biological properties of these compounds further validated their drug-like character. Non-bonded distances were found to be short enough to form hydrogen bonds or hydrophobic interactions, which revealed that these target compounds can bind strongly to RdRp. The study identified potential phytochemicals that may disrupt the replication domain of SARS-CoV-2 by hindering RdRp.
We therefore anticipate that the current findings could be valuable for the development of an efficient preventive/therapeutic expedient against COVID-19. | Introduction Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), first emerged in Wuhan, China in December 2019 and has since become a global pandemic. SARS-CoV-2 is a member of the betacoronavirus genus and exhibits 94.6% sequence homology with conserved domains of other members of the Coronaviridae family, namely severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle East respiratory syndrome coronavirus (MERS-CoV) [ 1 , 2 ]. Cough, fever and shortness of breath were observed to be the prominent pathological symptoms associated with SARS-CoV-2 infection. The higher frequency of hospitalization, the mortality rate and the non-availability of preventive/therapeutic treatment strategies have posed a serious threat to the world population during the current outbreak [ 3 , 4 ]. The pathophysiology of COVID-19 has not yet been fully explored. SARS-CoV-2 is a single-stranded positive-sense RNA virus employing a multi-subunit replication and transcription set-up. The genome size of SARS-CoV-2 is expected to be in the range of 29.8–29.9 kb. The 5ʹ end of the genome comprises orf1ab , which encodes the orf1ab polyprotein, whereas the genes lying at the 3ʹ end encode structural proteins such as the surface (S), envelope (E), membrane (M) and nucleocapsid (N) proteins. RNA-dependent RNA polymerase (RdRp), which makes several copies of the viral RNA genome, plays a vital role in viral replication and proliferation [ 5–8 ]. Gao et al in 2020 revealed, on the basis of cryo-electron microscopy, that RdRp exists as a complex of nsp12 (residues S1 to Q932) together with the smaller nsp7 (residues S1 to Q83) and nsp8 (residues A1 to Q198) subunits.
RdRp resembles a right-hand structure with finger, thumb and palm-like domains. Therefore, RdRp could be considered a prominent drug target for SARS-CoV-2 [ 9 ]. A new therapeutic approach targeting RdRp could yield promising results in overcoming the virus outbreak. The present focus is keenly on the development of novel therapeutic options comprising anti-viral drugs and vaccines [ 10 , 11 ]. Several national and international research organizations are currently engaged in the development of vaccines to prevent or treat COVID-19, but to date no effective treatment strategy is available to target the virus outbreak. The present study aimed to analyse, through an in silico study, the therapeutic potential of phytochemicals in disrupting or hindering the conserved domain of RdRp responsible for SARS-CoV-2 replication and proliferation. Materials and methods Protein and ligand structure preparation After screening a set of medicinal plants on the basis of an ancient medicinal text, 15 different natural moieties—belonging to Tinospora cordifolia , Azadirachta indica , Ocimum sanctum , Origanum vulgare , Salvia officinalis , Allium sativum , Melissa officinalis , Zingiber officinale , Aloe vera and Curcuma longa —were selected by virtue of their ability to exhibit anti-viral activity against several viral strains [ 12 ] and taken forward to target COVID-19 using computational studies. Molecular details of the chosen compounds were retrieved from PubChem ( https://pubchem.ncbi.nlm.nih.gov ) and the ZINC database in multiple formats such as pdb, mol2 and sdf. Three-dimensional (3D) structures of the selected ligands were generated from canonical SMILES through the RBPS Web Portal. Crystallographic data of the ligands were examined in Cartesian coordinates. The protein structure of SARS-CoV-2 RdRp was retrieved from the Protein Data Bank with PDB ID 6M71 ( https://www.rcsb.org/structure/6M71 ). 3D structures were visualized using UCSF Chimera to obtain actual insights into the protein structure.
Experimental settings for molecular docking A Lamarckian genetic algorithm with 250 000 energy evaluations was implemented to compute the binding affinity of the natural moieties for RdRp and predict their efficiency in targeting the replication of SARS-CoV-2. The molecular docking study was carried out using the AutoDock Tool of the AutoDock 4.2.6 package. 3D structures of the protein and ligands were saved in PDBQT format. Ligands were placed in grid boxes of varying dimensions for each docking run; the grid points for the autogrid maps were set to 71 × 53 × 25 Å and 77 × 46 × 25 Å for RdRp interactions with ligands. The torsional degrees of freedom were also defined, and to attain the highest number of poses, 20 different modes were chosen with an exhaustiveness of 8. Docking at different sites may help determine the best possible ligand–receptor interaction. Docked ligand poses can be aligned over the receptor using UCSF Chimera as the visualization tool, to analyse the ligand–receptor interaction and anticipate the affinity of the ligands for the target receptors [ 13 , 14 ]. Binding energy can be computed as: ΔG Binding = G Complex − G Protein − G Ligand . Molecular dynamic simulation Using NAMD software, molecular dynamic simulation of the docked protein–ligand complexes was carried out to predict their stability. CHARMM36 force field settings were chosen to run the MD simulation, and protein structure files were obtained with the help of visual molecular dynamics (VMD) software. The protein–ligand complexes were then solvated in cubic water boxes with transferable intermolecular interactions, and the box size was selected to ensure a distance of 5 Å between the protein and the box edges. Initially, energy minimization for 50 000 steps of steepest descent was run using NAMD. The system was then simulated under an NVT set-up.
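The binding-energy bookkeeping above is a simple difference of free energies. A minimal sketch, with placeholder energy values that are not results from the study:

```python
# ΔG_binding = G_complex − G_protein − G_ligand (all in kcal/mol).
# A negative ΔG indicates a favourable (binding) association.

def binding_energy(g_complex, g_protein, g_ligand):
    """Free energy of binding from the component free energies."""
    return g_complex - g_protein - g_ligand

dg = binding_energy(-1250.0, -1200.0, -42.5)
print(dg)  # -7.5
```

The sign convention matters when ranking compounds: the complex must be lower in energy than its separated parts for binding to be favourable, so more negative ΔG values correspond to stronger predicted binders.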
For the optimum run, a temperature of 310 K and a simulated time duration of 10 ns were fixed. After the completion of the run, the simulated complexes were visualized using VMD and were further analysed to predict any possible change in the dynamics of protein–ligand complexes. The trajectories of simulated complexes were plotted to predict their stability, and electrostatic, kinetic and potential energy were computed using VMD and NAMD software [ 15 ]. Root mean square deviation computation Root mean square deviation (RMSD) values represent the flexibility of protein structure and thereby reflect its mobility in trajectory. Higher RMSD values point to the higher mobility of ligands and vice versa. During the ligand–receptor interaction, the ligand poses generated using AutoDock Tools were analysed to determine their structural stability on the basis of their deviation from the native ligand pose. By using RMSD algorithms, results obtained from docking experiments can be validated, potentially helping to enhance docking performance [ 16 ]. RMSD can be calculated as: RMSD = √((1/N) ∑ᵢ dᵢ²), with the sum taken over the N atomic deviations dᵢ. Active site prediction UCSF Chimera was used for the visualization of ligand–receptor binding pose. Docked structures of ligands with RdRp were analysed and the amino acid residues involved in ligand–receptor interaction were studied. Amino acid residues involved in interaction within a proximity of 5 Å were depicted using the command line. Inhibitory potential of natural metabolites was further compared with synthetic anti-viral and anti-malarial compounds currently available on the market to fight against the disease. Similarly, a comparison of the active site of RdRp targeted by natural moieties and synthetic analogues was carried out [ 17 ].
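The RMSD definition above translates directly into a few lines of code. A sketch with hypothetical per-atom deviations:

```python
import math

def rmsd(deviations):
    """RMSD = sqrt((1/N) * sum(d_i ** 2)) over N atomic deviations d_i (Å)."""
    n = len(deviations)
    return math.sqrt(sum(d * d for d in deviations) / n)

# Hypothetical deviations between a docked pose and its reference pose;
# values near or below ~3 Å would be read as 'near native'.
print(round(rmsd([1.0, 2.0, 2.0]), 3))  # → 1.732
```

Real pipelines compute the dᵢ from superposed atomic coordinates; the arithmetic afterwards is exactly this.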
Bond distance determination between active site and ligands Using UCSF Chimera, docked structures were visualized and bond distances between the active site and ligands were computed to predict the type of bond formation and to analyse the strength of the interaction between RdRp and the respective ligands. The presence of either hydrogen or hydrophobic bonds can be predicted based upon the bond distance [ 18 ]. ADME toxicity prediction Computational prediction of pharmacological and biological activity of ligands was carried out on the basis of Lipinski's Rule of Five. Assessment of absorption, distribution, metabolism and excretion was performed virtually using the ADMETlab Web Server. AMES Toxicity, lethal dosage and skin sensitization parameters were also calculated [ 19 ]. Results Protein and ligand 3D structure preparation for molecular docking Canonical SMILES of selected ligands obtained from PubChem were loaded onto the RBPS Web Portal and their respective 3D structures were obtained, which were then converted into monomeric units in both mol2 and pdb file formats. Popularly known as an anti-malarial drug, chloroquine was used as a reference synthetic analogue for the comparison along with anti-viral agents such as favipiravir, remdesivir and lamivudine. The 3D structure of RdRp with PDB ID 6M71 was taken from the RCSB Web Portal, analysed using UCSF Chimera and saved in pdb format. Both the ligands and receptors were energy minimized and saved in PDBQT format using the AutoDock Tool to carry out molecular docking at different receptor sites, thereby predicting the active site and binding affinities. Molecular docking study Using the AutoDock Tool, phytochemical ligands were docked with RdRp. Docked compounds were ranked on the basis of maximum occupancy of binding pockets along with minimum Gibbs free energy. Ligands were docked with RdRp at different grid points (71 × 53 × 25 Å and 77 × 46 × 25 Å) and the binding affinities were represented as kcal/mol.
Docked poses with binding affinities weaker than –5 kcal/mol were regarded as negligible and were not considered for further evaluation. From a long list of phytochemicals only seven were observed to exhibit remarkable inhibitory activity. Aloe, azadirachtin, nimbiol, nimbocinol, cirsilineol, columbin and sage—belonging to Aloe vera , Azadirachta indica , Ocimum sanctum , Tinospora cordifolia and Salvia officinalis —were observed to bind with RdRp with binding affinities of –7.1, –8.2, –7.3, –7.4, –6.9, –7.5 and –8.7 kcal/mol at the active site ( Table 1 ). Binding affinities exhibited by these phytochemicals were better than those of the synthetic analogues chloroquine, lamivudine, favipiravir and remdesivir. When RdRp was docked at 71 × 53 × 25 Å grid points, aloe and azadirachtin exhibited binding affinities of –7.1 and –8.2 kcal/mol whereas at 77 × 46 × 25 Å grid points, binding affinities were –6.7 and –7.1 kcal/mol, respectively. Similarly, nimbiol and nimbocinol exhibited binding affinities of –7.3 and –7.4 kcal/mol at 71 × 53 × 25 Å grid points whereas at 77 × 46 × 25 Å grid points, the results were –6.8 and –7.2 kcal/mol, respectively ( Fig. 1 ). These results indicate that RdRp at 71 × 53 × 25 Å grid points offered the best binding site for the ligands and that this active site can be considered a potential target for possible drug candidates aiming to inhibit RdRp and fight the current SARS-CoV-2 outbreak. Molecular dynamic study and computation of free energy MD simulation of SARS-CoV-2 RdRp with aloe, nimbiol, nimbocinol, sage, azadirachtin, cirsilineol and columbin was carried out using NAMD software, with favipiravir and remdesivir used as references for the study. MD simulation was carried out for 101 ns at 310 K. All the natural metabolites remained stable in complex with SARS-CoV-2 RdRp for 101 ns.
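The screening step described above (discarding poses weaker than −5 kcal/mol and ranking the remainder by Gibbs free energy) can be sketched with the affinities reported in the text:

```python
# Binding affinities (kcal/mol) at the 71 x 53 x 25 Å grid, as reported.
affinities = {
    "aloe": -7.1, "azadirachtin": -8.2, "nimbiol": -7.3, "nimbocinol": -7.4,
    "cirsilineol": -6.9, "columbin": -7.5, "sage": -8.7,
    "chloroquine": -5.8, "favipiravir": -5.1, "lamivudine": -5.0,
    "remdesivir": -7.3,
}

def rank_hits(scores, cutoff=-5.0):
    """Discard poses weaker than the cutoff, then rank the remainder
    from most to least negative binding affinity."""
    hits = {name: dg for name, dg in scores.items() if dg <= cutoff}
    return sorted(hits, key=hits.get)

print(rank_hits(affinities)[:2])  # → ['sage', 'azadirachtin']
```

The ordering reproduces the paper's headline result: sage and azadirachtin outrank all four synthetic references at this grid.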
Trajectory analysis indicated that all of the above-mentioned compounds remained stable during the simulation process and exhibited similar patterns to those of the synthetic analogues remdesivir and favipiravir ( Fig. 2 ). Furthermore, kinetic, potential and electrostatic energies were computed to compare their binding affinities, and the results obtained from the simulation study were observed to be in concordance with those of the molecular docking study ( Table 2 ). All the natural metabolites selected for the study exhibited better results in comparison to the synthetic analogues. Docked conformer stability analysis The flexibility and overall stability of docked complexes were further predicted using RMSD. Docked conformers obtained from the AutoDock Tool after carrying out the molecular docking process were evaluated for their residual deviation and fluctuation with respect to the reference pose. RMSD values near or equivalent to 3 Å were considered ‘near native’ relative to the native pose. Ligands that exhibited the highest binding affinity with RdRp were selectively used for analysing their potency to produce stable docked conformers. Results revealed that aloe, azadirachtin, nimbiol, nimbocinol, cirsilineol, columbin and sage exhibited RMSD values of 1.3, 3.2, 1.8, 1.6, 1.5, 1.3 and 2.8 Å, respectively. Except for azadirachtin and sage, the docked poses of the above-mentioned ligands can be considered ‘near native’ with respect to their reference poses ( Fig. 3 ). Higher binding affinity does not assure a stable docked conformer. Azadirachtin and sage showed slightly higher RMSD values, but the results obtained from the MD simulation study supported their efficiency to bind and target RdRp.
However, other parameters also need to be explored to determine their stability index and because they exhibited high binding affinity and remained stable during the MD Simulation studies, their potential to inhibit RdRp cannot be ruled out just because of higher RMSD values. RdRp active site residues Putative ligand binding site of RdRp was identified using UCSF Chimera. The region within a 5 Å proximity of where the respective ligands bound to the receptor domain was selected and the amino acid residues constituting that particular region were predicted to determine which one of them was involved in the interaction with receptor; this will also help in predicting the targeted site to tackle the current pandemic. Identification of amino acid residues involved in the docking process provides further biological insight for future drug discoveries and the availability of cryptic pockets in RdRp may help in analysing the possible space for drug binding. RdRp at 71 × 53 × 25 Å grid points forms the binding site with aloe, azadirachtin, columbin, cirsilineol, nimbiol, nimbocinol and sage comprising GLU350, PRO677, THR362, PHE326, ASN459; LEU460, GLU350, SER318, ARG349, PHE396; LEU460, ASN459, PRO461, THR319, ASN628; LEU460, SER318, PRO461, THR319, VAL315; LEU460, SER318, VAL315, PRO461, ASN628; PHE396, LEU460, VAL675, CYS395, THR394; VAL320, PHE321, THR319, LEU251 and SER255, with binding affinities of –7.1, –8.2, –7.5, –6.9, –7.3, –7.4 and –8.7 kcal/mol, respectively. Whereas chloroquine, favipiravir, lamivudine and remdesivir form the binding site with RdRp amino acid residues that include LEU460, PRO677, CYS395, TYR456, ASN628; PRO461, ASN628, GLY678, PRO677, SER664; TRP268, SER255, LYS267, VAL320, THR319; VAL675, ARG457, PRO169, SER664, LEU172, with binding affinities of –5.8, –5.1, –5.0 and –7.3 kcal/mol, respectively, at 71 × 53 × 25 Å grid points ( Fig. 4 ). 
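The 5 Å residue selection performed above in UCSF Chimera amounts to a simple distance filter. A Python sketch; the residue names echo the text, but the coordinates are invented for illustration:

```python
import math

def residues_near_ligand(residues, ligand_atoms, cutoff=5.0):
    """Return names of residues whose representative atom lies within
    `cutoff` Å of any ligand atom (Euclidean distance)."""
    return [name for name, coord in residues
            if any(math.dist(coord, atom) <= cutoff for atom in ligand_atoms)]

# Invented coordinates (Å), for illustration only.
residues = [("LEU460", (0.0, 0.0, 0.0)),
            ("SER318", (3.0, 4.0, 0.0)),
            ("ARG999", (10.0, 10.0, 10.0))]
ligand = [(1.0, 0.0, 0.0)]
print(residues_near_ligand(residues, ligand))  # → ['LEU460', 'SER318']
```

A real implementation would test every atom of every residue rather than one representative atom per residue, but the cutoff logic is the same.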
All seven of the above-mentioned phytochemicals exhibited higher binding affinity with RdRp than the chemical analogues, and even the active site residues involved in the interaction were observed to be similar, which clearly revealed that these natural moieties may play a pivotal role in targeting RdRp to control the SARS-CoV-2 outbreak. Bond distance prediction Determination of the bond type present, either hydrogen bond or other non-covalent interactions, may help in predicting the potency of phytochemicals to target RdRp. Side chains of amino acid residues configuring the active site of RdRp act as electron donors and thereby form either hydrogen bonds or are involved in hydrophobic interactions. A hydrogen bond is formed when the bond distance between the amino acid residues and ligand is 3.5 Å or less, whereas a distance of up to 4.5 Å leads to hydrophobic or non-covalent interactions. Phytochemicals exhibiting high binding affinity with the predicted catalytic domain of RdRp were further evaluated to analyse the bond strength between ligands and receptors. Average bond distances between aloe, cirsilineol, nimbiol, nimbocinol, sage and the respective amino acid residues of RdRp were observed to be less than 3.5 Å. These ligands represented hydrogen bond interactions, thereby exhibiting maximum and strong interactions with the receptor in comparison to azadirachtin and columbin, which showed bond distances up to 4.5 Å. These two ligands were found to be sharing hydrophobic or non-covalent interactions with RdRp ( Table 3 ). ADMET The ADMET (Absorption, Distribution, Metabolism, Excretion, Toxicity)-based drug scanning tool ADMETlab Web Server predicted the physiochemical and biological properties of the selected potential inhibitors against RdRp. Aloe is a dihydroxyanthraquinone with molecular weight 270.24 g/mol and a Log P value of 1.365; it contains three hydrogen bond donor atoms and five hydrogen bond acceptor atoms.
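The Rule of Five bookkeeping behind these descriptor values can be sketched as a simple violation count (standard Lipinski thresholds: MW ≤ 500 g/mol, Log P ≤ 5, at most 5 H-bond donors, at most 10 H-bond acceptors). The aloe and azadirachtin numbers below are the ones reported in the text; note the article's drug-likeness call rests on the ADMETlab thresholds, not on this count alone:

```python
def lipinski_violations(mw: float, logp: float, donors: int, acceptors: int) -> int:
    """Count Rule of Five violations: MW > 500 g/mol, Log P > 5,
    more than 5 H-bond donors, more than 10 H-bond acceptors."""
    return sum([mw > 500, logp > 5, donors > 5, acceptors > 10])

# Descriptor values as reported in the text (Table 4).
print(lipinski_violations(270.24, 1.365, 3, 5))   # aloe → 0
print(lipinski_violations(720.7, 0.203, 3, 16))   # azadirachtin → 2 (MW, acceptors)
```

As is common for large natural products, azadirachtin falls outside strict Rule of Five space on two counts, which is why the web-server assessment with its additional thresholds matters here.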
The other potential inhibitors azadirachtin, cirsilineol, columbin, nimbiol, nimbocinol and sage are a terpenoid, a flavone, a diterpenoid, a terpenoid and triterpenoids (nimbocinol and sage), with molecular weights of 720.7, 344.3, 358.4, 272.4, 408.5 and 409.6 g/mol; Log P values of 0.203, 2.89, 2.53, 4.37, 4.8 and 4.3; the compounds contain 3, 7, 1, 1, 1 and 1 hydrogen bond donor atoms and 16, 7, 6, 2, 4 and 4 hydrogen bond acceptor atoms, respectively ( Table 4 ). To further validate their drug-like properties, all the above-mentioned inhibitors were subjected to the ADMETlab Web Server to determine their absorption, distribution, metabolism and excretion. All four parameters were investigated on the basis of several thresholds. All seven predicted inhibitors of RdRp passed the ADMETlab threshold of drug capability. On further evaluation of toxicological parameters, all the ligands were observed to be negative in the AMES mutagenicity test, and their non-skin-sensitizing profile ruled out the possibility of inducing any skin-related allergic reactions ( Table 5 ). Discussion Despite several efforts to discover an anti-viral drug against COVID-19, no US Food and Drug Administration-approved preventive/therapeutic drug has yet reached the market. Rather, the scientific community is still struggling to identify the prime target in SARS-CoV-2 to control the current outbreak. SARS-CoV-2 comprises a multi-subunit replication and transcription machinery. Non-structural proteins formed by the cleavage of viral polyproteins together configure the replication and transcription mechanism of the virus. RdRp facilitates the synthesis of SARS-CoV-2 and thereby plays a pivotal role in its survival and transmission. RdRp may be considered a potential target for several drugs competing to eliminate the complications induced by the viral attack. Gao et al.
in 2020 released the structure of the RdRp from SARS-CoV-2 and emphasized its role as an efficient target to control the outbreak by inhibiting virus proliferation [ 9 ]. Several drugs have been analysed to determine the potential of synthetic analogues to target SARS-CoV-2. Wang et al. and Yao et al. evaluated the potency of remdesivir and chloroquine to inhibit SARS-CoV-2 on the basis of their ability to target SARS-CoV and MERS-CoV [ 20 , 21 ], but neither of these passed the clinical trials against the disease [ 22 ]. Studies by Masui et al. in 2017 and Ganjhu et al. in 2015 have also emphasized the potential role of herbal remedies in targeting several viral diseases including influenza, rabies and enterovirus [ 23 , 24 ]. The present study was designed to investigate the role of phytochemicals in targeting SARS-CoV-2 replication. We screened cost-effective moieties belonging to plants which can be easily grown even in wild conditions; furthermore, the processing of these medicinal plants does not require highly specialized technology for extraction and purification. These medicinal plants have remained an integral part of our dietary routine for centuries and no adverse effects have been reported so far with their usage. Hence, an in silico study was carried out to hinder virus replication by disrupting the domain of RdRp with potential inhibitors such as aloe, azadirachtin, columbin, cirsilineol, nimbiol, nimbocinol and sage belonging to Aloe vera , Azadirachta indica , Ocimum sanctum , Tinospora cordifolia and Salvia officinalis . A molecular docking study was carried out at different domains of RdRp to analyse their inhibitory potential. Seven potential ligands were screened from long lists of phytochemicals on the basis of their binding affinity.
Disruption of RdRp at 71 × 53 × 25 Å grid points by the above-mentioned ligands revealed that LEU460, ASN459, TYR456, PRO461 and PRO677 could form the potential active site that needs to be targeted to control the virus proliferation. The inhibitory potential of these molecules was more profound than that of chloroquine, lamivudine, favipiravir and remdesivir. Apart from determining their inhibitory potential, our effort of simultaneous exploration of their structural and functional characteristic features produced encouraging results. Docked conformers were observed to be structurally stable with respect to their native pose on the basis of their RMSD values. Our proposed drug-like molecules were observed to form hydrogen bonds in addition to hydrophobic bonds with crucial amino acid residues of RdRp, thereby hindering their function, which is essential for virus replication. Moreover, physiochemical and biological properties of these phyto-ligands have validated their potency to be acclaimed as potential inhibitors of RdRp. The set of compounds screened in this study could potentially act as a mono-therapy or as part of a combination therapeutic approach against SARS-CoV-2. Computational analysis on the basis of a docking algorithm revealed that these seven phytochemicals may disrupt the replication domain of SARS-CoV-2 to target the virus attack. These phytochemicals could act as structural and functional templates for the de novo synthesis of drugs as an efficient possible treatment strategy against COVID-19. Conclusion Computational interpretation of the present study revealed that the conserved domain of RdRp can be efficiently targeted by phytochemicals belonging to Aloe vera , Azadirachta indica , Ocimum sanctum , Tinospora cordifolia and Salvia officinalis to overcome the current COVID-19 pandemic. Medicinal plants have been used for centuries to treat viral infections, and the present study supports this potential in the case of COVID-19.
On the basis of our computational study we can conclude that phytochemicals can significantly hinder the replication domain of SARS-CoV-2 through their efficient binding, structural stability and drug-like physiochemical and biological properties, which were even better than those of the synthetic analogues currently in use against the disease. We believe that insights gained by in silico analysis in the current study will prove valuable in further exploring the targeted site of SARS-CoV-2 and potential inhibitors. Funding No funding was received from external sources to carry out the work. Authors' contribution VP and RM have framed the entire work under the expert guidance of AB. Conflict of interest The authors have declared that there are no conflicts of interest. Acknowledgements The authors are grateful to Param Pujya Swami Ramdev Ji for his immense support and guidance. They also acknowledge the help and support provided by the Patanjali Research Institute, Haridwar, India and express their gratitude to Ajeet Chauhan for graphical support. The authors would like to thank Amit Saini for IT support, and Lalit Mohan Ji and Gagan Kumar Ji for their administrative support.
"RAOULT",
"VELAVAN",
"LI",
"YU",
"KHAILANY",
"MISRA",
"SMITH",
"CHANG",
"GAO",
"DONG",
"ZHU",
"BALKRISHNA",
"FERREIRA",
"MORRIS",
"CARREGAL",
"BELL",
"PETTERSEN",
"HWANG",
"BENET",
"WANG",
"YAO",
"DWIVEDY",
"MASUI",
"GANJHU"
] |
f47543d1ebd54944ad6f70473a8a1b74_Surgical management of carotid stump syndrome_10.1016_j.jvscit.2023.101342.xml | Surgical management of carotid stump syndrome | [
"Lucero, Leah",
"Dhawan, Darian Siddhartha",
"O'Banion, Leigh Ann"
] | null | First reported in the literature in the 1970s, the phenomenon termed “carotid stump syndrome” is defined by persistent cerebral ischemic events in the setting of an ipsilateral internal carotid artery (ICA) occlusion. The rare syndrome is hypothesized to occur due to embolism from the residual ICA stump into the middle cerebral artery territory via the external carotid artery. Treatment, although limited by case report experiences, has traditionally been ligation of the ICA with endarterectomy and patching of the common and external carotid arteries. The present patient provided written informed consent for the report of her clinical data and surgical video ( Supplementary Video , online only). Patient background Our patient is a 76-year-old woman who suffered a left-sided middle cerebral artery stroke with resulting right hemiparesis in 2019. Imaging after her initial stroke showed complete occlusion of the left ICA. She was medically managed with aspirin and clopidogrel. Subsequently, she presented on three separate occasions with recurrent hemiparesis and evidence of ongoing embolic strokes shown on magnetic resonance imaging. The magnetic resonance imaging scans taken during the patient's initial admission showed complete occlusion of the left ICA. Magnetic resonance angiography was also performed, which showed findings classic for carotid stump syndrome, including multiple large collateral vessels from the external carotid artery filling the ipsilateral hemisphere and reconstitution of the distal most segment of the supraclinoid left ICA, which was filling via contralateral perfusion. In addition, turbulent flow was visualized in the stump of the ICA on digital subtraction angiography. A cerebral angiogram (which was performed on a subsequent admission) showed occlusion of the left ICA at its origin, with a wisp of flow and no distal reconstitution. Attempts made to cross the lesion by our neurointerventional colleagues were unsuccessful. 
An attempt at crossing such an occlusion poses an increased risk of thromboembolism without any added benefit; this was performed before our patient was referred to vascular surgery. Thus, we cannot comment on the decision or thought process. Due to ongoing symptoms and evidence of progressive embolic infarcts of the left hemisphere, in addition to the classic imaging findings, we diagnosed carotid stump syndrome. She was brought to the operating room for planned transection and ligation of her ICA with external carotid endarterectomy and patch angioplasty. Description of procedure After making an incision along the anterior border of the sternocleidomastoid muscle, dissection was carried down to the carotid sheath. After opening the sheath, the jugular vein was exposed and the facial vein suture ligated and divided. The carotid artery was identified and the common and distal internal and external carotid arteries were encircled with vessel loops. The superior thyroid artery was identified and encircled with a free 2-0 silk tie. The patient was then systemically heparinized. Three minutes after heparin administration, the external and superior thyroid arteries were controlled. The common carotid artery and ICA were subsequently controlled. No decrease in cerebral oximetry occurred with clamping, and we elected not to use a shunt during the case. The ICA was sharply transected using a no. 11 blade ∼1 cm distal to its origin. No back bleeding occurred from the ICA, which confirmed its occlusion. No atherosclerotic or thrombotic debris was visible within the ICA. We then oversewed the stump of the ICA with 5-0 Prolene suture and applied two large clips. The oversewing was done very carefully to balance the risk of stenosing the flow channel vs the risk of creating a new potential area for embolization. We then extended the arteriotomy with Potts scissors onto the external carotid artery and then down onto the common carotid artery, trimming away any excess tissue.
We inspected the orifice of the external and common carotid arteries and, again, found no visible atherosclerotic or thrombotic material. Minimal plaque was present, consistent with the previous imaging findings. Because the ICA was occluded, we considered closing the external and common carotid arteries primarily. However, we elected to perform bovine patch angioplasty in an attempt to avoid narrowing the external carotid artery, because our patient was female with small vessels, and we believed she had a higher risk of stenosis with primary closure. We used a 0.8 × 8-cm LeMaitre bovine pericardial patch, which was sewn in place using 6-0 Prolene suture. Before completing the anastomosis, we back bled both the external and superior thyroid arteries and confirmed good inflow from the common carotid artery. We then flushed with heparinized saline, completed the anastomosis, and restored flow to the external carotid artery. Next, we obtained hemostasis and administered 30 mg of protamine to reverse the residual effects of the heparin. Once we were satisfied with the hemostasis, a Jackson-Pratt drain was placed in the surgical bed, and the incision was closed with 3-0 Vicryl suture to the platysma and a 4-0 Monocryl subcuticular suture to the skin. Finally, 2-octyl cyanoacrylate glue (Dermabond; Ethicon) was applied to the skin. The patient was neurologically intact on awakening and in recovery. She was discharged home on postoperative day 1. Patient follow-up The patient had an uneventful postoperative course and recovery. She remains asymptomatic with a patent common carotid artery and external carotid artery with no evidence of hemodynamically significant stenosis. More than 2 years have passed since the operation was performed in February 2021. Discussion Although a recognized phenomenon, the optimal management of carotid stump syndrome has been a source of debate. 
Comparative studies have evaluated the risks and benefits of medical vs surgical treatment (with the surgical standard of care being ICA exclusion and external carotid artery endarterectomy). 1 However, due to the rarity of this syndrome, there is a paucity of well-powered studies to support either approach. 2 The case highlighted in the Supplementary Video (online only) adds to the growing body of research supporting surgical intervention as a safe, effective treatment of carotid stump syndrome—especially for patients with recurrent ischemic events despite optimal medical management. Disclosures None. Appendix (online only) Supplementary Video (online only) Surgical management of carotid stump syndrome. MCA, Middle cerebral artery; MR, magnetic resonance. Appendix Additional material for this article may be found online at https://www.jvscit.org .
"KUMAR",
"HRBAC"
] |
d423118f5e1340a1a91159b7cb62d516_Data on the identification of VRK2 as a mediator of PD-1 function_10.1016_j.dib.2021.107168.xml | Data on the identification of VRK2 as a mediator of PD-1 function | [
"Peled, Michael",
"Adam, Kieran",
"Mor, Adam"
] | Therapeutic programmed cell death protein 1 (PD-1) blockade enhances T cell mediated anti-tumor immunity, but many patients do not respond, and a significant proportion develops inflammatory toxicities. To develop better therapeutics and to understand the signaling pathways downstream of PD-1 we performed phosphoproteomic interrogation of PD-1 to identify key mediators of PD-1 signaling. Hereby, supporting data of the research article “VRK2 inhibition synergizes with PD-1 blockade to improve T cell responses” are presented. In the primary publication, we proposed that VRK2 is a unique therapeutic target and that combination of VRK2 inhibitors with PD-1 blockade may improve cancer immunotherapy. Here, we provide data on the effect of other kinases on PD-1 signaling utilizing shRNA knockdown of the different kinases in Jurkat T cells. In addition, we used VRK2 inhibition by a pharmacologic approach in the MC38 tumor mouse model, to show the combined outcome of anti-PD-1 treatment with VRK2 inhibition. These data provide additional targets downstream of PD-1 and point toward methods of testing the effect of the inhibition of these targets on tumor progression in vivo. | Specifications Table Subject Immunology Specific subject area Cancer immunology, checkpoint inhibitor, immunotherapy, cell signaling Type of data Graph Figure How data were acquired RNA expression was determined by RT-PCR in QuantStudio 3 RT-PCR system. Concentrations of IL-2 were determined by specific ELISA kits in Tecan microplate readers. Cell viability was measured with PrestoBlue (Invitrogen) in Tecan microplate readers. T cell subsets were evaluated by flow cytometry in a MACSQuant Analyzer 10. Statistical analysis was performed using GraphPad Prism 7 software. Data format Analysed Parameters for data collection Jurkat T cell lines were treated with anti-CD3 + anti-CD28 or anti-CD3 + anti-CD28 + recombinant PD-L2 coated beads. MC38 tumor cells were treated with Puromycin or AZD-7762.
In vivo : mice harbouring MC38 tumors were orally administered with vehicle control, AZD-7762, Prexasertib and PF-477,736 (mice in each condition). In a second experiment, mice were orally administered with vehicle control or AZD-7762 and injected IP with vehicle control or 200 μg anti-PD-1 at day 0 and day 7 (4 mice in each group), two independent experiments. Description of data collection Total RNA was extracted using the RNeasy Plus Mini Kit (Qiagen). Media was collected following centrifugation at 500 g for 5 min to remove floating cells from the media, followed by IL-2 ELISA (Biolegend). Tumor growth was monitored by external measurement using callipers. The volume of tumor masses was calculated with the following equation: 0.5 × Length × Width². Spleens were harvested 17 days post-treatment initiation, followed by tissue dissociation on a mesh. Splenic cells were stained with anti-mouse antibodies for flow cytometry analysis. Data source location Columbia Center for Translational Immunology, Columbia University Medical Center, New York, NY 10,032, USA Data accessibility Data is provided in the article and the related research article. Related research article Michael Peled, Kieran Adam, Adam Mor. VRK2 inhibition synergizes with PD-1 blockade to improve T cell responses. Immunology Letters 2021 May;233:42–47. Value of the Data • The data point to new protein targets downstream of PD-1 that can serve as drug targets for cancer immunotherapy. • These data can help researchers to evaluate novel therapeutic targets downstream of PD-1, whose blockade may help cancer patients. • These data can spark a search for specific pharmacological VRK2 inhibitors. 1 Data Description We have recently described potential kinases that may mediate PD-1 signaling in T lymphocytes based on phosphoproteome analysis of PD-1-activated T cells [1] .
To assess if these kinases indeed facilitate signaling downstream of PD-1, knock-down of these kinases was induced by lentiviral transduction of kinase-specific shRNAs in Jurkat T cell lines ( Fig. 1 A). While all the cell lines secreted IL-2 following T cell receptor (TCR) activation with anti-CD3 + anti-CD28 antibodies ( Fig. 1 B), a combined TCR and PD-1 stimulation demonstrated that vaccinia-related kinase 2 (VRK2) knocked-down cells were the least susceptible to PD-1-mediated inhibition of IL-2 secretion ( Fig. 1 C). Following these results, we assessed if pharmacologic inhibition of VRK2 in the MC38 syngeneic mouse tumor model could enhance T cell responses by targeting the PD-1 pathway and potentially augmenting T cell activation with PD-1 blockade ( Fig. 2 A). To this end we used AZD-7762, an inhibitor of VRK2 and the checkpoint kinases (CHK1 and CHK2) [1] . This agent did not cause additional cytotoxicity in MC38 tumor cells in vitro ( Fig. 2 B). However, AZD-7762 significantly decreased tumor volume compared to vehicle or anti-PD-1 antibody treatment [2] . To exclude the possibility that AZD-7762 acts primarily via a CHK-related mechanism in the tumor cells, tumor growth was determined in response to treatment with two CHK-specific inhibitors, Prexasertib (LY2606368) [3] and PF-477,736 [4] . Indeed, AZD-7762 treatment significantly decreased tumor volume compared to these drugs ( Fig. 2 C). Anti-PD-1 treatment is currently approved for many solid malignancies; however, only a minority of the patients respond. Thus, we assessed the effect of a combined treatment of AZD-7762 with PD-1 blockade. The combined treatment of AZD-7762 with PD-1 blockade enhanced anti-tumor immune responses compared with either treatment alone, as demonstrated by the increased percentage of activated (CD44 + PD-1 + ) T cells in the spleens ( Fig. 2 D).
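The flow cytometry readout just described (percentage of CD44+ PD-1+ cells within a T cell gate) reduces to a simple proportion. A sketch with invented event counts, for illustration only:

```python
def pct_double_positive(n_cd44_pd1: int, n_parent: int) -> float:
    """Percentage of activated (CD44+ PD-1+) events within a parent T cell gate."""
    if n_parent == 0:
        raise ValueError("empty parent gate")
    return 100.0 * n_cd44_pd1 / n_parent

# Invented event counts from a hypothetical splenocyte gate.
print(pct_double_positive(1250, 10000))  # → 12.5
```

The actual gating in the study was performed in flow cytometry software; this only shows how the reported percentage is derived from gated event counts.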
2 Experimental Design, Materials and Methods 2.1 General reagents RPMI 1640 medium, DMEM, Dulbecco's PBS, and FBS were purchased from Life Technologies. Opti-MEMI was purchased from Invitrogen. Ficoll-Paque was purchased from Stem Cell. Puromycin was obtained from Sigma-Aldrich. 2.2 Cell culture, transfection, and stimulation In vitro T cell cultures were maintained in complete RPMI, containing 10% FBS, MEM nonessential amino acids, 1 mM sodium pyruvate, 100 IU/ml of penicillin, 100 µg/ml streptomycin and GlutaMAX-I. Human Jurkat T cells were obtained from the American Type Culture Collection and maintained in RPMI 1640 medium supplemented with 10% FBS and 100 U/ml penicillin and streptomycin. MC38 cells were provided by Kerafast and maintained in DMEM medium supplemented with 10% FBS and 100 U/ml penicillin and streptomycin. HEK 293T cells were obtained from the American Type Culture Collection and maintained in 5% CO₂ at 37 °C in DMEM medium supplemented with 10% FBS and 100 U/ml penicillin and streptomycin. Cells were stimulated with magnetic beads (ratio of 1:5 cells per bead), which were conjugated with the following protein combinations (the ratio in parentheses indicates the relative concentration of each protein): anti-CD3/anti-CD28/IgG1 (1:1:2), or anti-CD3/anti-CD28/PD-L2-Fc (1:1:2). 2.3 Antibodies Anti-CD3 (UCHT1) and recombinant PD-L2-Fc were purchased from Acros. IgG1 (isotype control) was purchased from Jackson ImmunoResearch. Anti-CD28 (CD28.2) was purchased from eBioscience. Anti-mouse antibodies were purchased from BioLegend: CD3-AF488 (clone 17A2), CD8-PercpCy5.5 (clone 53–6.7), CD4-PE (clone GK1.5), CD44-BV421 (clone IM7), PD-1-PECy7 (clone RPM1–30). 2.4 Cytokine secretion IL-2 concentrations in the supernatant were measured by enzyme linked immunosorbent assay (ELISA) from BioLegend. 2.5 Knocking down PD-1 related kinases Kinases were stably knocked down in Jurkat T cells by short hairpin RNA using Mission shRNA plasmids (Sigma-Aldrich).
Lentiviral particles were generated by transfecting HEK 293T cells with pMD2G, psPAX2, and the shRNA plasmid using SuperFect (Qiagen). T cells were transduced by centrifugation and selected with puromycin. The following shRNA sequences were used:
VRK2 (#1): CCGGCTGGAGGATTTGGATTGATATCTCGAGATATCAATCCAAATCCTCCAGTTTTTTG
VRK2 (#2): CCGGGGGAAGAAGTTACAGATTTATCTCGAGATAAATCTGTAACTTCTTCCCTTTTT
BARK1: CCGGGCATCATGCATGGCTACATGTCTCGAGACATGTAGCCATGCATGATGCTTTTTTG
GSK3A: CCGGCCATAGCCCATCAAGCTCCTGCTCGAGCAGGAGCTTGATGGGCTATGGTTTTTTG
HIPK2: CCGGCCCACAGCACACACGTCAAATCTCGAGATTTGACGTGTGTGCTGTGGGTTTTTTG
CDK7: CCGGGCTGTAGAAGTGAGTTTGTAACTCGAGTTACAAACTCACTTCTACAGCTTTTT
RPS6KB: CCGGAGCACAGCAAATCCTCAGACACTCGAGTGTCTGAGGATTTGCTGTGCTTTTTT
CK2A1: CCGGATTACCTGCAGGTGGAATATTCTCGAGAATATTCCACCTGCAGGTAATTTTTTG
CDK3: CCGGTCACCCAGCTGCCTGACTATACTCGAGTATAGTCAGGCAGCTGGGTGATTTTTG
2.6 RT-PCR analysis Total RNA was extracted using the RNeasy Plus Mini Kit (Qiagen). RNA (500 ng) was used for cDNA synthesis using SuperScript II First Strand Synthesis (Invitrogen). Human kinase and HPRT Taqman Primer/Probes were used for all Taqman Gene Expression Assays with the Taqman Universal PCR Master Mix (Applied Biosystems). Quantitative gene expression analyses were performed with an Applied Biosystems 7300 Real-Time PCR system. Gene expression was analyzed by the ΔΔCt method. 2.7 Mice, MC38 tumor inoculation and T cell analysis One million (1 × 10⁶) MC38 cells were implanted subcutaneously in the right hind flank of mice. Tumor growth was monitored using electronic callipers and calculated according to the formula: V = Length × Width² × 0.52. For T cell phenotypic analysis by flow cytometry, spleens were harvested 17 days post-treatment initiation. Splenic cells were stained with anti-mouse antibodies for flow cytometry analysis. 
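As an illustrative sketch (not part of the original methods), the ΔΔCt expression analysis and the caliper-based tumor volume formula above can be written as follows; the function names and example Ct values are hypothetical.

```python
def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Ellipsoid approximation used above: V = Length x Width^2 x 0.52."""
    return length_mm * width_mm ** 2 * 0.52

def fold_change_ddct(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the delta-delta-Ct method (2^-ddCt),
    normalizing the target kinase to a reference gene such as HPRT."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_sample - d_ct_control)

# Hypothetical numbers: a 2-cycle increase in delta-Ct relative to the
# control corresponds to ~25% residual expression of the target gene.
volume = tumor_volume(10.0, 5.0)
knockdown = fold_change_ddct(25.0, 20.0, 23.0, 20.0)
```

Here a 2-cycle shift in ΔCt corresponds to roughly 25% residual expression, which is the kind of readout a knock-down verification by RT-PCR produces.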
To deplete T cells, mice received intraperitoneal injection of 200 µg anti-CD4 (BioXcell BE0003) and 200 µg anti-CD8 (BioXcell BE0061) antibodies in PBS; a second dose of the antibodies was administered two days later. To assess cytotoxicity, MC38 cells were thawed, seeded at a density of 3 × 10⁴ in a flat-bottom 96-well plate and treated overnight at 37 °C and 5% CO₂ with AZD-7762 (MCE HY-10992) at the indicated concentrations. Cell viability was measured with PrestoBlue (Invitrogen). AZD-7762 and PF-477736 were given i.p. at 25 mg/kg and 10 mg/kg, respectively. Prexasertib was given s.c. at 10 mg/kg. All drugs were given for 12 days. 2.8 Statistical analysis GraphPad Prism software was used for statistical analysis. Unpaired Student's t-test was used to compare differences between the means of two groups, and a two-tailed p-value ≤ 0.05 was considered statistically significant, where * p < 0.05. To compare the effects of different treatments on tumor volume, we used repeated-measures two-way ANOVA and Tukey's multiple comparisons test with individual variances computed for each comparison. Ethics Statement All animal experiments comply with the National Institutes of Health guide for the care and use of laboratory animals (NIH Publications No. 8023, revised 1978). IACUC approval # AAAW7464. CRediT Author Statement Michael Peled: Conceptualization, Investigation, Methodology, Writing – original draft; Kieran Adam: Investigation, Methodology; Adam Mor: Supervision, Investigation, Conceptualization, Methodology, Writing – reviewing & editing. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships which have or could be perceived to have influenced the work reported in this article. Acknowledgments We acknowledge Anna Tocheva for technical assistance with the flow cytometry experiments, data analysis, discussion, and figure generation. 
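The unpaired two-group comparison described in Section 2.8 was performed in GraphPad Prism; as a stdlib-only sketch under invented data (the group values below are hypothetical, not the study's measurements), the Student's t decision can be reproduced as:

```python
import statistics as st

def students_t(a, b):
    """Unpaired two-sample Student's t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical CD44+PD-1+ percentages for two treatment groups.
vehicle = [12.1, 10.8, 11.5, 12.9, 11.2]
combo = [18.4, 19.9, 17.6, 20.3, 18.8]

t = students_t(vehicle, combo)
# For df = 8 the two-tailed 5% critical value is about 2.306, so
# |t| > 2.306 corresponds to p < 0.05.
significant = abs(t) > 2.306
```

With these invented values, |t| far exceeds the critical value, i.e. the group difference would be flagged significant at the two-tailed 0.05 level.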
This work was supported by grants from the NIH (AI125640, CA231277, AI150597), and the Cancer Research Institute. Supplementary Materials Supplementary material associated with this article can be found in the online version at doi: 10.1016/j.dib.2021.107168 . | [
"ZABLUDOFF",
"PELED",
"KING",
"BLASINA"
] |
76c5a637c24b40e1949a2875abc8d706_Mitigating suspended-sediment environmental pressure in subsea engineering through colliding turbidi_10.1016_j.rineng.2024.101916.xml | Mitigating suspended-sediment environmental pressure in subsea engineering through colliding turbidity currents | [
"Alhaddad, Said",
"Elerian, Mohamed"
] | Turbidity currents have extensively been explored in quiescent environments. However, during several underwater activities (e.g., dredging and deep sea mining), generated turbidity currents could travel in opposite directions and interact with each other, which could largely influence their hydrodynamics and sediment transport capacity. Therefore, we carried out a set of dual-lock-exchange experiments to study the interaction of colliding turbidity currents. Our experimental results show that the interaction of identical currents results in the reflection of both currents with almost no mixing, forcing them to travel in the opposite direction of the pre-collision one. In contrast, when a turbidity current interacts with a lighter, less-energetic current, clear mixing is observed. Furthermore, it is revealed that the collision of turbidity currents reduces the suspended sediment transported by them, which is favorable from an environmental point of view, and slightly increases the vertical dispersion of particles. In the case of two identical counterflowing currents, a 35% reduction in mass flux, accompanied by a 6% increase in turbidity current thickness, was observed in our experiments. | 1 Introduction Turbidity currents are buoyancy-driven underflows generated by the gravity action on the density difference between a fluid-sediment mixture and the ambient fluid. These currents have extensively been studied experimentally (e.g., [20,18] ) and numerically (e.g., [6,13] ) in the literature in quiescent environments. However, very limited research has been carried out to investigate the interaction of turbidity currents running in opposite directions, despite the fact that this setting is encountered in subsea engineering (e.g., dredging and deep sea mining). Moreover, sedimentary deposits provide evidence of turbidity currents colliding on the ocean floor [19] . 
In dredging, breaching (underwater dilative slope failure) is considered an effective production mechanism, in particular for plain suction dredgers [21,2] . Breaching is typically accompanied by the generation of turbidity currents [22,14] , which were investigated experimentally by Alhaddad et al. [3] and numerically by Alhaddad et al. [5] . To explain the sand mining process with a suction dredger, a real dredging activity, which took place in IJsselmeer (the Netherlands) for land reclamation, is adopted and demonstrated in Fig. 1 . This dredging activity was carried out in 1968 by the Dutch company Amsterdamsche Ballast Maatschappij. The suction pipe was inserted into the sediment bed, forming very steep slopes (breach faces) around the suction mouth. As a result, the breaching process started and turbidity currents were subsequently generated, which act as the carriers of sand from the breach faces to the suction mouth. The sand was sucked into the pipe and delivered to the dredger, while the steep slopes kept traveling backward in a radial direction [7] . In such an event, turbidity currents flow in opposite directions and eventually interact with each other. Moreover, across the abyssal plains of the global ocean, polymetallic nodules are abundantly found at depths ranging from 1 km to 6 km. These nodules are tremendously rich in economically valuable metals (e.g., cobalt, copper, manganese and nickel) [12] . While mining these nodules with a hydraulic collector, sediment is inevitably picked up and collected [1] . With regard to Coandă-effect-based collectors (a category of hydraulic collectors), the sediment gathered is disposed of behind the collector, generating turbidity currents that move across the seabed [8,9] . These currents could extend over large distances, potentially causing significant disturbances to aquatic ecosystems along their path [17] . 
During a mining operation, several collectors will be deployed alongside each other [4] , leading to the interaction of turbidity currents (see Fig. 2 ). Specifically, the turbidity currents moving perpendicular to the motion of the collector (referred to as sideways turbidity currents) will come into contact. A fundamental understanding of such an interaction is critical to improve our prediction of the evolution and fate of anthropogenic turbidity currents, which is central to the environmental impact assessment of underwater activities. The objective of this study is to explore whether the generation of colliding turbidity currents in subsea engineering is environmentally preferred to the generation of turbidity currents running on the seabed without interaction with counterflowing turbidity currents. To this end, we conducted a series of small-scale experiments in a water flume, where a turbidity current traveling leftward and another current traveling rightward can be produced. This article presents and discusses the acquired experimental results and observations. Our experimental measurements provide the first insights into the effect of turbidity currents moving in opposing directions on each other. 2 Dual-lock-exchange experiments The dual-lock-exchange experiment is a modified version of the classical lock-exchange experiment widely used to explore the dynamics of gravity currents. Our modified version allows for the generation of two turbidity currents flowing in opposite directions towards one another. This simple setup allows for reproducibility and easy control of initial conditions, making it well-suited for a systematic study. Additionally, it offers a clear visual representation of dynamics involved in turbidity currents. 2.1 Experimental setup The experiments are carried out in a rectangular glass tank measuring 3 m in length ( L ), 0.2 m in width ( W ), and 0.4 m in depth ( D ) (see Fig. 3 and Fig. 4 a). 
In each experiment, two locks are positioned at a horizontal distance of 0.2 m from both tank ends. To create suspensions, we used glass beads with particle sizes ranging from 0.065 mm to 0.105 mm. To acquire detailed concentration measurements, a high-speed recording of each experiment was required. This was achieved using an ‘IL5HM8512D Fastec’ camera fitted with a Navitar 17 mm lens, as shown in Fig. 4 d. The camera was operated at 80 frames per second. A correlation between sediment concentration and light permeability of the sediment-water mixture was established (see Subsection 2.2 ) and utilized to obtain the concentration for each recorded pixel. To ensure uniform lighting, a background plate equipped with white LED strips was affixed to the rear of the tank. In addition, a paper sheet is placed in front of these LED strips to create evenly-diffused light, as depicted in Fig. 4 c. Furthermore, a black tent was constructed around the experimental setup, so as to create a controlled environment, where external light sources are eliminated. 2.2 Concentration calibration method The same concentration calibration method used in the work of [10] is applied in this study. For convenience, we will briefly describe it here. The same experimental setup, comprising the tank and high-speed camera, serves the purpose of calibration as well. This calibration procedure begins by filling the right mixing chamber with fresh tap water, followed by the addition of a pre-measured mass of sediment, which is then mixed with water. After achieving the desired homogeneity in sediment concentration, a snapshot is taken of the mixing chamber. Subsequently, another pre-measured sediment quantity is introduced into the mixture and recorded. This sediment addition process continues until the resulting pixel values approach approximately 255. 
It is worth noting that a pixel value of 255 represents the camera's upper limit for distinguishing different shades of gray, where pixel values range from 0 for white to 255 for black (see Fig. 5 ). Each snapshot captures a specific concentration level. Table 1 outlines the concentration ranges and corresponding concentration increments. 2.3 Test procedure Table 2 summarizes the initial conditions of the experiments conducted within this study. We conducted three dual-lock-exchange experiments and one classical lock-exchange experiment with a 3%-concentration suspension as a reference case. The percentage 3% was chosen because our measuring technique can be used to retrieve concentration data up to 2.6%. It should be noted that local concentrations drop quickly below 2.6% after removing the lock, due to water entrainment. In the dual-lock-exchange experiments, we kept the sediment concentration in the right mixing chamber constant at 3%, while we varied the sediment concentration in the left mixing chamber (i.e., 1%, 2%, and 3%). Runs 2, 3 and 4 were conducted twice, during one of which dyes with different colors were added to the suspensions behind the locks, resulting in Runs 2 ⁎ , 3 ⁎ and 4 ⁎ . These experiments were conducted to facilitate visual observation of the interaction of turbidity currents during and after collision. In Runs 2 and 3, the sediment concentration in the mixing chambers is different, resulting in varying forward velocities for the turbidity currents after the release of locks. Consequently, the collision location of the two turbidity currents would not be in the middle of the tank. This would limit our ability to study the reflected turbidity currents due to their shorter propagation distance within the constrained tank length. Therefore, we delayed the removal of the right lock, since the suspension behind it has the higher concentration. Runs 2 and 3 were repeated until the currents collided almost in the middle of the tank. 
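As an illustrative sketch of the calibration in Subsection 2.2 (the pixel–concentration pairs below are invented; the study's actual calibration is given by its snapshots and Table 1), the mapping from an 8-bit grayscale value to a concentration can be implemented by linear interpolation between calibration points:

```python
from bisect import bisect_left

# Hypothetical calibration pairs (pixel value, concentration in %),
# with 0 = white (clear water) and 255 = black, and an upper
# retrievable concentration of 2.6% as stated in the text.
CALIBRATION = [(0, 0.0), (60, 0.5), (120, 1.0), (190, 1.8), (255, 2.6)]

def pixel_to_concentration(pixel: int) -> float:
    """Linearly interpolate concentration from an 8-bit pixel value."""
    xs = [p for p, _ in CALIBRATION]
    ys = [c for _, c in CALIBRATION]
    if pixel <= xs[0]:
        return ys[0]
    if pixel >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, pixel)
    frac = (pixel - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + frac * (ys[i] - ys[i - 1])
```

Applied per pixel of a recorded frame, such a lookup yields the concentration maps used in the analysis.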
Every test was conducted following this sequence of steps:
• Tank filling: Clear water is added to the experimental tank up to a height of 30 cm.
• Lock placement: The two locks are placed within the tank. In the case of a regular lock-exchange experiment, just one lock is used.
• Sediment preparation: Based on the target concentration, sediment is weighed using a digital scale and added to the mixing chambers within the tank. Following that, water is added until reaching the target water depth ( h s = 36 cm) everywhere in the tank.
• Mixing: Propeller-type agitators are inserted into the mixing chambers, and they are operated until a homogeneous sediment-water mixture of the target concentration is achieved. This mixing process takes about 2 minutes. For flow-visualization experiments, food coloring was added to the suspensions to distinguish the denser fluids from the ambient water.
• Measurement initiation: The measurement using camera recordings is initiated to obtain the concentration distribution of turbidity currents.
• Right lock removal: The right mixer is turned off, and the right lock is released.
• Left lock removal: After a predetermined delay period, the left mixer is turned off and the left lock is released. In Run 4 and Run 4 ⁎ , the left and right locks are released simultaneously.
• Measurement termination: Measurements are stopped when one of the turbidity currents reaches the end of the tank.
3 Experimental results 3.1 General description of the results Upon the removal of the two locks, turbidity currents are generated and start flowing in the direction of the ambient water. In other words, the left suspension results in a turbidity current running rightward, while the right suspension results in a turbidity current running leftward. Given that the two currents travel in opposite directions, they meet and collide almost in the middle of the tank. 
As a consequence, the sediment particles are further dispersed in the vertical direction, almost reaching the water surface in Run 4. Snapshots of the flow-visualization experiments, where food coloring was added, are depicted in Fig. 6 to illustrate the nature of the interaction and the potential mixing. In the top panels of Fig. 6 (time = T c ), the turbidity currents are seen just before the collision. In the next row of snapshots (time = T c + 2), horizontal motion was hardly observed in the three experiments. In the case of two identical currents (Run 4 ⁎ ), no clear mixing between the two currents was observed. Instead, the two currents reflected back towards their starting point. Conversely, in Run 2 ⁎ and Run 3 ⁎ , mixing was manifestly observed; the denser current penetrated the lighter current at time = T c + 4. The bottom panel of Fig. 6 (time = T c + 10) shows that sediment particles from the lighter current were entrained in the denser current. Besides, in Run 2 ⁎ and Run 3 ⁎ , a portion of the denser current reflected back as a result of collision. In the following, we will analyze the turbidity currents in more detail by looking into the density fields ( Fig. 7 ) and the amount of suspended sediment transported by the current traveling leftward after collision. Besides, we will explore the change in the thickness of turbidity currents. In this way, we investigate the influence of the interaction of currents on the current produced by the 3%-concentration suspension. 3.2 Sediment mass flux The sediment mass flux per unit width is estimated here to investigate the effect of opposing turbidity currents on the sediment transport by turbidity currents. This estimate is assessed on a vertical interrogation plane, which is located at a distance of 90 cm from the left tank end and denoted by the dotted vertical line in Fig. 7 . 
Assuming that the velocity of the turbidity current is uniform across its height, the sediment mass flux per unit width can be calculated as
(1) $\dot{m} = U_f \int_0^{h_s} c \, \mathrm{d}z$,
where $\dot{m}$ [kg m⁻¹ s⁻¹] is the sediment mass flux, $U_f$ [m/s] is the average front speed of the turbidity current, $h_s$ [m] is the water surface height, $c$ [kg/m³] is the local suspended sediment concentration and $z$ [m] is the upward-normal coordinate. In our calculations, the front speed $U_f$ was averaged over the time frame of analysis (7 seconds, corresponding to the density fields shown in Fig. 7 ). This period was chosen because it was not long enough for the currents traveling leftward to reach the left flume end and thus reflect. The temporal change of the sediment mass flux passing the interrogation plane over the selected 7 seconds is shown in Fig. 8 . It is worth noting that the difference in the sediment mass flux between the cases is completely attributed to the difference in sediment concentration ($U_f$ = 10 cm/s was found to be similar in all cases). Fig. 8 depicts that the sediment mass flux peaked shortly after the head of the current had passed the interrogation plane. This peak was the largest when there was no turbidity current incident on the current traveling leftward. In contrast, the peak was the smallest in the case of two identical currents. This is attributed to the fact that no clear mixing was observed between the two currents, as they are equal in density and velocity; they collide in the middle of the tank and force each other to reflect back towards their initial departure point (see Fig. 6 , right). The second largest peak was for Run 3, where the denser current collided and mixed with the lighter current. Collision of currents results in vertical dispersion of particles, while mixing results in particle entrainment from the lighter current into the denser current. 
The lighter current in Run 3 has a higher concentration than the lighter current in Run 2, explaining why the peak was larger in Run 3. In other words, more particles intruded into the denser current in Run 3. Fig. 9 clearly shows that the average sediment mass flux decreases as the density of the opposing current increases. Compared with the case with no incident current (Run 1), the average sediment mass flux was suppressed by nearly 35% in Run 4. The difference between the average sediment mass flux for Run 2 and Run 3 is small ( Fig. 9 ), although the lighter current was completely entrained into the denser current in both experimental runs, as shown in Fig. 6 . This is attributed to the fact that a smaller portion of the particles originally belonging to the denser current reflected back in Run 2, in comparison with Run 3. This can clearly be seen in the concentration maps presented in the left and middle panels of Fig. 7 . 3.3 Thickness of turbidity current The thickness of the turbidity current at the interrogation plane can be estimated as
(2) $H = \dfrac{\int_0^{h_s} c \, z \, \mathrm{d}z}{\int_0^{h_s} c \, \mathrm{d}z}$.
The average turbidity current thickness over the selected duration (7 seconds) for all cases is presented in Fig. 10 . Although the differences in thickness are not large, a clear trend can be observed, which is opposite to the trend of the average sediment mass flux. For instance, compared with the case with no incident current (Run 1), the average thickness was increased by 6% in Run 4. This suggests that the interaction of opposing currents leads to more vertical dispersion of particles. 4 Discussion The presence of suspended sediment reduces light penetration into water, consequently decreasing the amount of light available to seabed photosynthesizers. Besides, the suspended sediment could eventually settle out on sensitive marine plants and creatures (e.g., fish, coral reefs and seagrass beds), possibly smothering them. 
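Equations (1) and (2) can be evaluated numerically from a measured concentration profile. The sketch below uses an invented exponential profile and plain trapezoidal integration purely for illustration; the profile shape, step size and values are assumptions, not the study's data.

```python
import math

def trapz(y, x):
    """Trapezoidal integration of samples y over coordinates x."""
    return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2.0
               for i in range(len(x) - 1))

h_s = 0.36                                      # m, water surface height
U_f = 0.10                                      # m/s, average front speed
z = [i * h_s / 100 for i in range(101)]         # m, upward-normal coordinate
c = [30.0 * math.exp(-zi / 0.05) for zi in z]   # kg/m^3, illustrative profile

m_dot = U_f * trapz(c, z)                       # Eq. (1): kg/(m s)
H = trapz([ci * zi for ci, zi in zip(c, z)], z) / trapz(c, z)  # Eq. (2): m
```

Note that Eq. (2) is the concentration-weighted center of mass of the profile, which is why a more vertically dispersed current yields a larger thickness H.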
Turbidity currents represent an important agent of sediment transport in submarine environments [15] and the generation of these currents during underwater engineering activities can be inevitable. The findings of this study suggest that the generation of colliding turbidity currents in subsea engineering is environmentally preferred to the generation of turbidity currents propagating on the seabed without interaction with counterflowing turbidity currents. This is primarily because of the reduction of the sediment mass flux and thus the associated environmental stress [8] . By reducing sediment mass flux, water turbidity decreases, sediment deposition on aquatic ecosystems decreases, and the intensity of sediment resuspension events can be decreased. This implies that having several Coandă-effect-based collectors mining next to each other is a potential approach to reduce the corresponding environmental impact. Other underwater activities (e.g., dredging and underwater slope construction) may also be designed in a way that results in the collision of turbidity currents. In our study, we used a two-dimensional lock-exchange configuration, where turbidity currents cannot spread in the lateral direction. Therefore, conducting field measurements would facilitate a better comparison between the mechanisms observed in the laboratory and those occurring in the field. In the future, we plan to extend this study by looking into a wider range of sediment concentrations and by using a longer water flume where more spatio-temporal data can be acquired. The collision of turbidity currents enhances mixing and thus the likelihood of particles colliding with each other. In the case of cohesive sediment, mixing triggers a flocculation effect, leading to the formation of flocs that settle out faster than individual particles [11] . Consequently, the buoyancy-driven forces of the current will decrease, which will further dampen it [9] . 
In this context, we also plan to test cohesive sediment to investigate the impact of colliding currents on the probability of floc formation. 5 Conclusions To explore mainly the environmental implication of producing colliding turbidity currents, we carried out a series of dual-lock-exchange experiments. The experimental results showed that turbidity currents traveling in opposite directions with identical dynamics hardly mix; their collision results in further dispersion of their sediment particles in the vertical direction at the collision phase, followed by a reflection of the two currents. Conversely, when the density of the two opposing currents is different, mixing clearly occurs and the lighter current intrudes into the denser current. Depending on the density of the lighter current, a portion of the sediment of the denser current may reflect back as a result of the interaction. Our study reveals that the collision of turbidity currents reduces the amount of suspended sediment transported by them, which is favorable from an environmental point of view, while it slightly increases the turbidity current thickness. Specifically, the mass flux was reduced by 35%, while the turbidity current thickness increased by 6% in the case of two identical counterflowing currents. Notation: c – local suspended sediment concentration [kg/m³]; H – thickness of the turbidity current [m]; h s – water surface height [m]; ṁ – sediment mass flux [kg m⁻¹ s⁻¹]; T – time [s]; T c – collision time [s]; U f – average front speed of the turbidity current [m/s]; z – upward-normal coordinate [m]. CRediT authorship contribution statement Said Alhaddad: Writing – review & editing, Writing – original draft, Visualization, Supervision, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Mohamed Elerian: Writing – original draft, Visualization, Software, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. 
Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgement The authors would like to thank the group of Mechanical Engineering bachelor students at Delft University of Technology that conducted the laboratory experiments used in this study. | [
"ALHADDAD",
"ALHADDAD",
"ALHADDAD",
"ALHADDAD",
"ALHADDAD",
"BIEGERT",
"DEKONING",
"ELERIAN",
"ELERIAN",
"ELERIAN",
"GILLARD",
"HEIN",
"LUCCHESE",
"MASTBERGEN",
"MEIBURG",
"MUNOZROYO",
"PEACOCK",
"SEQUEIROS",
"SIMPSON",
"STAGNARO",
"VANDENBERG",
"VANRHEE",
"VLASBLOM"
] |
4de9ca48786a43dcbd6d998837d3afd5_Manejo exitoso con stent en un prematuro con síndrome de vena cava superior Reporte de caso_10.1016_S0120-5633(12)70133-X.xml | Successful stent management in a premature infant with superior vena cava syndrome: a case report | [
"Gómez, Jhon J.",
"Vallejo, Ernesto",
"Palma, María A.",
"Rojas, Juan P."
] | Superior vena cava syndrome in infancy is an unusual complication of the use of central venous catheters in the neonatal intensive care unit. Other causes of this syndrome in children are surgery for congenital heart disease, which accounts for most of the cases, and lymphomas, which constitute the second most common cause of obstruction. We describe the case of a premature infant born at 25 weeks with superior vena cava syndrome secondary to a central venous catheter used for the management of sepsis. The importance of early diagnosis and treatment is highlighted. We also report the successful management of superior vena cava syndrome with stenting. | null | [
"GRAHAM",
"EDSTROM",
"LUMB",
"MEHTA",
"SCHIFF",
"TANKE",
"EKELUND",
"MONAGLE",
"PETERS",
"ALKALAY",
"MOSS",
"GRISONI",
"WIGGER",
"HAVILL",
"PLIAM",
"VANOVERMEIRE",
"WESLEY",
"DESCHEPPER",
"HADDAD",
"JOHNSON",
"MAHONY",
"PICARELLI",
"PLIAM",
"GRAY",
"LORENZO",
"BECK"... |
7bc9be1680f840ea88cd6c9f5d9c54eb_14 000 years of geochemical and isotopic data from Lake Simcoe Canada_10.1016_j.dib.2022.108541.xml | ∼14 000 years of geochemical and isotopic data from Lake Simcoe, Canada | [
"Doyle, R.M.",
"Bumstead, N.",
"Lewis, C.F.M.",
"Longstaffe, F.J."
] | This dataset contains measurements of modern water and ancient core materials from Lake Simcoe, the fourth largest lake wholly in Ontario, Canada. These data consist of: (i) oxygen, hydrogen and carbon isotope (δ18O, δ2H and δ13C) compositions for modern water samples; (ii) physical measurements of one piston core, PC-5; (iii) δ13C and δ18O values of ostracods collected from PC-5; and (iv) δ13C and δ18O values of ancient DIC and water, respectively, inferred from item (iii). Physical measurements performed on core PC-5 include magnetic susceptibility, mineralogy and grain size. Mass accumulation rates are also reported. These data will be of interest to those aiming to better characterize the timing and pathway of meltwater flow during and following deglaciation of the Laurentide Ice Sheet in the Laurentian Great Lakes region. These data will also be useful to researchers investigating the influence of deglaciation on the oxygen and carbon isotope systematics of ancient lake environments. A discussion of these data is available in “A ∼14 000-year record of environmental change from Lake Simcoe, Canada” [1]. | Specifications Table Subject Geochemistry and Petrology, Environmental Science Specific subject area Isotopic and geochemical data from lake sediments, including ostracods. Type of data .xlsx and .docx files How the data were acquired Modern water samples from Lake Simcoe were analyzed using (i) a PicarroⓇ L2120-i δ2H and δ18O Analyser and (ii) a Thermo Scientific™ GasBenchⓇ II connected to a Thermo Scientific™ Delta plus XL™ continuous flow isotope ratio mass spectrometer (IRMS) and a heater block equipped with a CombiPalⓇ autosampler. Ostracods collected from sediment core PC-5 were analyzed using (i) a Micromass MultiPrepⓇ device coupled to a VG OptimaⓇ dual-inlet isotope ratio mass spectrometer (IRMS) or (ii) a Thermo Scientific™ GasBenchⓇ II interfaced with a Thermo Scientific™ Delta plus XL™ continuous flow IRMS. Sediment core PC-5 was analyzed using the following instruments: (i) a Malvern MastersizerⓇ 2000 laser grain-size analyzer, (ii) a GEOTEKⓇ multi-sensor core logger (MSCL) and (iii) a Rigaku High Brilliance Rotating Anode X-ray Diffractometer. Data format Raw. Description of data collection All samples were collected from Lake Simcoe. The water samples were collected at various depths in the lake from May 2009 to November 2011. 
All other data originate from sediment core PC-5 collected from the deepest, flattest part of the lake in June 2007. Data source location Lake Simcoe, Ontario, Canada, 44.4873, −79.4169 Data accessibility [2] R.M. Doyle, N. Bumstead, C.F.M. Lewis, F.J. Longstaffe. Geochemical and isotopic analyses of a ∼14 000 year old sediment core collected from Lake Simcoe, Canada. Zenodo (2022) https://doi.org/10.5281/zenodo.6959403 . Related research article [1] R.M. Doyle, N. Bumstead, C.F.M. Lewis, F.J. Longstaffe, A ∼14 000-year record of environmental change from Lake Simcoe, Canada, Quat. Sci. Rev. 292 (2022) 107667, doi: https://doi.org/10.1016/j.quascirev.2022.107667 . Value of the Data • These data may be compared with other proxy archives in the Laurentian Great Lakes region of North America to better characterize the timing and pathway of meltwater flow following deglaciation of the Laurentide Ice Sheet. • These data are useful to researchers investigating the influence of deglaciation on the oxygen and carbon isotope systematics of ancient lake environments. • These data could also be used as a baseline for relative temperature change in southern Ontario. • These data are beneficial to researchers interested in contextualizing the recent eutrophication of Lake Simcoe against a backdrop of natural variation. 1 Data Description These data include the stable isotope ratios (δ18O, δ2H and δ13C) of modern water from Lake Simcoe, as well as the stable isotope ratios (δ18O and δ13C) of ostracods in core sediments from Lake Simcoe. Physical measurements of sediment core PC-5 (e.g., grain size, magnetic susceptibility, mineralogy) are also provided. For plots of these data, refer to Doyle and colleagues [1] . 
This dataset is contained in one Microsoft Excel file: LakeSimcoeData.xlsx – one data table containing all analytical measurements, including: (i) δ 18 O, δ 13 C and δ 2 H of modern waters; (ii) ostracod assemblages from core PC-5; (iii) δ 18 O and δ 13 C of ostracod valves in core PC-5; (iv) mineralogy of core PC-5; (v) magnetic susceptibility of core PC-5; (vi) grain size analysis of core PC-5. The δ 18 O and δ 13 C of ancient water and DIC, respectively, are inferred from isotopic analyses of ostracod valves and are also reported in this datasheet. 2 Experimental Design, Materials and Methods Analysis of δ 2 H and δ 18 O of modern waters: Unfiltered water samples were removed from the refrigerator and left to sit until they warmed to room temperature. One mL of water was pipetted into a 2 mL glass vial for analysis. Analysis of δ 2 H and δ 18 O of modern waters using a PicarroⓇ L2120-i δ 2 H and δ 18 O Analyser – Calibration of δ 2 H and δ 18 O to VSMOW was achieved using LSIS standards Heaven (accepted δ 2 H: +88.7‰; accepted δ 18 O: ‒0.27‰) and LSD (accepted δ 2 H: ‒161.8‰; accepted δ 18 O: ‒22.57‰). Analytical accuracy and precision were evaluated using LSIS standards MID (accepted δ 2 H: ‒108.1‰; accepted δ 18 O: ‒13.08‰) and EDT (accepted δ 2 H: ‒56‰; accepted δ 18 O: ‒7.27‰). The δ 2 H results for MID and EDT were ‒108.00 ± 0.67‰ ( n = 35) and ‒54.68 ± 1.17‰ ( n = 111), respectively, which compare well with accepted values and expected reproducibility. The δ 18 O results for MID and EDT were ‒13.01 ± 0.14‰ ( n = 35) and ‒7.22 ± 0.18‰ ( n = 111), respectively, which compare well with accepted values and expected reproducibility. The accepted values for all LSIS standards were determined previously by direct calibration to the VSMOW–SLAP scale. Analysis of δ 13 C DIC of modern waters: For samples, five drops of 100% concentrated orthophosphoric acid were added to the bottom of the glass vials, which were then septum-sealed and flushed with He for 5 min.
One (1) mL of sample was then injected into the flushed vial using a 1 mL syringe. The vials were reacted in the GasBenchⓇ heater block at 35 °C overnight prior to isotopic measurements. The produced gas was then transported automatically to the IRMS using an autosampler. For each standard, 0.25 mg was weighed into the bottom of a glass vial, and the vial was placed in a horizontal position. Concentrated orthophosphoric acid (100%) was then added to the top of the vial such that the acid was separated from the standard powder. A septum cap was then attached to the vial and tightened. Next, the vial was flushed with He for 5 min at room temperature. The vial was then turned upright, thus allowing the acid to react with the standard powder. The vial was then immediately placed in the GasBenchⓇ heater block to react overnight at 35 °C. The evolved gas was then automatically transferred to the IRMS using an autosampler. Analysis of δ 13 C DIC of modern waters was conducted using a Thermo Scientific™ GasBenchⓇ II coupled to a Thermo Scientific™ Delta plus XL™ IRMS. Calibration of δ 13 C to VPDB was achieved using NBS-18, NBS-19 and Suprapur. Accuracy and precision of analyses were evaluated using the δ 13 C of WS-1. The δ 13 C results for WS-1 were +0.80 ± 0.11‰ ( n = 5), which compare well with accepted values and expected reproducibility. Analysis of δ 13 C and δ 18 O of ostracods: To collect ostracods, each interval of sediment was first wet-sieved. The sieved material and ostracods were separated under a binocular microscope using statically charged camel hairs. Ostracod valves were then identified and cleaned with bleach to remove organic material adhered to the valves. The δ 13 C and δ 18 O of most ostracods were measured using a Micromass MultiPrepⓇ coupled to a dual-inlet VG OptimaⓇ Isotope Ratio Mass Spectrometer (IRMS). Calibration of δ 13 C to VPDB was achieved using NBS-19 (accepted value: +1.95‰) and Suprapur (accepted value: ‒35.28‰).
Accuracy and precision of analyses were evaluated using the δ 13 C of NBS-18 (accepted value: ‒5.0‰) and LSIS standard WS-1 (accepted value: +0.76‰). The δ 13 C results for NBS-18 and WS-1 were –5.01 ± 0.07‰ ( n = 41) and +0.72 ± 0.08‰ ( n = 14), respectively, which compare well with accepted values and expected reproducibility. Calibration of δ 18 O to VSMOW was achieved using NBS-19 (accepted value: +28.65‰) and NBS-18 (accepted value: +7.20‰). Analytical accuracy and precision were evaluated using Suprapur (accepted value: +13.25‰) and WS-1 (accepted value: +26.23‰). The δ 18 O results for Suprapur and WS-1 were +13.28 ± 0.17‰ ( n = 35) and +26.20 ± 0.11‰ ( n = 14), respectively, which compare well with accepted VSMOW–SLAP-calibrated values and expected reproducibility. The δ 13 C and δ 18 O of all remaining ostracods were measured using a Thermo Scientific™ GasBenchⓇ II coupled to a Thermo Scientific™ Delta plus XL™ IRMS. Calibration of δ 13 C to VPDB was achieved using NBS-19 and Suprapur. Accuracy and precision of analyses were evaluated using the δ 13 C of NBS-18 and WS-1. The δ 13 C results for NBS-18 and WS-1 were –4.99 ± 0.12‰ ( n = 8) and +0.91 ± 0.28‰ ( n = 6), respectively, which compare well with accepted values and expected reproducibility. Calibration of δ 18 O to VSMOW was achieved using NBS-19 and NBS-18. Analytical accuracy and precision were evaluated using Suprapur and WS-1. The δ 18 O results for Suprapur and WS-1 were +13.30 ± 0.34‰ ( n = 8) and +26.03 ± 0.71‰ ( n = 6), respectively, which compare well with accepted values and expected reproducibility. Methods for converting the δ 13 C and δ 18 O to estimates of δ 13 C DIC and δ 18 O lake water are provided in the main research article [1] .
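The two-point calibrations described above (e.g., NBS-19 and NBS-18 anchoring δ 18 O to VSMOW, with Suprapur and WS-1 as independent checks) amount to a linear mapping from measured onto accepted values. A minimal sketch of that mapping, assuming hypothetical raw instrument means for the anchor standards (the function name and raw values are ours, not the laboratory's):

```python
# Two-point linear normalization of raw delta values onto an accepted
# scale, as in the VPDB and VSMOW calibrations described in the text.
# Accepted values are those quoted above; the "measured" anchor means
# are hypothetical placeholders.

def two_point_calibration(measured, anchor1, anchor2):
    """Map a measured delta value onto the accepted scale.

    anchor1, anchor2: (measured_mean, accepted_value) tuples for the
    two calibration standards.
    """
    (m1, a1), (m2, a2) = anchor1, anchor2
    slope = (a2 - a1) / (m2 - m1)
    return a1 + slope * (measured - m1)

# Hypothetical raw means paired with accepted delta-13C values
# (per mil, VPDB) for NBS-19 (+1.95) and Suprapur (-35.28):
nbs19 = (2.10, 1.95)
suprapur = (-34.90, -35.28)

# An unknown measured at -4.95 per mil raw, placed on the VPDB scale:
sample_vpdb = two_point_calibration(-4.95, nbs19, suprapur)
```

By construction the mapping reproduces the accepted value of each anchor standard; independent standards such as NBS-18 or WS-1 then serve as the accuracy and precision checks reported above.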
Grain-size analysis of PC-5: Grain-size analysis of samples from each core was performed using a Malvern MastersizerⓇ 2000 laser grain-size analyzer hosted in the Control and Crystallization of Pharmaceuticals Laboratory (CCPL) at The University of Western Ontario. In preparation for grain-size analysis, samples from PC-5 were disaggregated and treated with 15 mL of 0.3% bleach at 65 °C for at least 24 h to remove organic matter. The bleach was then removed by repeatedly rinsing each sample with distilled water. Finally, 10 mL of sodium hexametaphosphate solution, a dispersing agent, was added to each sample, and the samples were analyzed using the MastersizerⓇ. Analysis of the magnetic susceptibility (MS) of PC-5: Prior to the analysis of MS, sediment core PC-5 was rinsed with distilled water and gently scraped horizontally using a spatula. MS was assessed using a GEOTEKⓇ multi-sensor core logger (MSCL) in the Lake and Reservoir Systems Research Facility (LARS) at The University of Western Ontario. Analysis of sediment mineralogy of PC-5: Samples from PC-5 were freeze-dried and homogenized using a rubber mortar and pestle. Each sample was then mounted onto an Al-backpack holder or a glass front-pack holder and analyzed using a Rigaku high-brilliance rotating-anode X-ray diffractometer equipped with a graphite monochromator and CoKα radiation produced at 45 kV and 160 mA. Samples were scanned from 2° to 82° 2θ at a scanning rate of 10° 2θ/min. The abundance of each mineral was estimated using the background-subtracted peak height of its most intense diffraction. Crystallinity differences were accounted for using a form factor of ×1, except for the (001) diffractions of kaolinite (×2), chlorite (×2) and illite (×4). Ethics Statement This work did not involve human subjects, animal experiments, or data collected from social media platforms. The manuscript adheres to Elsevier's ethics in publishing standards. CRediT authorship contribution statement R.M.
Doyle: Conceptualization, Writing – original draft, Writing – review & editing, Methodology, Formal analysis, Visualization. N. Bumstead: Conceptualization, Investigation, Methodology, Writing – review & editing. C.F.M. Lewis: Writing – review & editing. F.J. Longstaffe: Supervision, Conceptualization, Writing – review & editing, Resources, Project administration, Funding acquisition. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments We thank Chip Heil Jr, Brad Hubeny (and others at the University of Rhode Island ) and Reba Macdonald who were instrumental in the collection of the Lake Simcoe cores. We thank the Natural Sciences and Engineering Council of Canada (FJL) , the Canada Foundation for Innovation (FJL) , the Canada Research Chairs Program (FJL) and The Lake Simcoe Region Conservation Authority (NB) for financial support of this research, and the staff of The Laboratory for Stable Isotope Science (LSIS) at the University of Western Ontario for their dedicated assistance with the analytical work. Thank you to the Lake and Reservoir Systems Research Facility (LARS) , especially Dr. Katrina Moser and Erika Hill, for their help analyzing the magnetic susceptibility of PC-5. This is LSIS contribution # 396 (RMD, NB, FJL) and Contribution # 20210691 of the Lands and Minerals Sector of Natural Resources Canada (CFML) . Supplementary Materials Supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.dib.2022.108541 . Appendix Supplementary materials Image, application 1 | [
"DOYLE"
] |
3586abed4a0e4a5199cc4a27b7ebb8e1_Integrated effect of treadmill training combined with dynamic ankle foot orthosis on balance in chil_10.1016_j.ejmhg.2014.11.002.xml | Integrated effect of treadmill training combined with dynamic ankle foot orthosis on balance in children with hemiplegic cerebral palsy | [
"Sherief, Abd El Aziz Ali",
"Abo Gazya, Amr A.",
"Abd El Gafaar, Mohamed A."
] | Background and purpose
Maintaining balance is a prerequisite for most human actions. Most children with cerebral palsy, who constitute a large population in our country, continue to show deficits in balance, coordination, and gait throughout childhood. The purpose of this study was therefore to determine the combined effect of treadmill training and a dynamic ankle foot orthosis on balance in children with spastic hemiplegia.
Subjects and methods
Thirty children with spastic hemiplegia of both sexes, aged 7 to 11 years, constituted the study sample. The degree of spasticity ranged from 1 to 1+ on the Modified Ashworth Scale. The children were randomly assigned to two equal groups (A and B). Each child was evaluated before and after 3 months of treatment: lower-limb performance was assessed with the Peabody Developmental Motor Scale, and stability indices with the Biodex Stability System.
Both groups received a designed physical therapy program for hemiplegic cerebral palsy for 60 min; in addition, group B received treadmill training with a dynamic ankle foot orthosis for 30 min.
Results
Significant improvements were observed in all measured variables when comparing pre- and post-treatment values within each group. Comparison of the post-treatment values revealed a significant difference in favor of group B.
Conclusion
The obtained results strongly support the combined use of a dynamic AFO with treadmill training as an additional procedure in the treatment program of children with hemiplegic cerebral palsy. | 1 Introduction Cerebral palsy (CP) is a group of permanent disorders of the development of movement and posture, attributed to nonprogressive disturbances that occurred in the developing fetal or infant brain [1] . The motor disorders of cerebral palsy are often accompanied by disturbances of sensation, perception, cognition, communication, and behavior. Secondary musculoskeletal impairments, pain, and physical fatigue are thought to contribute to changes in motor function in children with CP [2] . Spastic hemiplegia accounts for more than a third of all cases of CP, and the resulting impairments of the extremities affect functional independence and quality of life [3] . The most common pattern of spasticity during standing includes flexion of the head toward the hemiplegic side and rotation so that the face is turned toward the unaffected side, with the upper limb in a flexion pattern, the scapula retracted, and the shoulder girdle depressed [4] . The diagnosis of CP is based on a clinical assessment, typically drawing on observations or parent reports; parents' complaints that their children had delayed motor milestones, such as sitting, standing, and walking, play an important role in the assessment of these cases. Posture, deep tendon reflexes, and muscle tone are also evaluated, particularly among infants born prematurely [5] . The treatment approaches used in the management of cerebral palsy include neurodevelopmental treatment, sensory integration, electrical stimulation, constraint-induced therapy, and orthoses [6] . Balance control is important for the performance of most functional skills and for helping children recover from unexpected balance disturbances due to self-induced instability [7] .
Difficulties in determining individual causes of balance impairment and disability are related to decreased muscle strength, range of movement, motor coordination, sensory organization, cognition, multisensory integration, and abnormal muscle tone [8] . Treadmill training has been used for children with cerebral palsy to help them improve balance and build lower-limb strength, so that they could walk earlier and more efficiently than children who did not receive treadmill training [9] . The treadmill stimulates repetitive and rhythmic stepping while the patient is supported in an upright position and bearing weight on the lower limbs [10] . A positive correlation exists between balance impairments and decreased lower-limb strength; in addition, poor trunk control negatively influences overall balance [11] . Splinting is commonly used by both physical and occupational therapists to prevent joint deformities and to reduce muscle hypertonia of hemiplegic upper limbs after stroke [12] . Orthoses are commonly used to improve and correct the position, range, quality of movement, and function of a person's arm or hand [13] . It is proposed that the inhibition resulting from splint application is due to altered sensory input from cutaneous and muscle receptors during the period of splint or cast application; immobilization applies gentle, continuous stretching of the spastic muscle at the submaximal passive range of motion [14] . Ankle–foot orthoses (AFOs) are frequently prescribed to correct skeletal misalignments in spastic CP and to provide a stable base of support, which helps improve the efficiency of gait training [15] . A dynamic AFO is an articulated orthosis used to facilitate body motion and allow optimal function [16] . It provides subtalar stabilization while allowing free ankle dorsiflexion and free or restricted plantar flexion.
So, a dynamic ankle foot orthosis may be effective for gaining balance and proper body alignment. The present study aims to evaluate the effect of the dynamic ankle–foot orthosis on the standing balance of the child with spastic hemiplegia. 2 Subjects, randomization and methods 2.1 Subjects Thirty children with hemiplegic CP, of both sexes, participated in this study. They were selected from the pediatric outpatient clinic of the Faculty of Physical Therapy, Cairo University. Their ages ranged from 7 to 11 years. They were divided randomly into two groups, A and B. Group A included 15 children (10 boys and 5 girls) with a mean age of 9.801 ± 0.77 years; they received a designed physical therapy program for the treatment of hemiplegic cerebral palsy for 1 h. Group B included 15 children (10 boys and 5 girls) with a mean age of 9.401 ± 0.69 years; they received the same therapeutic exercise program for 1 h as group A, in addition to exercising on a treadmill with the ankle–foot orthosis for about 30 min. The subjects were selected according to the following criteria: (1) spasticity grades ranged from 1 to 1+ according to the Modified Ashworth Scale [17] ; (2) they were able to follow the simple verbal commands included in the tests; (3) all subjects were free of fixed deformity of both lower limbs; (4) all subjects were able to stand with support. Exclusion criteria were: (1) muscle shortening or contracture; (2) cardiovascular diseases; (3) surgery within the previous 24 months; (4) sensory defensiveness; and (5) inability to follow instructions. All procedures involved in evaluation and treatment, the purpose of the study, and the potential risks and benefits were explained to all children and their parents. The work was carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans.
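The random allocation of thirty children into two equal groups can be sketched as follows; this is our illustration of a shuffled-envelope draw, not the authors' procedure, and the seed is arbitrary:

```python
# Sketch (illustrative only) of randomly allocating 30 children into
# two equal groups by an envelope draw: 15 cards labeled "A" and 15
# labeled "B" are shuffled, and each child draws one.
import random

def allocate(n_per_group=15, seed=None):
    envelopes = ["A"] * n_per_group + ["B"] * n_per_group
    random.Random(seed).shuffle(envelopes)
    return envelopes  # envelopes[i] is the group drawn by child i

groups = allocate(seed=42)
```

Pre-filling equal numbers of "A" and "B" cards guarantees the two groups end up the same size, which simple coin-flip randomization would not.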
Parents of the children signed a consent form prior to participation, and approval of the Ethics Committee of the University was obtained. 2.2 Randomization Randomization was performed using closed envelopes. The investigator prepared 30 closed envelopes, each containing a card labeled with either group A or B. Each child was then asked to draw a closed envelope assigning him or her to one of the two groups. 2.3 Methods 2.3.1 For evaluation Stability indices and gross motor function were evaluated before and after three successive months of treatment using the Biodex Stability System and the Peabody Developmental Motor Scale II (PDMS-2). A familiarization session occurred prior to the test session. This session was particularly necessary for the children to ensure their comfort with the research team and protocol. During this session, participants practiced on the Biodex Stability System and the PDMS-2. 2.3.2 Balance evaluation The Biodex Stability System was used for evaluation, with the dynamic balance test performed at stability level 8. First, certain parameters were entered into the device, including the child's weight, height, age, and stability level (platform firmness). Each child in the two groups was asked to stand on the center of the locked platform within the device in a two-leg stance while grasping the handrails. The display screen was adjusted so that he could look straight at it. Each child was then asked to achieve a centered position on a slightly unstable platform by shifting his feet until it was easy to keep the cursor (representing the center of the platform) centered on the screen grid while standing in a comfortable upright position. Once the child was centered and the cursor was in the center of the display target, he was asked to maintain his feet position until the platform was stabilized.
Heel coordinates and foot angles on the platform were recorded as follows: heel coordinates were measured from the center of the back of the heel, and foot angle was determined by finding a line on the platform parallel to the center line of the foot. The test began after the foot angles and heel coordinates were entered into the Biodex System. The platform advanced to an unstable state; the child was then instructed to focus on the visual-feedback screen directly in front of him, with both arms at the sides of the body and without grasping the handrails, and attempted to keep the cursor in the middle of the bull's-eye on the screen. The duration of the test was 30 s for each child, and the mean of three repetitions was determined. The result displayed on the screen at the end of each test included the overall stability index, the anterior–posterior stability index, and the medio-lateral stability index. The Peabody Developmental Motor Scale (PDMS-2) was used for the assessment of gross motor function. The locomotion subtest of the PDMS-2 was conducted before and after three successive months of the training program. The scale is composed of six subtests that measure interrelated motor abilities that develop early in life; it was designed to assess motor skills in children from birth through 5 years of age. The subtests administered here include: Reflexes: eight items that measure the child's ability to react automatically to environmental events; because reflexes typically become integrated by the time a child is 12 months old, this subtest is given only to children from birth through 11 months of age. Stationary: thirty items that measure the child's ability to sustain control of his or her body within its center of gravity and retain equilibrium. Locomotion: eighty-nine items that measure the child's ability to move from one place to another.
The actions measured include crawling, walking, running, hopping, and jumping forward. Application of the scale included detecting the entry point (at which 75% of children in the normative sample at that age passed), the basal level (the last score of 2 on three items in a row before the 1 or 0 scores), and the ceiling level (when the child scores 0 on each of three items in a row) for each child before and after treatment. The scale is based on scoring each item as follows: (2) the child performs the item according to the criteria specified for mastery; (1) the child's performance shows a clear resemblance to the item mastery criteria but does not fully meet them; (0) the child cannot or will not attempt the item, or the attempt does not show that the skill is emerging. Gross motor quotient (GMQ): a composite of the results of the subtests that measure the use of the large muscle system. Three of the following four subtests form this composite score: Reflexes (birth to 11 months only), Stationary (all ages), Locomotion (all ages), and Object manipulation (12 months and older). 2.3.3 For treatment Both groups received a designed physical therapy program, applied for 1 h, three times per week, for three successive months. This program included the following: (1) Manual standing on the mat, grasping the child around his knees. (2) Manual standing on the mat with a step forward and a step backward, grasping both of the child's knees. (3) Kneeling and half kneeling on the mat to facilitate the creeping position. (4) Changing-position exercises from prone to standing and from supine to standing. (5) Equilibrium, protective, and righting reactions using a balance board and a medicine ball. (6) Balance training from standing on the mat by slightly pushing the child forward, backward, and laterally to increase standing balance. (7) Strengthening exercises for weak muscles, such as the dorsiflexors, using manual resistive exercises.
(8) Stooping and recovery exercises from the standing position. (9) Squatting-to-standing exercise. (10) Gait training: forward, backward, and sideways walking between the parallel bars (closed-environment gait training); obstacles, including rolls and wedges of different diameters and heights, were placed inside the parallel bars; gait training between the parallel bars using a stepper. (11) Stretching exercises for tight muscles, such as the hip flexors, hamstrings, and calf muscles in the lower limb, and the wrist flexors, pronators, and elbow flexors in the upper limb. In addition, the children in group B received treadmill training for 30 min. The treadmill apparatus (En Tred) is a steel structure 2.4 m long and 0.5 m wide, formed of a belt, two cylinders, and an axle along its width. The treadmill belt is a loop of synthetic rubber and nylon 3.75 m long that passes around two cylinders 0.31 m in diameter. Parallel bars are attached to vertical beams at each side of the apparatus, and their height was adjusted for each child. The procedure and goals of the exercise were explained to all children before they started walking on the treadmill. Children grasped both parallel bars of the treadmill firmly with both hands and were asked to look forward and not down at their feet during walking, as this may cause falling. At first the child held the handrails with both hands, then with one hand, until he/she gained self-confidence and walked on the treadmill without support. The exercise training consisted of 5 min of warm-up exercises, involving light stretching and walking back and forth inside the room, after which dynamic aerobic exercise on the treadmill began. A comfortable treadmill speed was selected for all children as 75% of their comfortable speed during overground walking, at zero-degree inclination, for 20 min [18] . The child was instructed to stop walking immediately if he felt pain, faintness, or shortness of breath.
Finally, cooling-down exercises for 5 min, involving light stretching and walking inside the room, were performed. The child was asked to wear the dynamic AFO during walking on the treadmill. The dynamic AFO is a very thin, flexible supramalleolar orthosis with a custom-contoured soleplate to provide support and stabilization to the dynamic arches of the foot. The bottom of the posterior cut-out needs to be just above the level of the ankle fulcrum of movement to give complete coverage of the calcaneus and allow comfortable ankle movement. A narrow posterior opening provides more complete medial–lateral control. A forefoot strap gives integrity to the anterior portion of the orthosis to maintain total contact and support to the forefoot and toes. AFOs are externally applied and intended to control the position and motion of the ankle, compensate for weakness, or correct deformities. This type of orthosis is generally constructed of lightweight polypropylene-based plastic in the shape of an “L”, with the upright portion behind the calf and the lower portion running under the foot; it is attached to the calf with a strap. 3 Statistical analysis The collected data on balance and the locomotion subtest of the Peabody Developmental Motor Scale for both groups were statistically analyzed to study the effects of treadmill training with the dynamic AFO in children with hemiplegic CP. Descriptive statistics were computed in the form of means and standard deviations for all measured variables, as well as for age, weight, and height. An unpaired t-test was conducted to compare the pre- and post-treatment mean values of all measured variables between the two groups, and a paired t-test to compare the pre- and post-treatment mean values within each group. All statistical analyses were conducted with SPSS (Statistical Package for the Social Sciences, version 20).
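The two comparisons described above can be sketched with SciPy standing in for SPSS (our substitution); the arrays below are synthetic stability-index scores, not the study's data:

```python
# Paired t-test (pre vs. post within a group) and unpaired t-test
# (post-treatment between groups), mirroring the analysis described
# in the text. All numbers are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_a = rng.normal(4.0, 0.5, 15)           # group A, pre-treatment
post_a = pre_a - rng.normal(0.6, 0.2, 15)  # group A, post-treatment
post_b = rng.normal(2.8, 0.4, 15)          # group B, post-treatment

# Paired t-test: pre- vs. post-treatment within group A
t_within, p_within = stats.ttest_rel(pre_a, post_a)

# Unpaired t-test: post-treatment values between groups A and B
t_between, p_between = stats.ttest_ind(post_a, post_b)
```

The paired test exploits the within-child correlation of pre and post scores, which is why it is the appropriate choice for the pre/post comparison within each group, while the independent-samples test is used between groups.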
4 Results 4.1 Subject characteristics Basic demographic data and clinical characteristics of the 30 participants with hemiplegic CP are presented in Table 1 . There was no statistically significant difference between the groups in age, weight, or height at baseline: the mean ± SD age was 8.11 ± 0.36 years in group A and 8.64 ± 0.48 years in group B, the mean ± SD weight was 32.06 ± 4.54 kg in group A and 32.66 ± 5.43 kg in group B, and the mean ± SD height was 132.12 ± 4.54 cm in group A and 136.33 ± 8.85 cm in group B ( p > 0.05). 4.2 Stability indices The collected data represent the statistical analysis of the stability indices, including the antero-posterior (A/P) stability index, the medio-lateral (M/L) stability index, and the overall stability index. The raw data of the measured variables for the two groups were statistically treated to obtain means and standard deviations. The results revealed no significant differences when comparing the pre-treatment mean values of the two groups ( p > 0.05). A significant reduction was observed in the mean values of the stability indices for both groups A and B at the end of treatment compared with the corresponding pre-treatment values ( p < 0.05). As shown in Fig. 1 , a significant difference was observed when comparing the post-treatment results of the two groups, in favor of study group B. 4.3 Peabody (locomotion subtest) Mean values and standard deviations of the locomotion subtest of the Peabody scale for groups A and B before and after 12 weeks of treatment are presented in Table 3 . The results revealed no significant differences when comparing the pre-treatment mean values of the two groups ( p > 0.05). There was a significant increase in the locomotion subtest scores of both groups compared with the corresponding pre-treatment values ( p < 0.05).
As shown in Fig. 2 , a significant difference was observed when comparing the post-treatment results of the two groups, in favor of study group B ( Table 2 ). 5 Discussion Children with hemiplegia may show a delay in the acquisition of various motor functions, such as gross motor skills, owing to spasticity and motor weakness, which consequently interferes with gait performance. The current study was therefore conducted to detect the effect of treadmill training with a dynamic AFO on the motor performance of the affected lower extremity in these children. The pre-treatment mean values of the overall, anteroposterior, and mediolateral stability indices of the dynamic balance test were significantly elevated, indicating that these children had significant balance problems. The pre-treatment mean values of the locomotion subtest of the Peabody Developmental Motor Scale II were decreased, indicating that these children had locomotion problems and difficulty. Dynamic postural control is impaired in children with cerebral palsy because of: (1) loss of selective muscle control; (2) abnormal muscle tone; (3) relative imbalance between muscle agonists and antagonists across joints; (4) deficient equilibrium reactions; and (5) dependence on primitive reflex patterns for ambulation [19] . Comparison of the pre-treatment results of the dynamic balance test, including the overall, anteroposterior, and medio-lateral stability indices, revealed nonsignificant differences between the groups, although the values were significantly elevated. Likewise, the pre-treatment mean values of the locomotion subtest of the Peabody scale showed nonsignificant differences between the groups but were significantly decreased in comparison with the normal values for children of the same age group [20] , indicating that they also had balance problems.
This could also be explained by the work of Lepage et al. [21] , who reported that children with spastic hemiplegia exhibit abnormal movement synergies, including deficits that interfere with various motor functions such as gross and fine motor skills. These results are consistent with those reported previously by Mark et al. [22] , who indicated that a higher stability index reflects poorer standing stability. The pre-treatment mean values of this study are also in accordance with the findings of Roncesvalles et al. [23] , who stated that one of the contributing factors to instability in children with spastic hemiplegia is a poor ability to increase muscle-response amplitude when balance threats increase in magnitude. Comparing the pre- and post-treatment mean values of balance and gross motor function in both groups showed significant improvement at the end of the treatment program. This improvement could be attributed to reduced muscle tone and improved joint range of motion. It is supported by Karimi et al. [24] , who stated that intensive reactive balance training provides more stabilization to the child and minimizes the displacement of the center of gravity (COG) over each foot, keeping the COG near the middle of the base of support. Our results could be explained by the work of Carvalho and Almeida [25] , who suggested that proprioceptive information is essential for the motor control system to select the appropriate motor strategy of reciprocal activation between agonists and antagonists to maintain balance efficiently. The highly significant improvement observed in group B, compared with the post-treatment results of group A, clearly demonstrates the value of using a dynamic ankle foot orthosis with a treadmill, in addition to the physical therapy program, for improving balance and gait in children with hemiplegia. This combination leads to improvement in the child's ability to stand and walk in a nearly normal way.
It thus allows better motor function and postural control and increases self-confidence and motivation. This agrees with Yamam et al. [26] , who reported the great importance of using an AFO in addition to gait exercise training in the early stages of rehabilitation of children with CP. It also agrees with Morris and Bartlett [27] , who reported that AFOs directly influence the alignment of the body segments and can influence hip and knee joint movements by manipulating the direction of the ground reaction force. Stabilizing the ankle and foot therefore allows therapy to focus on strengthening and encouraging better control over the proximal joints. The improvement seen in study group B may be due to the reciprocal movement elicited by treadmill training, which strengthens and stabilizes the neurological network involved in producing this pattern and improves the postural control mechanism [28] . The post-treatment results of study group B also agree with Matsuno et al. [29] , who concluded that because the treadmill is a moving surface, children need to spend more time with both feet on the surface during the walking cycle than when walking over ground. Our results further agree with [30] , who stated that treadmill training has a positive effect on balance in children with Down syndrome. The additional improvement in group B could be explained by the work of [31] , who studied the effect of a dynamic AFO with treadmill training on spasticity of the plantar flexors and on ankle range of motion; dynamic splints have moving parts that improve voluntary control of the spastic muscle and decrease pathologic loading forces on the structural components of the foot and lower extremity during weight-bearing activities. Our results also agree with Olma et al.
[32] , who stated that a three-side-support ankle-foot orthosis improves balance in children with spastic diplegic cerebral palsy.

6 Conclusion

The data in the present study suggest that 12 weeks of intervention with a dynamic AFO combined with treadmill training improves balance and gross motor performance related to standing and walking, without any negative effects. It is therefore recommended to include the dynamic AFO with treadmill training as a principal component of physical therapy programs directed toward improvement of balance and gait.

Acknowledgments

The authors would like to express their appreciation to all children who participated in this study. Special and deepest thanks to Prof. Dr. Faten H. AbdelAzem, chairman of the Department of Physical Therapy for Growth and Developmental Disorders in Children and its Surgery, Faculty of Physical Therapy, Cairo University, Egypt, for her great support and effort throughout this work. The authors declare no conflict of interest or funding for this research.
"RPETER",
"JAN",
"SCHITRA",
"WOOLLACOTT",
"OSHEA",
"GALLI",
"LIAO",
"MASSION",
"MENZ",
"GRIMSHAW",
"DAS",
"MORRIS",
"ROMKES",
"POHL",
"BUCKON",
"YAMAMOTO",
"BOHANNON",
"ELMENIAWY",
"MORRIS",
"TESTERMAN",
"HORAK",
"MARK",
"RONCESVALLES",
"KARIMI",
"CARVALHO",
"YAMAM"... |
6ca2f4770b204ad3a8854c888b286b60_Thank you reviewers_10.1016_S2666-3511(22)00065-1.xml | Thank you, reviewers!!! | [] | null | We take this opportunity to immensely thank all the distinguished reviewers, who have helped toward the growth of the journal. We greatly appreciate the time you have spent amidst your busy schedule towards the reviewing of the assigned manuscripts, without which it would be impossible to maintain the high standards of peer-reviewed journal such as SINTL. Thanks for all of you who reviewed the assigned articles for “ ” to make this a successful journal during the year 2022. Sensors International Jianguo Zhu Jie Zheng Zhicheng Zhang Nian Ashlee Zhang Jingdong Zhang Aminah S. Zawedde Atieh Zabihollahpoor Mehmet R. Yuce Bo Yu Hüseyin Yaşar Jingdong Xu Fan Wu Ming-Show Wong Qiuliang Wang H. Wang Shikha Wadhwa Santhosh Venkata N. Vasimalai Raju Vaishya Muammer Türkoǧlu Cihan Topcu V. Solis- Tinoco Molla Tefera Y. Tang Tomoya Takeda Shashi Shekhar T R Manu T M Siriporn Supratid Lin Sun Humbul Suleman Stankovski Stevan Ananya Srivastava Iñaki Soto-Rey Chaoyun Song Ravi Pratap Singh Namita Ashish Singh Dilbag Singh Uwe Siart Mohammad Shorfuzzaman Zhaoyao Shi Nagaraj P. Shetti Piyush Sindhu Sharma Himanshu Sharma Gaurav Sharma Mahesh Shanbhag Abdulkadir Şengür Fatih Sen Sudhir Chandra Sarangi Nileshi saraf Veera Sadhu Gopalakrishnan S. Susanta roy Susanta Roy souradeep roy Sharmili Roy B. C. Routara Gianpaolo Romano Jukka Rinne Xiaohu Ren Shekhar Ray Jahan Bakhsh Raoof Mouli Ramasamy Sivaramakrishnan Krishnan Rajaraman Ankush Raina A. V. Raghu Shanay Rab Hari Krishna R Arulmurugan R Shiwei Qu Mohd Asim Qadri Minakshi Prasad Mahdieh Poostchi Yosef Pinhasi akhilesh pathak Ajith Kumar Parlikad Junho Park Malathesh Pari Shailendra Pandey Mahesh P Bhat Ruud M. Oldenbeuving T Okamoto Alessendro Nutini R. Navamathavan Moustafa M. 
Nasralla Vinay Narwal Bo Nan Ryohei Nakayama Abir Benabid Najjar Ghasem Najafpour-Darzi Shalini Naidu Ernest Mwebaze M Motemedi Morteza Motallebnezhad kunal Mondal Debapriya Mohanty Santoshi Mohanta Hari Mohan Ahmad Mobed Hamid Reza Moazami Qingtao Meng Sarmento J. Mazivila Miloš Matúš Mangaka Matoetoe Ashish mathur Ronald Maschrhenus Mohamed Marey Khadija Maqbool Naveen Malvade Shweta J. Malode Himadri Majumder Kuldeep Mahato Qiang Ma Hanan Lutfiyya Donald W. Lupo Hang Luo Ping-Yu Liu Yi Li Long Li Jingen Li Woo Hyoung Lee Madhuri Kumari yogesh kumar Vanish Kumar Saravana Kumar Pragya Kumar Pradeep Kumar Neeraj Kumar Jagadeesh Kumar Avvaru Praveen Kumar Ashutosh Kumar Gururaj Kudur Jayaprakash S.B Krupanidhi Peter Križan Fanhua Kong Volodymyr Koman Close Sohaib Bin Altaf Khattak Manika Khanuja Shahbaz Khan Safyan A. Khan Raju Khan Ibrahim Haleem Khan Azhar Khan Jari Keskinen Ashutosh Kedar Marina Kawaguchi-Suzuki Ankur Kaushal Tavneet Kaur Ioannis Katsidimas Machavaram Venkata Kartikeyan Fatemeh Karimi Yiwei Kao Sushma Kalmodia Ramji Kalidoss Erbay Kalay Shankara Kalanur Peter Masoko John Laiming Jiang He Jiun Jiang Sudan Raj Jegan Mohan Zainul Abdin Jaffery Alyani Ismail Ömer Isildak Mir Irfan Ul Haq Qun Treen Huo Md.Bellal Hossain Akiyoshi Hizukuri G. Hemavathi S.A. Hassanzadeh-Tabrizi Shinsuke Hara Sufia Haque David A. Hall Abid Haleem Meliha Burcu Gürdere Ankur Gupta Nafisa Gull Xuemai Gu Hui-Wen Gu Muhammed Golec Yun Ii Go Liam Gillan Sukhpal Singh Gill Kajal Ghosal Damayanti C. Gharpure Harish Garg Rahul Kumar Gangwar E N Ganesh Tapan Kumar Gandhi Manasa G Yen-Pei Fu Wolfgang Fritzsche Ricardo Franco Eman Yossri Z. Frag Gabriel Filios R. Ferriols Maryam J. Rastegar Fatemi Ronaldo Censi Faria Damien Ali Hamada Fakra Etshaam Etshaam Reza Eslami Sitotaw Eshete Ricardo Antonio Escalona-Villalpando Pınar Esra Erden Amir Elzwawy Adham A. El-Zomrawy Eman S. Elzanfaly Sally E.A. Elashery Gorachand Dutta Dan Ding S Desai Kenneth A. 
Cusi Osman Cubuk Ana-Maria Cretu Ravikumar CR Paulo Costa Tim P. Comyn Zhenyu Chu B.C. Choudhary B. Chethan Jianguo Chen Kean-How Cheah Jasmine Chawla Rakesh Chaudhari Sanghamitra K. Chatterjee Saeid Charsouei Murat Ceylan Alejandro Castillo Atoche Manjunath C R Abdullah Al-Mamun Bulbul Ümit Budak Dr Pradeep Kumar Brahman Tayeb Brahimi Arnab Bose Vivek B. Borse Jerzy Bochnia J. L. Bhowmik Dinesh Bhatia S. V. Bhandary Neil W. Bergmann Soufiane Belhouideg Derzija Begić-Hajdarević A Baskar Souvik Basak Komal Bapna Thar Baker Shikandar D B Navid Aslfattahi Osman Nuri Aslan Manjunath AS Catur Apriono Shady Amin Elhassan amaterz Redha A. Ali Tuncay Alan Md Shah Alam Lubna Akhtar Ayman M. Abdalla | [] |
782fc62c3bcf406ea2ba0d80ef7e3b1f_Research on the durability performance of CFRP bonded anchors subjected to coupled multi-factor cond_10.1016_j.cscm.2025.e05093.xml | Research on the durability performance of CFRP bonded anchors subjected to coupled multi-factor conditions | [
"Luo, Yuting",
"Jiang, Haozhe",
"Guo, Shufeng",
"Wang, Minzhe",
"Zhuge, Ping"
] | Large carbon fiber reinforced polymer (CFRP) tendon anchors are bonded anchors, whose durability is governed by degradation of the bond interface in the anchorage zone. This paper investigates the mechanical behavior of the bond interface in the anchorage zone through pull-out tests on 54 CFRP tendons exposed to various adverse environmental factors and loads. These adverse factors include freeze-thaw cycles, temperature, temperature-humidity coupling, and temperature-humidity-sustained load coupling. The tests quantify the influence of these factors on the maximum pull-out force and residual bond strength of the bonded interface, allowing their effect on the ultimate tensile strength of the CFRP anchors to be evaluated. The results show that after 30 and 50 freeze-thaw cycles, the maximum pull-out force decreased by 6.87 % and 22.34 %, respectively, compared to the control group. Temperature weakens the bonding performance in the short term, but in the long term the post-curing reaction can increase the maximum pull-out force by 13–20 %. The maximum pull-out force decreased by 6.9 % in a temperature-humidity coupled environment, while the residual bearing capacity increased by 42.67 % due to increased friction. The maximum pull-out force decreased by a further 16.55 % under the combined conditions of temperature, humidity, and sustained load. Finally, the combination of multiple factors (temperature, humidity, freeze-thaw, and sustained load) proved to be the most unfavorable condition, leading to a maximum pull-out force decrease of 32.37 % and a complete change of the failure mode to interface failure. This study provides an important basis for assessing the durability of CFRP used in tensile members of bridges. 
| 1 Introduction

CFRP (carbon fiber-reinforced polymer) materials, owing to their light weight, high strength, durability, and corrosion resistance, have been increasingly applied in various civilian domains, particularly civil engineering. In 1981, Meier first employed CFRP sheets to reinforce the Ebach Bridge [1] . An early notable application involved CFRP tendons in a 61 m cable-stayed bridge in Winterthur, Switzerland [2] . Common forms of CFRP materials include strips, plates, and grid fabrics [3–5] . Among these, CFRP tendons are increasingly being employed to replace steel rebars in concrete structures, such as bridges and marine structures in corrosive environments, to address the limitations of steel reinforcement, including poor durability, reduced load-carrying capacity, and diminished fatigue resistance. Beyond their role as prestressed tendons, CFRP materials also hold great promise for lightweight design in structural engineering applications. In large-span bridge engineering, the tendons of suspension and cable-stayed bridges are pivotal load-bearing components, and their performance directly affects the service life and operational safety of these bridges. Traditional steel cables are heavy, susceptible to corrosion, and prone to sagging, and their maintenance is costly. Conversely, CFRP tendons offer enhanced load-bearing capacity while facilitating wider spans and demonstrating exceptional corrosion resistance. However, CFRP is an anisotropic material with low shear strength, and traditional anchoring methods can easily lead to shear failure. Consequently, innovation in anchoring technology has emerged as a bottleneck in the engineering application of CFRP tendons [6–11] . The current mainstream anchoring types include self-anchoring, friction, bonded, and composite [12] .
Among them, the bonded anchor significantly reduces stress concentration through chemical bonding, friction interlocking [13] , and stress redistribution at the interfaces between the CFRP cable body, the bonding layer, and the steel sleeve, demonstrating outstanding advantages in fatigue resistance and ease of construction. However, over the entire service life of a bridge, the anchoring system must withstand the combined effects of temperature fluctuations between −30 °C and 60 °C, high-humidity penetration (RH ≥ 90 %), and sustained loads up to 40 % of the tendon's ultimate tensile strength (UTS) [14–16] , which cause nonlinear degradation of the interface performance: the coupling of freeze-thaw cycles and sustained loads reduces the ultimate tensile strength of test samples by 24 % [17] ; after 6200 h of hygrothermal aging, the bending strength of the epoxy resin matrix decreased by 60 %; and the interface shear strength exhibits an exponential decline under salt spray corrosion [18] . Although current research has revealed, through single-environment tests (such as the SIA 262/1:2013–08 freeze-thaw standard [19] ), a pattern of significantly reduced anchorage resistance of prestressed CFRP samples after freeze-thaw cycles, a systematic understanding of the interactive mechanisms involving resin phase transitions, microcrack propagation (width > 50 µm, reduced Paris-law threshold), and stress corrosion under multi-field coupling is still lacking, and international standards (such as ACI 440.3R-12) only vaguely specify a durability reduction factor Δk for CFRP materials (0.9–1.0) [20] , without providing quantitative criteria for the shear strength threshold at the anchorage interface or for damage accumulation. Therefore, developing a CFRP anchoring system that meets durability requirements is of critical importance.
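The effect of applying such a durability reduction factor can be illustrated with a short sketch. This is a generic illustration only: the tendon UTS of 2400 MPa is an assumed figure (the study's actual material properties are given in Table 1), and 0.9 is simply the lower bound of the Δk range quoted above.

```python
def design_strength(uts_mpa: float, delta_k: float) -> float:
    """Apply a durability (environmental) reduction factor to the UTS.

    delta_k lies in [0.9, 1.0] per the range quoted for CFRP above;
    the UTS value used below is hypothetical, not from this study.
    """
    if not 0.0 < delta_k <= 1.0:
        raise ValueError("reduction factor must lie in (0, 1]")
    return uts_mpa * delta_k

# Hypothetical CFRP tendon UTS of 2400 MPa with the lower-bound factor:
print(design_strength(2400.0, 0.9))  # ≈ 2160 MPa
```

The coarse range of Δk is precisely the gap the study points at: the factor folds all environments into one number, with no distinction between, say, freeze-thaw and hygrothermal exposure.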
Durability testing of CFRP materials primarily focuses on the impact of environmental conditions on their performance. Karbhari et al. [21] conducted a comprehensive durability gap analysis of CFRP materials used in civil engineering, identifying seven environmental conditions that most significantly influence FRP durability: (i) humidity/water [22,23] ; (ii) chemical environment [24] ; (iii) temperature [25,26] ; (iv) freeze-thaw cycles [27] ; (v) sustained loading [4] ; (vi) ultraviolet radiation [28] ; and (vii) fire [29] . These conditions act on materials singly or in combination, affecting their performance to varying degrees. Presently, the majority of research on the durability of CFRP anchors is conducted using laboratory-based accelerated aging devices. These devices simulate actual environmental conditions, enabling the acquisition of material performance data under conditions similar to real-world service environments within a relatively short timeframe, and thereby provide reliable data for durability assessment. Regarding durability evaluation methods, Yang et al. [13] proposed a decay model for the mechanical properties of FRP under single accelerated aging and composite aging tests. Durability research on CFRP bonded anchors currently focuses on performance changes under water immersion and hygrothermal conditions. Through systematic experimental research, Xie [30] revealed the mechanism by which moisture influences the anchoring zone: CFRP tendons themselves have extremely low water absorption, but microcracks and defects at the interface cause a significant increase in the water absorption rate at the interface.
Additionally, by placing anchorage devices in water environments at different temperatures, researchers conducted an in-depth investigation into the influence of humidity and heat on the bonding performance of the anchorage zone, employing a combination of static and fatigue testing to quantitatively analyze the effects of immersion time and temperature on fatigue life [31] . Wang [32] explored the impact of temperature on the durability of CFRP anchorage and used Arrhenius theory to predict the long-term degradation of CFRP tendon anchoring performance in representative regions around the world. Different single environmental conditions contribute differently to the bond strength degradation of bonded anchors. As demonstrated in Ren's research [33] , wet-dry environments had a smaller adverse effect on the ultimate tensile strength of specimens than freeze-thaw environments. Wang [34] studied the alterations in the bonding performance of CFRP fabric bonded to shale bricks; the findings indicated that the combined effect of high temperature and freeze-thaw cycles had the most significant impact on the interfacial shear strength, followed by freeze-thaw cycles alone, while high temperature alone had the least influence. Scholars have reached different conclusions regarding the strength degradation of bond interfaces under coupled environmental effects. Correia [22] studied the durability of reinforced concrete (RC) slabs strengthened with prestressed CFRP strips under various environmental conditions and found that sustained loading had no significant effect on the ultimate strength of the beams. However, Harmanci's study [19] showed that under freeze-thaw cycles, CFRP anchoring systems subjected to sustained loads exhibited a significant reduction in tensile strength, stiffness, and deformation capacity.
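The Arrhenius-based extrapolation mentioned for Wang's study can be sketched as follows. This is a generic illustration, not the paper's actual model: the activation energy of 60 kJ/mol is an assumed, textbook-typical value for epoxy degradation, and the temperatures are chosen only to mirror the exposure conditions used later in this study.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def time_shift_factor(t_test_c: float, t_service_c: float,
                      ea_j_mol: float = 60e3) -> float:
    """Arrhenius time-shift factor: how much faster a thermally activated
    degradation process runs at the elevated test temperature than at the
    service temperature.

    TSF = exp(Ea/R * (1/T_service - 1/T_test)), temperatures in kelvin.
    Ea = 60 kJ/mol is an assumed value, not taken from this study.
    """
    t_test = t_test_c + 273.15
    t_service = t_service_c + 273.15
    return math.exp(ea_j_mol / R * (1.0 / t_service - 1.0 / t_test))

# Aging at 60 C versus a 20 C service environment: roughly a twenty-fold
# acceleration under the assumed activation energy.
print(round(time_shift_factor(60.0, 20.0), 1))
```

Under this kind of mapping, a few months in a heated bath stands in for years of service exposure, which is why the accelerated-aging devices described above are the standard tool for durability work.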
Therefore, further systematic research is needed on the interface bonding performance of FRP anchoring systems under coupled effects. This study uses inner-cone bonded anchors to provide reliable anchoring capacity for CFRP tendons. The anchoring capacity is evaluated through tensile mechanical analysis of 54 specimens. The research variables encompass conventional working conditions, including freeze-thaw cycles, temperature gradients, temperature-humidity coupling, temperature-humidity-sustained load coupling, and temperature-humidity-load-freeze-thaw multi-factor coupling. Furthermore, the failure modes and load-slip curve changes of CFRP bonded anchors under complex environmental effects are presented. Finally, the study focuses on the time-dependent evolution of anchorage interface slip behavior and load-carrying capacity degradation, revealing the friction-enhancing effect on residual bonding strength and the associated failure modes. This study aims to test the central hypothesis that the cumulative degradation effect on CFRP anchorages under multi-factor coupling (e.g., freeze-thaw, sustained load, temperature-humidity) exceeds the sum of the individual effects. By investigating this hypothesis and identifying the dominant damage mechanisms involved, this research establishes the experimental foundation for a life prediction model for anchorage systems that accurately accounts for the synergistic effects of environmental and load factors.

2 Experimental program

2.1 Material properties

This study investigated the bond interface degradation of CFRP bonded anchors under various complex environments, focusing on CFRP tendons as reinforcement. The anchoring adhesive was formulated from Sikadur 42 epoxy resin mixed with silica sand to reduce the curing exotherm, decrease adhesive porosity, and increase the friction coefficient at the CFRP-epoxy interface, which enhances the anchoring performance [35,36] .
The steel sleeves were made of 40Cr steel, and the CFRP tendons featured a textured surface, which enhances friction and improves the anchorage effect ( Fig. 1 ). Material properties are summarized in Table 1 .

2.2 Specimen design

Guo et al. [13] and Li et al. [37] reported that the maximum bond strength for textured CFRP tendons is achieved at an anchorage length of 50 mm. Each specimen comprised a CFRP tendon bonded within two steel sleeves (short Anchor A and long Anchor B). To ensure that Anchor A fails first, its length was set to 60 mm (50 mm anchorage length) and that of Anchor B to 70 mm. The free end of Anchor A was threaded to allow prestress application. The dimensions are shown in Fig. 2 .

2.2.1 Sustained load application

Because bridge design guidelines typically limit the maximum sustained stress in tendons to 40 % of their ultimate tensile strength (UTS) [38] , the sustained load in this study was set to 40 % of the CFRP tendon's UTS. The load was applied using preloaded springs and decreased by approximately 10 % during adhesive curing ( Fig. 3 ). An unloaded specimen is shown in Fig. 6 for reference.

2.2.2 Environmental condition configuration

The experiment included six types of exposure conditions and 54 test specimens. A schematic of the experimental apparatus for freeze-thaw and temperature/humidity cycling is shown in Fig. 4 , and the specific environmental settings are detailed in Table 2 . For the freeze-thaw cycles, the temperature was ramped during each 3-hour cycle between (2 ± 2) °C and (−15 ± 2) °C and held at each extreme; the cooling, low-temperature hold, heating, and high-temperature hold phases each lasted 45 min.

2.3 Test setup and procedure

To ensure effective initial anchorage, a target radial compressive stress of 100 MPa was induced at the CFRP-epoxy interface during fabrication [39–42] . As illustrated in Fig. 5 , this was achieved by applying an 18 kN axial pre-compression force to the assembly.
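As a rough numerical check on these figures (a sketch only: it assumes the 18 kN force acts uniformly over the full 15 mm-diameter bore, and that the radial stress in the confined, near-incompressible epoxy approaches the axial value):

```python
import math

def axial_stress_mpa(force_kn: float, diameter_mm: float) -> float:
    """Mean axial stress (MPa) from a force applied over a circular bore.

    Assumes uniform stress over the full cross-section; for the epoxy
    confined in the rigid barrel, the radial stress approaches this value
    as the effective Poisson's ratio tends toward 0.5.
    """
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    return force_kn * 1e3 / area_mm2  # N / mm^2 == MPa

# 18 kN over a 15 mm bore is consistent with the ~100 MPa target above:
print(round(axial_stress_mpa(18.0, 15.0), 1))  # ≈ 101.9 MPa
```

The back-of-envelope result landing close to 100 MPa supports the stated pairing of the 18 kN pre-compression force with the target interface stress.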
This axial force generates the target radial stress through the Poisson effect of the epoxy resin confined within the rigid anchor barrel (d = 15 mm). This pre-stressing state is critical for establishing the initial friction and mechanical interlock of the anchor system. Pull-out tests were conducted using a universal testing machine at a loading speed of 5 mm/min. A custom-fabricated steel reaction frame connected the anchor to the testing fixture, as illustrated in the experimental setup ( Fig. 6 ). The measured parameters were the UTS and slip of the specimens.

3 Test results

3.1 Failure mode

A summary of the results of each test group is given in Table 3 . The epoxy of all specimens was inspected at the end of the experiment and retained good integrity and compactness, indicating that the experimental data are reliable. Typical failure modes of the specimens are shown in Fig. 8 , and the failure types are categorized as follows (see Fig. 7 ): (1) CF-type failure is characterized by stripping of the surface of the CFRP tendon. It is typically caused by the scatter in the quality of the CFRP tendon material itself and occurs when the bond at the CFRP-resin interface is stronger than the tendon's own surface layer. (2) RF-type failure is interfacial debonding between the resin and the carbon tendon, usually observed as a layer of epoxy residue on the tendon near the load-bearing end, along with a smooth pulled-out surface near the free end. (3) EF-type failure is characterized by fracture of the resin; a thicker layer of epoxy residue is typically observed on the tendon near the load-bearing end. (4) RAF-type failure is defined as interlayer debonding between the binder and the anchorage.
This failure type is mainly due to inadequate adhesion at the interface between the epoxy resin and the anchorage, as well as incompatibility of the coefficients of thermal expansion of the different materials, leading to interlayer separation. (5) FF-type failure is a premature fracture of the CFRP tendon, typically occurring near the anchorage zone. This failure mode is primarily attributed to complex, localized stress states, such as non-axial tension and stress concentrations, induced during tensioning. The variation in failure modes observed under nominally identical conditions (e.g., within the T50-T7 group in Table 3 ) is attributed to the inherent stochastic nature of composite fracture; this scatter stems from unavoidable factors such as microscopic material heterogeneity and specimen-specific stress states during testing. In both the Control Group and the FT Group, CF+EF failure occurred. The specimen surfaces were smooth, with no visible signs of corrosion. Upon reaching the ultimate state, the anchors emitted a clear cracking sound. The pulled-out CFRP tendons retained some resin at the load end, and the tendon surface had peeled off while the resin remained compact. This outcome is attributed to the minimal influence of the freeze-thaw cycles on the adhesive interface, while the prestressing force and the inner-cone design of the anchor concentrated stress at the load end. In the T and TH series, most failures were of the CF+RF type, while a few specimens exhibited FF and RF failures (e.g., T50-T7-C, T70-T7). Upon reaching the ultimate state, minimal binder residue was found on the surface at the load end. This reveals that the temperature factor mainly drives failure at the interface between the resin and the carbon tendon.
In the THL series specimens, CF+RF+EF failure occurred. The anchors had undergone waterproofing treatment and showed no noticeable rust on the surface. During pull-out, EF failure occurred closest to the load end, followed by CF failure, with RF failure at the end of the pulled-out section, similar to the failure pattern of the control group. Inspection of the epoxy after testing revealed cracks, suggesting that the aging conditions affected the bond strength of the interface and the stability of the overall structure. The THLF series specimens showed only RF failure. Measurement of the spring length indicated that the sustained load had relaxed by approximately 10 %. No surface fiber peeling was found on the carbon tendon at the load end; however, microcracking had increased inside the epoxy resin. The strength of CFRP bonded anchorages is primarily determined by the bond characteristics at the interfaces among the CFRP tendon, resin, and anchorage, and by the resin's strength, rather than by the strength of the anchor sleeve or the CFRP tendon itself. Consequently, the material properties of these elements give rise to the various failure modes observed under environmental factors including freeze-thaw cycles, temperature, sustained load, and temperature-humidity coupling.

3.2 Load-slip response

To construct the idealized trilinear models, a systematic methodology was employed to define the transition points between stages, ensuring consistency and reliability based on the triplicate test results. The coordinates of the key transition points of the model curve are determined by averaging the characteristic points of the three experimental curves. Specifically:

1. Elastic-to-plastic transition (yield point): the first transition point, marking the end of the linear-elastic stage, is defined by the arithmetic mean of the load and slip coordinates at which the experimental curves first exhibit a distinct deviation from their initial linear stiffness.

2. Peak load point: the second transition point, representing the ultimate capacity, is calculated as the arithmetic mean of the three peak loads and their corresponding slips.

3. Residual strength value: the complex post-peak behavior is simplified into a constant residual strength for practical engineering applications. The rationale for determining this value is twofold. For specimens exhibiting a clear step-like 'stick-slip' curve (such as the control group), the constant residual strength is derived from the arithmetic mean of all local load peaks and valleys recorded in the post-peak stage; this provides a balanced representation of the average frictional resistance. For specimens lacking a distinct step-like pattern and showing a more gradual decline (e.g., FT-30-C, T60-T1-C), the value is determined by averaging all load data points within a predefined slip range after the peak load.

This transparent and repeatable methodology ensures that the derived model curves are a robust representation of average performance, suitable for subsequent analysis and comparison. The control group ( Fig. 9 ) exemplifies this, with its load-slip curves exhibiting three distinct stages: linear-elastic, plastic, and residual stress. The features of these stages are as follows: (1) In the linear-elastic stage, the CFRP is in a linear-elastic steady state, the load-slip curve increases linearly, and the average stiffness reaches 3.1 kN/mm. After a certain pull-out force is applied, the epoxy resin begins to crack further.
(2) In the plastic stage, the plastic behavior is likely closely related to interface bond slip, local damage within the resin or CFRP tendon, and stress redistribution. As the pull-out force increases, the bond interface between the CFRP tendon and the anchorage gradually slips, leading to local debonding and stress redistribution. Concurrently, the resin matrix or the CFRP tendon may undergo local plastic deformation or damage in areas of stress concentration (such as fiber breakage or resin cracking). These factors collectively induce the transition from the linear-elastic phase to the plastic phase, which is characterized by nonlinear deformation and changes in bearing capacity. (3) In the residual stress stage, reached after the maximum tensile strength, the CFRP tendon and the transmission medium slide relative to each other. According to Guo's research [13] , the interfacial adhesion essentially disappears at this point, and resistance depends mainly on the mechanical shear force and friction at the surface of the CFRP tendon. As the tensile force increases, a stepwise curve repeatedly occurs; as displacement increases, the interface becomes smoother, causing both the shear and friction forces to decrease continuously, ultimately approaching zero. For the FT group, the load-slip curve also shows the three-stage characteristic. As shown in Figs. 10 (a) and 8 (b), owing to the change in the properties of the epoxy resin caused by freeze-thaw action, the linear-elastic stage shows high stiffness, but the maximum pull-out force after 30 and 50 cycles decreased by 6.87 % and 22.34 %, respectively, compared with the control group. In the residual bearing capacity stage, the load can still increase and high stiffness is maintained.
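The averaging procedure described in Section 3.2 for constructing the idealized trilinear curves can be sketched as follows. This is a minimal illustration on hypothetical replicate data: the `curves` argument stands in for the three measured load-slip records, and the 70 %-of-peak rule for locating the yield point is an assumption standing in for the study's visual identification of the loss of initial linear stiffness.

```python
def trilinear_model(curves, residual_window=(6.0, 10.0)):
    """Average three replicate load-slip records into an idealized
    trilinear curve: (yield point, peak point, constant residual load).

    Each curve is a list of (slip_mm, load_kN) tuples.  The yield point
    is approximated here as the first point at 70 % of peak load (an
    assumption; the study identifies it from the loss of linear stiffness).
    """
    yields, peaks, residuals = [], [], []
    for curve in curves:
        peak = max(curve, key=lambda p: p[1])
        peaks.append(peak)
        # first point at/above 70 % of peak load ~ end of the linear stage
        yields.append(next(p for p in curve if p[1] >= 0.7 * peak[1]))
        # constant residual strength: mean load over a post-peak slip window
        window = [load for slip, load in curve
                  if residual_window[0] <= slip <= residual_window[1]]
        residuals.append(sum(window) / len(window))
    mean = lambda pts: (sum(p[0] for p in pts) / len(pts),
                        sum(p[1] for p in pts) / len(pts))
    return mean(yields), mean(peaks), sum(residuals) / len(residuals)
```

For stick-slip curves, the post-peak window average plays the role of the peak-and-valley mean described above; both reduce the oscillating frictional stage to a single design value.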
Moreover, the residual bearing capacity of the 30-cycle group increased by as much as 103.34 % compared with the control group, while that of the 50-cycle group decreased by 15.17 %. As illustrated in Fig. 11 , the average stiffness of the specimens conditioned at 50 °C, 60 °C, and 70 °C in the linear-elastic stage was 2.64 kN/mm, 3.44 kN/mm, and 3.05 kN/mm, respectively; these values are consistent and relatively stable. Compared with the control group, the maximum pull-out force did not change much after seven months of exposure at 50 °C or one month at 60 °C. However, after five and seven months at 60 °C and five and seven months at 70 °C, the maximum pull-out force increased substantially relative to the control group, with growth rates of 12.98 %, 17.07 %, 19.58 %, and 19.97 %, respectively. As illustrated in Fig. 15 (a), the TH group exhibited a 3.64 % increase in stiffness in the linear growth phase compared with the control group, and a 1.95 % increase in the plastic phase. Conversely, the maximum pull-out force decreased by 6.90 % relative to the control group, while the average residual load capacity increased by 42.67 %. For the THL-AT3 group, as illustrated in Fig. 12 (b), the two replicate specimens (THL-AT3-A and THL-AT3-B) exhibited excellent agreement in their load-slip behavior up to the peak load. This consistency validates the determination of the maximum pull-out force, which decreased by 7.37 % compared with the control group. However, the specimens displayed highly divergent post-peak behaviors: specimen THL-AT3-A underwent a brittle, catastrophic failure with an abrupt loss of load-carrying capacity, whereas specimen THL-AT3-B showed a more gradual softening with significant residual strength.
This stark difference in post-peak response makes it infeasible to define a single, representative residual phase for the THL-AT3 model. For THL-BT3, THL-BT5, and THL-BT7, the maximum pull-out forces were reduced by 17.87 %, 9.59 %, and 16.55 %, respectively, and the average residual bearing capacity was increased by 48.59 %, 24.16 %, and 65.30 %, respectively. Additionally, it was evident that, under the combined influence of temperature, humidity, and sustained load, the specimens exhibited a faster rate of growth in the linear phase. This was evidenced by the fact that, when the maximum pull-out force was attained, the slip of the specimens was significantly smaller compared to that of the control group. For the THLF group, epoxy resin failure within the anchorage is particularly severe. The maximum pull-out force is the lowest among all groups, averaging only 15.88 kN, which is 32.37 % lower than that of the control group. However, in the residual bearing capacity stage, this group demonstrates a marked increase in ductility, characterized by multiple step-like curves and a linearly increasing stiffness value of approximately 3.1, accompanied by a gradual decrease in residual bearing capacity. 4 Multi-factor analysis 4.1 Freeze-thaw factor As shown in Fig. 14 , the maximum pull-out force of the test specimens decreased to varying degrees after different numbers of freeze-thaw cycles. After 30 freeze-thaw cycles, the maximum pull-out force of the test specimens decreased by 6.87 %, and after 50 freeze-thaw cycles, it decreased by 22.34 %. It is evident that as the number of freeze-thaw cycles increases, the average maximum pull-out force of the specimens exhibits a substantial degradation trend. This phenomenon is primarily attributed to damage at the interface between the CFRP tendons and the resin matrix, induced by freeze-thaw cycles.
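The percentage figures quoted throughout this section all follow one relative-decrease definition. As an illustrative consistency check (not part of the original analysis), the control-group mean implied by the THLF figures above (15.88 kN and a 32.37 % decrease) can be backed out:

```python
def percent_decrease(control, value):
    """Relative decrease of `value` with respect to `control`, in percent."""
    return (control - value) / control * 100.0

# Back out the control-group mean pull-out force implied by the THLF figures:
# a mean of 15.88 kN stated to be a 32.37 % decrease relative to the control.
implied_control = 15.88 / (1 - 0.3237)   # roughly 23.5 kN
print(round(implied_control, 2))
print(round(percent_decrease(implied_control, 15.88), 2))  # recovers 32.37
```

The implied control mean (about 23.5 kN) is consistent with the 18 kN threshold later described as 76.6 % of the control-group strength.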
During these cycles, microcracks initiate and propagate within the resin matrix. These microcracks create pathways for moisture ingress. Subsequently, the volumetric expansion of this entrapped moisture during freezing further exacerbates the material damage. In addition, relevant studies have shown that the mechanical properties of CFRP tendons change significantly after freeze-thaw cycles: the tensile strength may decrease by 10–20 %, and the Young's modulus also tends to decrease [21] . From the perspective of interface performance, the effect of freeze-thaw cycles on the adhesive properties of resins is of particular significance. Moisture penetration, followed by its freezing and subsequent expansion within the resin, generates stress concentrations. These stresses promote the initiation and propagation of microcracks at the interface, thereby reducing the interface bonding strength. Furthermore, the metal utilized as an anchoring material also experiences performance degradation in a freeze-thaw environment, manifested as increased surface corrosion and microcrack propagation. Consequently, these factors collectively contribute to a weakening of the bond between the anchoring material and the CFRP tendons, resulting in a substantial reduction in tensile strength. The residual bearing capacity and slip value of the specimens exhibited a non-monotonic trend with the number of freeze-thaw cycles: after 30 freeze-thaw cycles, the residual bearing capacity increased by 103.34 % compared to the Control Group, reaching a peak. However, further freeze-thaw cycles (up to 50 cycles) led to severe performance degradation. At this point, the bearing capacity was not only 15.17 % lower than the initial Control Group, but also 58.28 % lower than the peak level after 30 cycles. This phenomenon may be related to the progressive damage mechanism of epoxy resins.
At the beginning of the freeze-thaw exposure (30 cycles), the interfacial bond matrix deteriorates, and the chips produced fill the gaps at the CFRP-anchorage interface. Due to the mechanical interlocking effect (P_interlock) that ensues, the interfacial friction (P_friction) increases. Consequently, this leads to an enhancement in the residual bearing capacity (as shown in Fig. 13 ). As the number of freeze-thaw cycles increases to 50, the bond matrix is further broken and even partially crumbled, the interfacial friction coefficient is therefore reduced, and the residual bearing capacity decreases accordingly. Similarly, the changing pattern of slip values (a 34.07 % decrease after 30 cycles and a 19.24 % decrease after 50 cycles) can also be attributed to a phased change in the damage pattern: after 30 freeze-thaw cycles, the sudden peeling failure was caused by severe local degradation of the interface adhesion (P_adhesion), which significantly inhibited the development of slippage; after 50 freeze-thaw cycles, however, the more uniform composite damage of matrix and interface delayed the failure process, resulting in a smaller decrease in the amount of slippage. This damage evolution feature is consistent with the phenomenon of “pseudo-enhancement” of bearing capacity during the initial stage of freeze-thaw reported in the literature [33] . Both reflect the nonlinear regulatory effect of the accumulation of micro-damage inside the material on the macroscopic mechanical response.
In summary, the freeze-thaw cycles caused microcracks in the resin matrix around the CFRP tendons, and moisture penetration and freezing expansion exacerbated the material damage, significantly reducing the tensile strength and mechanical properties. At the same time, the deterioration of the resin bonding properties and of the metal anchorage by the freeze-thaw cycles further weakened the interfacial bond. As a result, the residual load-carrying capacity may increase at the initial stage due to increased friction; however, as the number of cycles increased, the load-carrying capacity ultimately exhibited a decreasing trend. 4.2 Temperature factor As can be seen in Fig. 15 (a), under the condition of a 30-day aging period, the maximum pull-out force decreases with increasing temperature compared to the control group. A decrease of 5.84 % was recorded at 50 °C, 8.39 % at 60 °C, and a more substantial decrease of 19.03 % at 70 °C. This phenomenon is primarily associated with the inherent properties of epoxy resin. Epoxy resin exhibits temperature sensitivity. At elevated temperatures, the molecular chain movement within the resin matrix is intensified, leading to a softening of the material and a subsequent decay in bond strength [43] . Additionally, the disparity in thermal expansion coefficients between CFRP and the anchoring system, attributable to the elevated temperature, creates interfacial stresses that further degrade bond strength. After 150 and 210 days (see Fig. 15 (b) and (c)), the results showed that the post-curing reaction of epoxy resin is more effective at higher temperatures or over longer periods of time. The rationale for this is that, according to the Arrhenius equation, a higher temperature accelerates the rate of a chemical reaction [44] . The curing reaction of epoxy resin adheres to this principle as well; the reaction between the epoxy groups and the curing agent is accelerated at higher temperatures.
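The Arrhenius argument invoked here can be made concrete. With an assumed activation energy for the cure reaction (the 60 kJ/mol value below is a hypothetical placeholder, not a measured property of this resin), the rate ratio between two curing temperatures follows directly from k = A·exp(−Ea/RT):

```python
import math

R = 8.314    # J/(mol*K), universal gas constant
Ea = 60e3    # J/mol, HYPOTHETICAL activation energy of the epoxy cure reaction

def arrhenius_ratio(T1_C, T2_C, Ea=Ea):
    """Rate ratio k(T2)/k(T1) for a reaction at T2 vs T1 (Celsius inputs)."""
    T1, T2 = T1_C + 273.15, T2_C + 273.15
    return math.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))

# Raising the curing temperature from 50 C to 70 C accelerates the assumed
# post-curing reaction several-fold, consistent with the qualitative argument.
print(round(arrhenius_ratio(50, 70), 2))
```

The absolute factor depends entirely on the assumed Ea; the point is only that modest temperature increases produce multiplicative rate gains.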
Increases in temperature cause an increase in molecular motion, which accelerates the ring-opening and cross-linking reactions of the epoxy groups. An extension of the curing time leads to an increase in the epoxy groups engaged in the reaction, resulting in enhanced cross-linking [45,46] . However, it is imperative to note that this does not inherently signify an enhancement in durability. In the short term (30 days), elevated temperatures primarily exert a disadvantage. This includes increased interfacial stress due to thermal expansion and initial softening of the adhesive, resulting in a decrease in tensile strength with increasing temperature. Conversely, as the duration extends to 150 and 210 days, a confluence of advantageous factors emerges, culminating in an enhancement of bond strength with increasing temperature. A number of factors may be relevant in this case. First, the post-curing process of the resin can improve the overall performance of the material; second, stress relaxation may reduce the initial interfacial stress concentration; third, the improvement in interfacial chemical bonding may improve adhesion; and fourth, the aging and restructuring processes of the material will gradually stabilize the interfacial structure. The combined effect of these processes elucidates the phenomenon whereby the initial bond strength diminishes with increasing temperature, while in the long term, the reverse trend is observed. As Fig. 15 (d) shows, the tensile strength at 210 days is similar to that at 150 days, indicating that the early stage of the curing reaction was rapid. As the curing time increased, the maximum bond strength at the same temperature no longer changed and tended to stabilize. In addition, Fig. 15 a and b show that the residual bond strength has increased to varying degrees compared to the control group specimens. This outcome is analogous to that observed in the FT-30 series specimens that have undergone 30 freeze-thaw cycles. 
The temperature effect has been shown to cause the formation of microscopic cracks within the resin matrix and fine particles at the interface between the tendon and the resin. These microscopic alterations, in fact, serve to augment the friction between the interfaces. From the above analysis, it can be seen that in the short term (30 days), high temperatures cause the epoxy resin to soften and induce differential thermal expansion, which reduces the tensile strength as the temperature increases. In the long term (150 and 210 days), high temperatures accelerate the post-curing reaction of the epoxy resin, increasing the degree of cross-linking and thus increasing the bond strength as the temperature increases. As the curing time increases, the bond strength tends to stabilize, and the residual bond strength is improved due to interfacial friction and microstructural changes. 4.3 Temperature and humidity coupling The experimental data in Fig. 12 (a) demonstrate that, in comparison to the Control Group, the maximum pull-out force within the anchorage decreased by 6.9 % following 7 months of coupling at 50 °C and 100 % humidity. Concurrently, the average residual bearing capacity exhibited a significant increase of 42.67 %. However, Xie's article [31] revealed that the maximum tensile strength of the anchors declined by 33.86 % after being submerged in water at 50 °C for a period of only two months, a far greater degradation than that observed in the present test. This is because, in submerged conditions, water molecules are more likely to penetrate along the interface between the epoxy resin and the tendons, causing the dissolution of soluble and low molecular weight substances in the epoxy resin. Furthermore, the additional pressure induced by the pore water serves to intensify the stress at the crack tip. In this experiment, the anchorage was placed in a high and low temperature alternating humidity test chamber.
Compared to immersion, water molecules are less likely to penetrate the interface, so the degree of failure is relatively small. A comparison of the T50-T7 group with the TH-T7 group provides a more intuitive reflection of the impact of humidity. As shown in Fig. 17 , the maximum pull-out force of the temperature and humidity group is reduced by 6.26 % and the residual bearing capacity is reduced by 27.07 % compared to the T50-T7 group. As reported in Section 4.2 , the bond strength of the resin in the anchorage first decreased and then increased during the 210-day curing cycle. In other words, the maximum pull-out force decreased compared to the Control Group at 30 days. However, in a high-humidity environment (100 % humidity), according to Tam [47] , the accelerated movement of water molecules affected the molecular interactions and local structure of the epoxy resin. The softened epoxy resin exhibited a propensity to dislodge from the tendon-resin interface. The plasticization of epoxy resin molecules, induced by water and salt in proximity to the interface, significantly contributes to the degradation of interface integrity. This results in the post-curing reaction of the resin not being able to reach its previous state, and therefore the maximum load-bearing capacity is reduced. In addition, due to the intrusion of water molecules, the interfacial friction coefficient is reduced, and the residual load-bearing capacity is reduced compared to T50-T7. Overall, the effect of the combined action of heat and humidity on the performance of the anchorage system is complex. While it is evident that this combination diminishes the tensile strength, the impact on the residual load-bearing capacity is contingent upon the basis of comparison. This phenomenon is primarily attributable to alterations in the properties of the epoxy resin and the state of bonding at the resin-CFRP tendon interface.
The synergistic effect of elevated temperatures and humidity has been observed to expedite material degradation and damage processes; however, concurrently, it has been demonstrated to enhance interfacial adhesion and friction in certain instances. 4.4 Hygrothermal and sustained load coupling Given that CFRP anchors are invariably subjected to loading in practical applications, this experiment designed a coupled environment of temperature, humidity, and sustained load. As demonstrated in Fig. 16 , the maximum pull-out force of the THL-BT3 series specimens was 19.29 kN. Compared with the THL-AT3 series specimens, this represents an 11.35 % decrease in tensile strength, a finding that aligns closely with the experimental results documented in references [4,22,23] . It has been demonstrated that in a coupled temperature, humidity, and load environment, elevated temperatures and humidity levels exert a more substantial influence on the failure of specimens. Consequently, the 50 °C temperature and 100 % humidity group was subjected to further analysis to identify the most unfavorable factors. A comparative analysis of the THL-BT7 series and the TH-T7 series reveals the influence of sustained load on anchoring performance. As demonstrated in Fig. 17 , under identical conditions of 50 °C and 100 % humidity, sustained load resulted in a substantial decrease in the tensile strength of the specimens, reaching 10.38 %. This phenomenon indicates that, when combined with the effects of temperature and humidity, sustained load further exacerbates the damage to the resin matrix inside the steel sleeve.
It is worth noting that although interface damage, such as crack propagation and debris generation, enhanced the friction effect on the carbon tendon surface, resulting in a 15.86 % higher residual bearing capacity compared to the temperature-humidity coupling group, the test data showed that this residual capacity was still lower than the initial pull-out force. This underscores the critical role of the chemical bonding performance at the resin-carbon tendon interface as the primary determinant of anchoring quality. The deterioration of this chemical bonding is the primary cause of the overall decrease in bearing capacity, as the enhancements in interface friction and mechanical interlocking forces could not compensate for the degradation of the chemical bond. As shown in Fig. 12 (c), (d) and (e), after the curing process at 50 °C and 100 % humidity, the specimens exhibited nonlinear deterioration. In comparison with the control group, the maximum pull-out force of the specimens exhibited decreases of 17.87 %, 9.59 %, and 16.55 % after 90, 150, and 210 days of curing, respectively. This change pattern is closely related to the material's multi-response mechanism: as previously stated in Section 3.2 , the post-curing reaction of resin is typically completed within a limited time frame, potentially reaching completion at the three-month mark. Consequently, a slight decrease in tensile strength (9.59 %) was observed at the five-month mark. However, as the curing time was extended to seven months, the cumulative effect of the specimens' aging under harsh conditions became dominant once again, causing the decrease to intensify (16.55 %). This phenomenon suggests that exposure to high temperatures and humidity exerts a distinct, phase-dependent effect on material performance. In the initial stage (90 days), the synergistic effects of moisture penetration and thermal effects are the predominant factors, leading to rapid degradation of the interface.
In the middle stage (150 days), a new equilibrium state may be established within the material, causing the performance degradation to level off. In the long-term exposure stage (210 days), the continuous influence of environmental factors ultimately results in further degradation of material performance. As demonstrated in the preceding section, the performance of CFRP anchors exhibited marked phased degradation characteristics under the interaction of sustained load and temperature and humidity. The material underwent accelerated deterioration in the initial stage, a deceleration in the middle stage due to the post-curing reaction reaching a dynamic equilibrium, and further deterioration in the late stage due to the cumulative effects of the environment. Among these factors, the deterioration of the bonding performance of the resin-tendon interface was identified as a primary contributor to the reduction in load-bearing capacity. 4.5 Hygrothermal, freeze-thaw, and sustained load factors In light of the previously mentioned environmental factors, this study explores the failure of bonded anchors under various environmental influences, with a particular focus on the impact of the freeze-thaw cycle within a temperature-humidity sustained loaded coupled condition. The tensile strength of the THLF-T7 group exhibited a 32.3 % decrease, which is analogous to a 1/3 reduction in anchorage reliability when compared to the Control Group. Compared to the THL-BT7 series group (see Fig. 17 ), we found that the tensile strength decreased by 18.96 %, while the residual bearing capacity decreased by 30.33 %. According to the analysis in Section 3.3, temperature-humidity coupled environments (such as 50°C, 100 % humidity) accelerate the hygrothermal aging of the resin matrix, leading to interfacial debonding. 
The further introduction of freeze-thaw cycles leads to the following effects: (1) Microcrack network formation: freeze-thaw cycles cause isolated microcracks formed in a humid and hot environment to connect into a network structure, reducing the integrity of the substrate (see reference [48] ). (2) Increased porosity: freeze-thaw causes volume changes that lead to increased porosity within the matrix. Similar to the increase in concrete porosity observed in the freeze-thaw and fatigue load coupling group in reference [48] , the resin matrix may produce a similar effect. (3) Chemical bond breakage: temperature and humidity have been shown to cause hydrolysis reactions in the resin, while freeze-thaw cycles accelerate chemical bond breakage through mechanical stress, thereby reducing the interfacial bond strength (see reference [49] ). Moreover, the presence of debris and micro-protrusions, resulting from interface damage, led to an augmentation in the friction contact area between CFRP and resin. Consequently, this led to an increase in the residual bearing capacity and ductility of the specimens when compared to the control group. The introduction of freeze-thaw cycles resulted in a decline in the tensile strength and residual bearing capacity of the anchors, exhibiting variation in their response under the coupled conditions of temperature, humidity, and sustained load. Freeze-thaw cycles exacerbated microcrack networking, increased porosity, and broke chemical bonds, leading to interface debonding and a decrease in the overall integrity of the matrix. 4.6 Summary of worst-case factor analysis The experimental findings indicate that when the tensile strength is ≤ 18 kN (equivalent to 76.6 % of the control group), the failure mode transitions from CF+EF to RF, a phenomenon that resembles the failure mode of anchorage under varying aging conditions, as referenced in works such as [23,50,51] .
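The empirical threshold just described (failure switching to the RF mode at or below about 18 kN, roughly 76.6 % of the control-group strength) can be encoded as a trivial rule. The function below is only an illustrative restatement of that observation, not a validated predictive model:

```python
def predicted_failure_mode(p_max_kN, threshold_kN=18.0):
    """Empirical rule from the tests: specimens whose tensile strength falls
    to or below ~18 kN (about 76.6 % of the control group) failed at the
    carbon fiber-resin interface (RF); stronger specimens showed combined
    carbon-fiber surface delamination and epoxy cohesive failure (CF+EF).
    """
    return "RF" if p_max_kN <= threshold_kN else "CF+EF"

# The THLF group mean (15.88 kN) falls below the threshold, consistent with
# its observed RF-type failure; the control-group level (~23.5 kN) does not.
print(predicted_failure_mode(15.88), predicted_failure_mode(23.5))
```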
In these cases, the failure mode shifts from concrete base failure to epoxy/concrete interface failure. A comprehensive analysis of the first four sections reveals significant variations in key control factors under diverse environmental conditions. In the context of a freeze-thaw cycle environment, moisture penetration and freezing expansion emerge as the predominant control factors. In high-temperature environments, resin softening and post-curing reactions assume a dominant role. In environments involving coupled temperature and humidity, resin aging and alterations in the interface bonding state become pivotal factors. In sustained load-coupled environments, resin matrix damage and the deterioration of interface bonding performance emerge as the primary control factors. As shown in Fig. 19 , under the influence of various environmental factors (S-3, FT-30, T50-T7, TH-T7, THL-BT7, THLF-T7), except for the FT-30 series group, the decay rate of slip values in the other groups showed a certain positive correlation with the decay rate of residual bearing capacity. For the tensile behavior of CFRP bonded anchors, the load-slip curve typically exhibits a distinct peak load P_max followed by a residual load stage P_res. During the transition from the peak load to the residual load, the anchor interface undergoes a transition from being dominated by P_adhesion to being dominated by P_friction + P_interlock. Research indicates that the peak load primarily depends on the chemical bonding strength provided by the resin matrix. Once P_max is reached, the chemical bond begins to suffer cumulative damage until it significantly degrades. At the same time, friction forces P_friction between interfaces and mechanical interlocking forces P_interlock (generated by micro-slip or substrate irregularities) are activated. However, the contributions of the latter two (P_friction + P_interlock) are typically much smaller than the loss of chemical bonding strength (P_adhesion).
Therefore, the decrease in load (P_max − P_res) largely reflects the loss of chemical bonding strength (P_adhesion). The test data showed that this decrease in the Control Group was 19.59 kN, indicating that the interfacial bond strength could provide at least 19.59 kN of tensile strength under non-corrosive conditions. However, the value of P_max − P_res for the THLF-T7 series specimen decreased to 11.40 kN. A comparison of the P_max − P_res degradation of each group of specimens in Fig. 18 reveals that the THLF-T7 group exhibited the most significant loss of bonding strength, with a decrease that was considerably greater than that observed in the other test groups. This finding serves to underscore the profound impact of extreme environmental coupling on the adhesive performance of anchor interfaces. It is worth noting that the decline in this value and in the residual bearing capacity under multi-factor coupling is significantly greater than under single-factor action. This phenomenon indicates that, compared with single or dual-factor effects, the multi-factor synergistic effects of temperature-humidity coupling, freeze-thaw cycles, and sustained load have a significantly enhanced failure effect on anchor performance, which provides important empirical evidence for the cumulative effects of environmental factors in actual engineering. Furthermore, to provide a more comprehensive basis for durability assessment, the individual contributions of each environmental factor were decoupled and analyzed. The results indicate that freeze-thaw cycles were the most detrimental single factor, leading to a 22.35 % reduction in anchorage capacity, followed closely by elevated temperature, which caused a 19.03 % decline. By comparing the results of coupled- and single-factor tests, the degradation attributable solely to sustained load was determined to be 9.65 %, while the impact of 100 % humidity alone was responsible for a 6.52 % loss.
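The two quantitative comparisons above can be reproduced in a few lines: the post-peak drop P_max − P_res (taken in the text as a proxy for the chemical-bond contribution P_adhesion) for the Control and THLF-T7 groups, and the ranking of the four decoupled single-factor losses. This is an illustrative sketch; all input numbers come directly from the text.

```python
# Post-peak load drop (P_max - P_res), used in the text as a proxy for the
# chemical bonding contribution P_adhesion (values in kN, from the text).
drop_control = 19.59   # Control Group
drop_thlf = 11.40      # THLF-T7 series

bond_loss_pct = (drop_control - drop_thlf) / drop_control * 100.0
print(f"Chemical-bond proxy loss under multi-factor coupling: {bond_loss_pct:.1f} %")

# Decoupled single-factor capacity losses (percent, from the text).
losses = {
    "freeze-thaw": 22.35,
    "temperature": 19.03,
    "sustained load": 9.65,
    "humidity": 6.52,
}
ranking = sorted(losses, key=losses.get, reverse=True)
print(" > ".join(ranking))  # freeze-thaw > temperature > sustained load > humidity
```

The roughly 42 % drop in the bond-strength proxy under full coupling exceeds any single-factor loss, which is the paper's core synergy argument in numerical form.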
These results therefore point towards the varying degrees of influence from each factor, with a general order of impact being: Freeze-Thaw > Temperature > Sustained Load > Humidity. This understanding provides a crucial baseline that enhances the understanding of the more complex synergistic interactions that were the primary focus of this work. In the context of engineering practice, priority should be accorded to the mitigation of the synergistic effects of freeze-thaw and humidity (which contributed to a 32.37 % decrease in the THLF group), followed by the defense against creep damage caused by sustained loads (which contributed to a 16.55 % decrease in the THL group). It is noteworthy that the adverse effects of elevated temperatures can be transformed into advantageous factors through material modification and the exploitation of post-curing effects. 5 Conclusion In this study, we conducted 54 groups of anchor single-axis tensile tests and multi-factor analysis to examine the degradation of the interface performance of CFRP bonded anchors under freeze-thaw cycles, temperature, temperature-humidity coupling, sustained loads, and their combined effects, as well as the failure mode. The following conclusions were drawn: • Freeze-thaw cycles cause microcracks in the epoxy resin matrix to expand, leading to interfacial debonding. As the number of freeze-thaw cycles increases, interfacial failure gradually worsens. After 30 and 50 cycles, the tensile strength decreased by 6.87 % and 22.34 %, respectively. • Under the influence of high temperatures alone, the resin softens within a short period of time, leading to a decrease in bond strength (50–70°C: 5.8 %–19 % reduction). However, prolonged exposure to high temperatures triggers a post-curing reaction, increasing the cross-linking density and improving tensile strength by 13–20 %. 
In a hot and humid environment (50°C + 100 % humidity), hydrolysis of the chemical bonds at the interface causes a 6.9 % decrease in tensile strength, revealing the accelerating effect of humidity on interface degradation. • The coupling effect of sustained load significantly amplifies environmental damage effects, transitioning the failure mode from a combination of Carbon Fiber surface delamination (CF) and Epoxy resin cohesive failure (EF) to one involving Carbon Fiber surface delamination (CF), Carbon fiber-resin interface failure (RF), and Epoxy resin cohesive failure (EF). This indicates that sustained load accelerates interfacial debonding and resin cohesive strength degradation. • The combination of multiple factors (temperature and humidity + freeze-thaw + sustained load) is the most disadvantageous condition, with a 32.37 % decrease in tensile strength and a 30.3 % decrease in residual bearing capacity. The failure mode completely changed to RF type. Freeze-thaw cycles promote microcrack networking and increased porosity. These effects, synergizing with wet heat aging and load creep, lead to interface failure and anchorage failure. The macroscopic failure modes observed in this study under novel coupled multi-factor conditions provide a critical foundation for understanding the long-term behavior of these anchor systems. To build upon these findings, future research employing microscopic techniques, such as SEM, is essential to elucidate the underlying damage mechanisms, such as interfacial crack initiation and the evolution of micro-defects, which are hypothesized to drive the observed performance degradation. CRediT authorship contribution statement Ping Zhuge: Project administration, Funding acquisition, Conceptualization. Minzhe Wang: Writing – review & editing, Conceptualization. Shufeng Guo: Formal analysis, Data curation, Conceptualization. Haozhe Jiang: Writing – review & editing, Validation, Supervision, Software. 
Luo Yuting: Writing – original draft, Visualization, Investigation, Formal analysis, Data curation, Conceptualization. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments This research was sponsored by the National Natural Science Foundation of China (Grant No. 51208269), Zhejiang Provincial Natural Science Foundation (Grant Nos. LY15E080012 and LY18E080013), Ningbo Natural Science Foundation (Grant No. 2017A610314), National Engineering and Research Center for Mountainous Highways (Grant No. GSGZJ-2015-3), and the K.C. Wong Magna Fund at Ningbo University. This work is also supported by the key basic research project (973 Project) of P.R. China, under Contract No. 2015CB057701. | [
"BALAGURU",
"TODUT",
"XIAN",
"FENG",
"CAI",
"XIE",
"FANG",
"PUIGVERT",
"CAMPBELL",
"ANNI",
"SHUFENG",
"LI",
"LI",
"LI",
"OMRAN",
"DENGXIA",
"HARMANCI",
"KARBHARI",
"CORREIA",
"DAI",
"CHOI",
"ASHRAFI",
"ZHI",
"ZUOHU",
"CABRALFONSECA",
"DONGER",
"XIE",
"XIE",
"W... |
35c313b69a2844d4bbf6ca0fd88bf6cf_Explanted skull flaps after decompressive hemicraniectomy demonstrate increased bone avitality Is th_10.1016_j.bas.2022.101558.xml | Explanted skull flaps after decompressive hemicraniectomy demonstrate increased bone avitality. Is the reimplantation of autologous skulls still justified? | [
"Agrawal, R.",
"Stricker, I.",
"Rompf, C.",
"Granada, A.",
"Hoyer, A.",
"Theocharous, T.",
"Tannapfel, A.",
"Gousias, K."
] | null | Objective: Reimplantations of autologous skull flaps after decompressive hemicraniectomies (DC) are associated with higher rates of secondary osteonecrosis compared to cranioplasties with computer-assisted-design (CAD) implants. In the context of our clinical trial DRKS00023283, we assessed histologically the cell viability and the bone avitality of explanted bone flaps after DC, in order to account for possible sources of osteonecrosis. Methods: Skull bone flaps explanted during a DC between 2019 and 2020 for a vascular disease or traumatic brain injury were stored sterile in a freezer at either −23 °C or −80 °C. After the thawing process, the skulls were collected for microbiological and histological investigations. Bacterial cultures were performed under sterile conditions as well as after contamination of the bone fragments in a suspension with specific pathogens (S. aureus, S. epidermidis and C. acnes). Parameters of cell viability, namely PTH1 and OPG, were analyzed via immunohistochemistry. H&E stain was used to assess the degree of avital bone tissue at two different times, with the repeated measures performed after 6 months. Results: A total of 17 stored skull flaps (8 at −23 °C; 9 at −80 °C) were analyzed. Median age of our cohort was 70 years, and 9 patients (53%) were male. Median duration of cryopreservation was 10.5 months (2 to 17 months). The microbiological investigation showed no significant differences between the studied subgroups. Relevant degree of bone avitality was observed in all skull flaps. Preservation at −23 °C (p=0.015) as well as longer time of storage (p<0.001) were identified as prognostic factors for higher rates of bone avitality in a linear mixed regression model.
Conclusion: Our analysis revealed a significant degree of bone avitality, a potential precursor of osteonecrosis, also in skull flaps stored for several weeks. To this end, we should reconsider whether the reimplantation of autologous skull flaps is still justified. | []
31d6fd34d6bf4751a2763d9a3e098dac_Tracking sucking herbivory with nitrogen isotope labelling Lessons from an individual trait-based ap_10.1016_j.baae.2022.06.004.xml | Tracking sucking herbivory with nitrogen isotope labelling: Lessons from an individual trait-based approach | [
"Neff, Felix",
"Lehmann, Marco M.",
"Moretti, Marco",
"Pellissier, Loïc",
"Gossner, Martin M."
] | Response and effect traits help to understand how changes in ecological communities (e.g. in response to land use) relate to changes in ecosystem functioning. In grasslands, plants and insect herbivores are involved in many ecosystem processes such as herbivory and plant biomass production. Simultaneous changes in the trait composition of both plants and herbivores should affect herbivory rates, with consequences for plant growth and potentially biomass production. The mechanisms underlying these links are little understood for grasses and sucking insects, which make up a major part of grassland communities. In a mesocosm experiment, we manipulated the composition of grasses and sucking herbivores (Hemiptera) to study the role of plant traits, herbivore traits and their interaction on herbivory and plant growth. Because sucking herbivory is generally difficult to quantify, we developed a novel experimental setting, in which we labelled plants with 15N isotope. This allowed us to quantify 15N uptake and thus sucking rates of individuals. We found that herbivory and simultaneous plant growth reduction are most strongly linked to herbivore species identity. Unexpectedly, herbivory did not increase with herbivore size, but was highest for small species and for thin-bodied Heteroptera. Additionally, herbivory and plant growth reduction depended on the interacting herbivore and plant species, indicating trait matching, which could, however, not be explained by commonly used traits. This indicates that mechanisms linking ecological communities and ecosystem processes are highly context-specific. To understand how global change affects ecosystem functioning, studies need to cover all functionally relevant groups, including plant sap suckers. | Introduction Global change drivers such as land-use change and intensification shift the composition of ecological communities across ecosystems and trophic levels, with consequences for ecosystem functioning ( Allan et al., 2015 ). 
In semi-natural grasslands, which are important hotspots of biodiversity in temperate regions ( Habel et al., 2013 ), intensive agricultural use has been shown to be a major filter in the assembly of both plant and insect communities ( Neff et al., 2019 ; Socher et al., 2012 ). At the same time, the rates of different ecosystem processes have been found to be strongly affected by land-use intensity ( Ambarlı et al., 2021 ) and these changes have been related to shifts in diversity or composition of ecological communities ( Wang et al., 2020 ). The underlying mechanisms by which changes in community composition affect ecosystem processes are, however, understudied. An important ecosystem process in semi-natural grasslands is insect herbivory, which might be strongly linked to plant growth and thus affects plant biomass production. Plant biomass production in these systems is an important provisioning service contributing to agricultural production ( Bengtsson et al., 2019 ). Insect herbivory might either reduce plant biomass production through reduced plant growth ( Crawley, 1989 ) or stimulate plant growth ( Dungan et al., 2007 ). Plant biomass production and insect herbivory are tightly linked to plant and insect communities ( Lavorel et al., 2013 ), but how changes in these multi-trophic communities affect ecosystem processes is still poorly understood. The use of effect traits, i.e. species or individual morphological or physiological characteristics that affect ecosystem processes, can improve the understanding of the mechanisms linking ecological communities and ecosystem processes ( Lavorel & Garnier, 2002 ). For example, plants characterised by high specific leaf area (SLA) and leaf nitrogen content (LNC) tend to be associated with faster plant growth and contribute to higher plant biomass production ( Funk et al., 2017 ; Wright et al., 2004 ). 
At the same time, plant biomass production was found to be more strongly reduced by larger grasshoppers ( Moretti et al., 2013 ) with stronger mandibles ( Deraison et al., 2015 ). Additionally, traits of organisms belonging to different trophic levels might have interactive effects on ecosystem processes through trait matching ( Schleuning et al., 2015 ). For example, plant biomass consumption depends on the interaction between plant toughness and the grasshopper's mandible strength ( Ibanez, Lavorel, et al., 2013 ). Thus, we need to better understand how traits of organisms at different trophic levels jointly affect ecosystem processes to predict how shifts in communities affect ecosystem functioning. Such questions have rarely been studied at the level of single species or functional groups (but see Ibanez et al. 2013 ). Furthermore, studies addressing similar questions have so far never addressed herbivores that feed by sucking plant saps, which, however, account for a large share of herbivore communities in grasslands (e.g. Risch et al. 2015 ) and can significantly reduce plant growth (e.g. Meyer & Whitlow 1992 ). This is not least because sucking herbivory rates are genuinely hard to quantify, given that feeding marks are hard to see and may not be well related to uptake rates ( Schowalter, 2011 ). However, sucking herbivore communities are substantially affected by intensive land use, which changes their trait composition, e.g. by filtering for smaller species ( Neff et al., 2019 ). How these changes in trait composition affect insect herbivory and relate to plant growth, and, consequently, plant biomass production, are still open questions. Here, we manipulated the trait composition of plants and herbivores in a fully crossed mesocosm experiment to study how traits are related to insect herbivory and plant growth and whether there is an indication of trait matching between the two trophic levels. 
We focused on hemipteran species sucking on grasses, both of which are important functional groups in semi-natural grasslands ( Neff et al., 2021 ). To overcome the difficulty of assessing sucking herbivory, we developed a novel experimental setting, where plants were labelled with a heavier isotope of nitrogen ( 15 N), which enabled us to track the flow of nitrogen in the system (e.g. Steffan et al. 2001 ). Stable isotope techniques are increasingly used in insect ecology (e.g. Quinby et al. 2020 ), and also to study nutrient flows in food webs or to assess herbivory (e.g. Schallhart et al. 2012 , Porras et al. 2020 ). Here, labelling of plants with 15 N allowed us to quantify herbivory rates of single sucking herbivores, which has to our knowledge not been done before, but provides large potential for more mechanistic studies on insect herbivory. The grass species included in the experiment were chosen to cover a gradient in palatability inferred from three traits (leaf dry matter content (LDMC), SLA, LNC), which have commonly been used to relate plant palatability to chewing herbivory (e.g. Schädler et al. 2003 ). Herbivore species were chosen to cover a trait space defined by three potential effect traits (body volume, body shape, rostrum length). We were interested in the interplay of these traits in determining insect herbivory and changes in plant growth. We predicted that herbivory rates would be highest on plant species characterised by high palatability and for the largest herbivore species, resulting in reduced plant growth, unless there is a stimulation of compensatory plant growth by herbivory. Additionally, if there is trait matching evident for the trophic relations between these two groups, we predicted that highest herbivory and consequently highest plant growth reduction should be observed at certain combinations of plant and herbivore traits. For example, we expect that plants with thicker leaves (i.e. low SLA; Wilson et al. 
1999 ) are better accessible to herbivores with longer rostra and thus deeper leaf penetration potential, inducing trait matching. Materials and methods Plant and herbivore material Plant species were restricted to Poaceae and were selected based on three traits, which are essential determinants of the global leaf economic spectrum ( Wright et al. 2004 ) and are related to palatability and herbivory rates (e.g. Schädler et al. 2003 ): LDMC, SLA and LNC. Three species were selected from each of three clusters of species sharing similar traits ( Fig. 1A ): Agrostis capillaris, Arrhenatherum elatius and Poa trivialis in the high palatability cluster (low LDMC, high SLA and high LNC); Cynosurus cristatus, Festuca arundinacea and Holcus lanatus in the medium palatability cluster (low LDMC, high SLA and low LNC); and Deschampsia cespitosa, Festuca ovina agg. and Sesleria caerulea in the low palatability cluster (high LDMC, low SLA and low LNC) ( Appendix A for details). Insect herbivore species were selected from grass feeding Hemiptera (suborders Auchenorrhyncha and Heteroptera) based on three morphometric traits related to sucking herbivore effects, i.e. body volume, rostrum length and body shape (see Appendix A for inclusion rationales). Species were selected to cover the trait space ( Fig. 2A ) and included Aelia acuminata, Lygus spp., Notostira spp., Trigonotylus caelestialium, Stenodema laevigata, Deltocephalus pulicaris , and Laodelphax striatella ( Appendix A for details). Experimental design and setup The mesocosm experiment was performed in experimental cages in August/September 2019 with a completely randomized design with two crossed treatment factors ( Appendix B : Figs 1–3): plant palatability (three factor levels) and herbivore species identity (seven species and one control treatment without herbivores). 
Each treatment combination was replicated five times (3 plant treatments × 7 herbivore treatments × 5 replicates = 105 cages), except for the control treatments, to which some additional cages originally planned to contain further herbivore species were added, resulting in up to eight replicates (3 plant treatments × 1 herbivore control × 7–8 replicates = 23 cages). Because some cages were built with a plexiglass that was unexpectedly preventing plant growth, these cages were excluded from analyses. These cages had been randomly assigned to the study treatments and together with other, minor incidents, we ended up with two to five (seven for some controls) replicates per treatment combination and a total of 94 experimental cages (see Appendix B : Table 1 for a complete overview of replicates), which still enabled robust analyses given the fully crossed experimental design. At the start of the experiment, each cage contained an individual of each of the three plant species selected for the palatability cluster, which were labelled with 15 N to track the flow of nitrogen in the system, and two individuals of a herbivore species ( Appendix B : Fig. 1). Different measures were taken on plants and herbivores at the start and the end of the experiment ( Appendix B : Table 2), which were used to quantify traits, plant growth and herbivory rates. For details on the experimental setup, see Appendix A . Estimation of ecosystem processes For each plant individual, we predicted the dry mass at the end of the experiment that would have been expected in the absence of herbivores ( m_end,pred ) from the estimated dry mass at the beginning of the experiment and the growth observed for control plants not affected by herbivores ( Appendix A ). 
Predicted dry mass was related to measured dry mass at the end of the experiment ( m_end ) to determine the relative deviation from expected growth ( g_off ), with positive values representing lower than expected growth and negative values representing higher than expected growth. These values were used as proxies of plant growth reduction. (1) g_off = ( m_end,pred − m_end ) / m_end. Insect herbivory was estimated based on the absolute uptake of 15 N by herbivores ( u_abs ) and the mean 15 N concentration of the available plants ( c̄_15N ), which were determined from δ15N ratios, nitrogen content and biomass of plant and herbivore samples ( Appendix A ). 15 N uptake by herbivores relative to available 15 N in plants ( u_rel ) was used as a proxy of insect herbivory and was determined as (2) u_rel = u_abs / c̄_15N. Statistical analyses All analyses were conducted in R v3.5.2 ( R Core Team, 2018 ). Linear mixed effects models were used to relate herbivory and plant growth reduction to (i) herbivore species and plant palatability cluster identities and (ii) herbivore traits and plant traits. The identity models contained herbivore species identity, plant palatability cluster identity, their interaction as well as the potentially confounding variables herbivore survival and distance to light (integer denoting the row at which the cage was positioned; Appendix B : Fig. 4) as fixed effects and a random effect for the cage. Herbivory was analysed at the level of individual herbivores, with survival indicating whether the individual was found alive at the end of the experiment (0/1), whereas plant growth reduction was analysed at the level of the individual plants, with survival indicating the number of individual herbivores that were found alive at the end of the experiment (0–2). Herbivory was log-transformed prior to analyses to meet distributional assumptions. 
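The two proxies in Eqs. (1) and (2) are simple ratios and can be sketched directly. The Python below uses invented example numbers purely to show the direction of each proxy; the function names and values are illustrative assumptions, not study data.

```python
def growth_offset(m_end_pred, m_end):
    """Eq. (1): g_off, the relative deviation from expected growth.
    Positive values mean the plant grew less than expected."""
    return (m_end_pred - m_end) / m_end

def relative_uptake(u_abs, c15n_mean):
    """Eq. (2): u_rel, a herbivore's absolute 15N uptake divided by the
    mean 15N concentration of the available plants -- a herbivory proxy
    expressed in units of dry plant mass."""
    return u_abs / c15n_mean

# A plant expected to reach 1.5 g dry mass but measured at 1.2 g:
g_off = growth_offset(1.5, 1.2)       # 0.25, i.e. grew less than expected

# 0.59 ug of 15N taken up from plants with a mean concentration of
# 0.006 ug 15N per unit of dry plant mass (illustrative values):
u_rel = relative_uptake(0.59, 0.006)  # ~98 units of dry plant mass
```

Expressing uptake relative to plant 15N concentration is what makes sucking rates of individuals comparable across plants with different labelling levels.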
The trait models had the same structure as the identity models, but herbivore species and plant palatability cluster identities were replaced with herbivore and plant PC axes. Based on the principal component analyses that were used for the selection of herbivore and plant species, study specimens were placed on the same PC axes based on their measured trait values. PERMANOVA from the package ‘vegan’ ( Oksanen et al., 2018 ) was used to check whether plant palatability clusters for the study plants were also represented by their PC axis values based on measured traits (9999 permutations). The two PC axes per trophic level were then included in the models. Additionally, all possible interactions between herbivore PC axes and plant PC axes ( n = 4) were included in the model. Backward model selection (based on χ² tests) was used to find the optimal interaction structure for each model. Only interactions but no main effects were excluded during model selection. As for the identity models, herbivory was analysed at the level of individual herbivores, whereas plant growth reduction was analysed at the level of individual plants. Trait values of the respective other level were aggregated at cage level by taking mean values. The effect of sex on herbivory was tested in both the identity and trait models, but was not found to be significant, which is why it was excluded from the final models. All linear mixed effects models were run through the package ‘glmmTMB’ ( Magnusson et al., 2020 ). Results Plant palatability clusters were represented by traits measured for the study plants (PERMANOVA: P < 0.001 for all pairwise comparisons based on PC axes), although variation within the clusters was quite large ( Fig. 1B ). On average, the study plants had higher LNC than plants in the database, indicating a fertilisation effect caused by the 15 N labelling ( Appendix B : Fig. 5). 
All study plants were strongly enriched in 15 N compared to plants of the same species that were not included in the experiment ( Appendix B : Fig. 5). Dry mass of control plants without herbivores present increased by 230% ± 16% (mean ± SE) relative to predicted dry mass at the start of the experiment, while dry mass of plants with herbivores present increased by 160% ± 8%, which was significantly less than for control plants (LMM: χ² = 14.36, P = 1.5e-04; Appendix B : Fig. 6). The traits measured on the study herbivore specimens closely matched the expected trait ranges ( Fig. 2B ). Mortality among retrieved study specimens was 52.7% ( n = 68). Additionally, 14.0% ( n = 21) of individuals could not be retrieved at the end of the experiment and were thus recorded as dead, resulting in an overall survival rate of 40.7% ( n = 61), which differed greatly among study species ( Appendix B : Table 3). All specimens, including the ones that had died, had clearly elevated 15 N concentrations, indicating (premortem) feeding activity of all specimens ( Appendix B : Fig. 7). Average absolute 15 N uptake by herbivores was estimated at 0.590 μg (0.006–2.939 μg [5% and 95% quantiles]), which, relative to plant content of 15 N, corresponds to 97.7 μg (1.09–476.8 μg) of dry plant material taken up ( Appendix B : Fig. 8). Average dry mass of herbivores was 3.73 mg (0.214–16.44 mg; Appendix B : Fig. 9). Differences in relative 15 N uptake between herbivore species ranged from 27.1 μg (1.68–103.2 μg) of dry plant material for Stenodema laevigata to 188.9 μg (0.294–518.0 μg) for Trigonotylus caelestialium ( Appendix B : Fig. 8). Signs of herbivory on the plants were recorded on 25 plants (11.3%), 12 of which were on plants that were caged with T. caelestialium . 
Effect of plant and herbivore species on herbivory and plant growth reduction Herbivory ( 15 N uptake of herbivores relative to average plant 15 N content) was strongly affected by the interaction of plant palatability cluster identity and herbivore identity (LMM: χ² = 41.40, P = 4.2e-05; Appendix B : Table 4), while plant growth reduction (relative deviation in plant growth from control) was marginally significantly related to the interaction (LMM: χ² = 18.55, P = 0.10; Appendix B : Table 5). Also, there was a significant effect of herbivore identity on herbivory (LMM: χ² = 52.09, P = 1.8e-09; Appendix B : Table 4). Apart from the interactive effects, plant palatability cluster identity showed a significant relation neither to herbivory nor to plant growth reduction. Herbivory but not plant growth reduction was higher for surviving individuals ( Appendix B : Table 4). Model predictions from both process models indicate that the higher the herbivory, the higher the plant growth reduction ( Fig. 3 ). Highest predicted herbivory and plant growth reduction were observed for T. caelestialium on plants of the medium and high palatability cluster and for Notostira spp. on plants of the low palatability cluster ( Fig. 3 ). While for Notostira spp., no difference in herbivory rates was found between the two species N. elongata and N. erratica (Student's t-test: P = 0.39), there was a tendency for higher herbivory rates in Lygus rugulipennis compared to L. pratensis (Student's t-test: P = 0.068). Accounting for the different Lygus species in the analyses of herbivory rates did, however, not change the overall picture ( Appendix B : Fig. 10). Effect of plant and herbivore traits on herbivory and plant growth reduction Herbivory was highest for specimens with small body volume (low herbivore PC axis 1 values) and thin bodies (high herbivore PC axis 2 values) ( Fig. 4 , Appendix B : Table 6). 
Neither plant PC axes nor the interactions between herbivore and plant PC axes were significantly related to herbivory. Plant growth was reduced most strongly by large herbivores (herbivore PC axis 1) on plants with high LNC (plant PC axis 2) or by small herbivores on plants with low LNC, as was indicated by a significant interaction between the two PC axes ( Fig. 5 , Appendix B : Table 7). Discussion Plant growth was clearly inhibited by herbivore presence and tended to be most strongly reduced in settings that showed highest herbivory rates, supporting the potential of sucking herbivores to affect plant biomass production. Because sucking herbivores withdraw photosynthates from the plants, they potentially reduce their ability for growth. Furthermore, herbivory can lead to plant stress-responses such as lowered photosynthesis ( Sulaiman et al., 2021 ), also resulting in lowered plant growth. Alternatively, plants may hold their C uptake constant but invest a large part of their photosynthetically obtained C into defence (and thus respiration) or store it in the roots, which would reduce the relative amount of C available for aboveground growth ( Dyer et al., 1991 ; Walling, 2000 ). Although these different mechanisms can explain the observed plant growth reduction, it might still be unexpected, given that in non-outbreak situations, insect herbivory is often expected to increase plant productivity ( Dyer et al., 1993 ). However, such stimulation in growth might only be apparent once herbivory pressure is reduced again ( Hawkins et al., 1986 ), which was not the case here with herbivores being present during the whole experiment. Also, the study design only allowed us to study the plants for two weeks after infestation with herbivores, which might not be long enough to observe compensatory growth. 
Thus, although the observed reduction in plant growth with increasing herbivory was considerable, more work needs to be done to understand its quantitative impact in real-world ecosystems. Process rates differed between herbivore species, but the observed relations did not match our expectation that large herbivore species would consume more and reduce plant growth more, as is the case for grasshoppers ( Moretti et al., 2013 ). Although survival was included in our models, this result may still have been partly influenced by differences in survival rates between herbivore species. As survival rates were high for very different species such as large bugs (e.g. Aelia acuminata ) and small leafhoppers (e.g. Deltocephalus pulicaris ), we expect other factors to be more important in explaining the observed species differences. Consumption rates, and thus herbivory, are generally expected to increase with body size due to higher metabolic rates ( Brown et al., 2004 ). While the positive relation between body size and metabolic rates in herbivores is undisputed (e.g. Ehnes et al. 2011 ), other factors can affect metabolic rates of herbivores. For example, species that are engaged in regular activities with high metabolic demand (e.g. flying, producing sounds) tend to have higher metabolic rates ( Reinhold, 1999 ). The smaller species included in our study have shorter generation times ( Biedermann & Niedringhaus, 2004 ; Wachmann et al., 2004–2012 ), which might require more activities with high metabolic demand in a shorter time to fulfil their life cycle. Measures such as metabolic rate should be further addressed as potential effect traits related to herbivory and plant growth reduction. Mechanical plant palatability traits such as LDMC and SLA have been related to leaf toughness and are thus regularly postulated to be negatively related to plant palatability for chewing herbivores (e.g. Descombes et al. 2020 ). 
The lack of clear relationships in this study suggests that those traits are less related to accessibility of leaf tissue and transport vessels for sucking herbivores and that other traits such as nutrient contents could be stronger determinants ( Prestidge, 1982 ). Because LNC is of essential value for sucking herbivores, given it is generally a major limiting nutrient in their diet ( Elser et al., 2000 ), the lack of a relation between LNC and herbivory in our study is surprising. It might, however, be related to the elevated LNC of all study plants compared to values reported in previous studies, which was a consequence of the fertilization imposed by the labelling. Thus, the plant palatability clusters that were defined based on literature traits were partly blurred. Consequently, all herbivores might have met their nitrogen demand in all palatability clusters, such that differences in consumption rates reflect differences in the physiological needs of herbivores rather than plant palatability defined by the three investigated plant traits. While the results of our study question the usefulness of commonly used plant palatability traits for sucking herbivores, further work needs to investigate which traits might be more relevant for this important group of insect herbivores. Both herbivory and plant growth reduction depended on the combination of herbivore species and plant palatability cluster. This indicates trait matching, but because the interactive effect could at best weakly be explained by the investigated traits, other traits might be needed to explain the specialisation of sucking herbivores to certain grasses. In dicotyledons, specialisation is often explained by the highly diversified composition in terms of secondary compounds, which is postulated to be an evolutionary response to herbivores ( Ehrlich & Raven, 1964 ). 
Grasses, however, lack this diversity in secondary compounds ( Tscharntke & Greiler, 1995 ), posing the question of what is mainly driving specialisation. A probable factor is, once more, varying nutrient levels among grass species and individuals, with herbivores being physiologically adapted to very specific host stoichiometries ( Denno & Roderick, 1990 ). Furthermore, grasses are known to use elevated silicon concentrations as defence against herbivores ( Vicari & Bazely, 1993 ). Thus, differences in silicon concentrations could explain the observed patterns, although their efficacy against sucking herbivory is not well understood so far ( Keeping & Kvedaras, 2008 ). Investigating trait matching by assessing host and herbivore stoichiometries and additional host defence structures could be a way forward to extend this concept to sucking herbivores. By labelling plants with 15 N isotope, we successfully quantified sucking herbivory at the level of single individuals, which is otherwise hard to observe. As such, the method provides great potential for future mechanistic studies on insect herbivory. We show that different herbivore species affect herbivory and plant growth differently and find indications for interactive effects between herbivores and plants in determining process rates, which suggest trait matching. Such relationships are in line with previous studies from grasslands with grasshoppers and indicate the importance of plant and herbivore community shifts for ecosystem functions such as plant biomass production. However, the traits generally recognised to be involved in the relationships among plants, grasshoppers and ecosystem processes had little explanatory power in our model system. This suggests that new traits should be addressed to understand the consequences of changes in multi-trophic community composition, e.g. in response to land-use intensification, for ecosystem functioning. 
Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements We thank S. Bänziger, H. Berner, G. Casciano, C. Cattaneo, N. Feddern, G. Flückiger, U. Graf, T. Juchli, M. Laski, L. Neff, S. Neff, M. Oettli, A. Perret-Gentil, G. Reiss, D. Schneider, B. Sneiders, G. Szemes, M. Tran and B. Wermelinger for their help in setting up the experiment and in collecting and analysing samples. We are grateful to the fenaco Genossenschaft and to J. Heinze for providing seeds for the experiment. We thank two anonymous reviewers for their valuable inputs. This study was funded by the SNF [310030E-173542/1, granted to MMG] and by the SNF Ambizione project ‘TreeCarbo’ [No. 179978, granted to MML]. Supplementary materials Supplementary material associated with this article can be found in the online version at doi: 10.1016/j.baae.2022.06.004 . | [
"ALLAN",
"AMBARLI",
"BENGTSSON",
"BROWN",
"CRAWLEY",
"DENNO",
"DERAISON",
"DESCOMBES",
"DUNGAN",
"DYER",
"DYER",
"EHNES",
"EHRLICH",
"ELSER",
"FUNK",
"HABEL",
"HAWKINS",
"IBANEZ",
"IBANEZ",
"KATTGE",
"KEEPING",
"LAVOREL",
"LAVOREL",
"MAGNUSSON",
"MEYER",
"MORETTI",
... |
6d92372d784247fb87ab0a01fcd8bfdc_Modified theoretical stage-discharge relation for circular sharp-crested weirs_10.3882_j.issn.1674-2370.2012.01.003.xml | Modified theoretical stage-discharge relation for circular sharp-crested weirs | [
"Ghobadian, Rasool",
"Meratifashi, Ensiyeh"
] | A circular sharp-crested weir is a circular control section used for measuring flow in open channels, reservoirs, and tanks. As flow measuring devices in open channels, these weirs are placed perpendicular to the sides and bottoms of straight-approach channels. Considering the complex patterns of flow passing over circular sharp-crested weirs, an equation having experimental correlation coefficients was used to extract a stage-discharge relation for weirs. Assuming the occurrence of critical flow over the weir crest, a theoretical stage-discharge relation was obtained in this study by solving two extracted non-linear equations. To study the precision of the theoretical stage-discharge relation, 58 experiments were performed on six circular weirs with different diameters and crest heights in a 30 cm-wide flume. The results show that, for each stage above the weirs, the theoretically calculated discharge is less than the measured discharge, and this difference increases with the stage. Finally, the theoretical stage-discharge relation was modified by exerting a correction coefficient which is a function of the ratio of the upstream flow depth to the weir crest height. The results show that the modified stage-discharge relation is in good agreement with the measured results. | null | [
"BOS",
"CHOW",
"GREVE",
"GREVE",
"LENCASTRE",
"PANUZIO",
"QU",
"RAJARATNAM",
"RICKARD",
"STAUS",
"STEVENS",
"VATANKHAH"
] |
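The weir abstract above derives its theoretical stage-discharge relation by assuming critical flow over the crest, which yields two coupled non-linear equations: the specific-energy condition and the circular-segment geometry. The Python sketch below solves them by bisection using standard open-channel relations (H = y_c + A/(2T), Q = sqrt(g·A³/T)); the function name, example numbers and exact formulation are illustrative assumptions, not the paper's derivation, and the empirical correction coefficient the paper fits is not included.

```python
import math

def critical_flow_circular(H, D, g=9.81):
    """Illustrative solver for critical flow over a circular section:
    find the critical depth y_c satisfying H = y_c + A/(2T), then
    compute Q = sqrt(g * A**3 / T).  A and T are the area and top width
    of a circular segment of depth y in a circle of diameter D."""
    def geometry(y):
        theta = 2.0 * math.acos(1.0 - 2.0 * y / D)      # central angle (rad)
        A = (D ** 2 / 8.0) * (theta - math.sin(theta))  # segment (flow) area
        T = 2.0 * math.sqrt(y * (D - y))                # free-surface width
        return A, T

    def residual(y):
        A, T = geometry(y)
        return y + A / (2.0 * T) - H                    # critical-flow condition

    # Bisection on y_c in (0, D): residual < 0 near y = 0, > 0 near y = D.
    lo, hi = 1e-9 * D, (1.0 - 1e-9) * D
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    y_c = 0.5 * (lo + hi)
    A, T = geometry(y_c)
    Q = math.sqrt(g * A ** 3 / T)                       # theoretical discharge
    return y_c, Q
```

For example, a head of 0.12 m over a 0.3 m diameter weir yields a critical depth below the head and a positive theoretical discharge; the abstract's point is that such theoretical values underpredict measured discharge, hence the stage-dependent correction coefficient.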
5c683eac291a419c8cd830c26fc78d7a_A methodology to define risk matrices Application to inland water ways autonomous ships_10.1016_j.ijnaoe.2022.100457.xml | A methodology to define risk matrices – Application to inland water ways autonomous ships | [
"Bolbot, Victor",
"Theotokatos, Gerasimos",
"McCloskey, James",
"Vassalos, Dracos",
"Boulougouris, Evangelos",
"Twomey, Bernard"
] | The autonomous ships’ introduction is associated with a number of challenges including the lack of appropriate risk acceptance criteria to support the risk assessment process during the initial design phases. This study aims to develop a rational methodology for selecting appropriate risk matrix ratings, which are required to perform the risk assessment of autonomous and conventional ships at an early design stage. This methodology consists of four phases and employs the individual and societal risk acceptance criteria to determine the risk matrix ratings for the groups of people exposed to risks. During the first and second phase, the required input parameters for the risk matrix ratings based on the individual risk and societal risk are calculated, respectively. During the third phase, the risk matrix ratings are defined using input from the first and second phases. During the fourth phase, the equivalence between the different types of consequences is specified. The methodology is applied for the case study of a crewless inland waterways ship to assess her typical operation within north-European mainland. The results demonstrate that the inclusion of societal risk resulted in more stringent risk matrix ratings compared to the ones employed in previous studies. Moreover, the adequacy of the proposed methodology and its effectiveness to provide risk acceptance criteria aligned with societal and individual risk acceptance criteria as well as its applicability to conventional ships are discussed. 
| 1 Introduction The continuous research and advancement of technology has resulted in the development of novel systems, such as autonomous and crewless ships, the introduction of which is expected to bring substantial benefits, such as enhanced safety level, increased energy efficiency, reduced operational and lifecycle costs, reduced environmental footprint and enhanced equity ( Abaei et al., 2021a ; de Vos et al., 2021 ; Kim et al., 2019 ; Rødseth and Burmeister, 2015a ; Wróbel et al., 2017 ). Yet, these claims need to be verified. Autonomous ships are the subject of intense research efforts, with a number of systems being developed to support their operations, such as specialised fire suppression systems ( Lee et al., 2020 ), collision avoidance systems ( Hu and Park, 2020 ; Zhou et al., 2021 ), path planning systems ( Yang et al., 2015 ), and remote inspections ( Poggi et al., 2020 ). However, the introduction of autonomous and crewless ships is associated with several challenges, which include their safe design and operation. The lack of a detailed regulatory framework renders the use of utilitarian approaches and tools, such as probabilistic risk assessment ( Rozell, 2018 ), necessary for carrying out the safety assurance of the next generation autonomous ships ( Nzengu et al., 2021 ), whereas the use of risk assessment for novel systems is considered a requirement in the maritime community ( IMO, 2013 ). This is associated with further challenges, such as the lack of statistical data pertinent to the ranking of hazardous scenarios and the risk estimation for autonomous ships, the lack of standardised approaches to perform the risk assessment, and the ambiguity on the acceptable risk levels for the functional failures of autonomous and crewless ships ( Bolbot et al., 2021a ; Chang et al., 2021 ; Hiroko Itoh et al., 2021 ; Hoem, 2019 ; Montewka et al., 2018 ). 
The existing maritime regulations provide examples of individual risk criteria ( IMO, 2018 ) and guidance for the estimation of the societal risk criteria ( IMO, 2000 ). Yet, these guidelines are provided for crewed ships and not in the context of autonomous ships. These guidelines typically refer to the aggregated ship risk, and therefore, they cannot be used for the assessment of individual functional failures and hazardous scenarios. However, it is important that both individual and societal risk criteria are considered as early as possible during the design phase to determine the safety and integrity requirements for the investigated system. The assessment of functional failures and hazardous scenarios can effectively be achieved by using risk matrices, as demonstrated in a number of studies ( EMSA, 2020 ; Rødseth and Burmeister, 2015b ). Whilst the use of risk matrices is associated with several limitations ( Anthony Cox, 2008 ; Duijm, 2015 ; Thomas et al., 2014 ), risk matrices can be useful during the initial design stages of systems generally ( DoD, 2012 ), and of autonomous ships specifically ( AuthorAnonymous, 2019 ). Risk matrices still constitute a popular tool for decision-making in several industries ( Duijm, 2015 ; Thomas et al., 2014 ), and they are strongly recommended for use according to the Formal Safety Assessment procedures ( Kontovas and Psaraftis, 2009 ). Typical examples of risk matrices used in the maritime industry can be found in the class societies' guidance for the assessment of novel technology ( ABS, 2017 ; DNV, 2011 ) and the IMO Formal Safety Assessment (FSA) guidelines ( IMO, 2018 ). However, the current regulations and guidance do not provide any direction on how to determine the risk matrix and risk ratings, or how to contextualise them for the investigated problem. 
The ambiguity in connection to the risk matrix determination can be of high importance, as an arbitrarily defined risk matrix and risk ratings can directly influence the design of a crewless ship or other maritime systems, misleading the decision-making process ( Anthony Cox, 2008 ; Duijm, 2015 ; Thomas et al., 2014 ). The maritime industry, in this respect, has been lagging behind the aviation industry, where acceptable probabilities of failure that depend on the consequences of failures are already defined and employed in the design process ( EASA, 2010 ; FAA, 2011 ; GOVINFO, 2002 ; IEC, 2010 ; Lawrence, 2011 ; SAE, 1996a ). Several research studies focused on the definition of risk matrices and rating schemes. Guidance and rationale for the specified acceptable probabilities of failure for aircraft are reported in (transportation, 2011). Anthony Cox Jr (2008) discussed the main limitations of risk matrices and reported ways to address them. Garvey (2008) , and Meyer and Reniers (2016) , investigated ways to consider the decision-makers' risk attitude (consequence- or likelihood-averseness) during the risk ranking. Ni et al. (2010) reported extensions of the risk matrix approach by considering additional operators. Levine (2012) proposed the use of risk matrices with logarithmic scales, demonstrating their applicability to information systems. Iverson et al. (2012) developed a risk matrix tailored to the needs of the climate change challenge. Ruan et al. (2015) connected the risk matrix development with utility theory. Hsu et al. (2016) recommended the use of a revised risk matrix integrated with the analytical hierarchical process for the risk assessment of aviation systems. Goerlandt and Reniers (2016) reviewed the use of uncertainty in risk matrices and risk diagrams, proposing ways to improve its treatment. Li et al. (2018) proposed a sequential approach for altering the rating schemes based on a set of assumptions. Oliveira et al. 
(2018) developed an approach for designing the risk matrix by using multiple acceptance criteria. Garvey (2019) proposed the use of a scored risk matrix to facilitate the prioritisation of scenarios with the same risk index but with different probability and consequence scores. Jensen et al. (2022) provided recommendations, based on questionnaire results, for updating the characterisation of likelihood and severity used in risk matrix-based risk assessments. Other pertinent studies focused on the identification and calculation of risk levels in autonomous systems. Blom et al. (2021) proposed an approach to estimate the third-party risk in autonomous drones based on simulation results. de Vos et al. (2021) examined the potential impact of autonomy on safety on various ship types. Wróbel et al. (2017) investigated the impact of autonomy in terms of safety from the perspectives of prevention and mitigation. Vinnem (2021) investigated the applicability of current risk acceptance criteria in the context of autonomous offshore installations. Several studies implemented risk and reliability assessments for autonomous ships ( Abaei et al., 2021a , 2021b ; Bolbot et al., 2019 , 2020 , 2021a ; Chang et al., 2021 ; Tam and Jones, 2018 ; Utne et al., 2020 ), without, however, specifying risk acceptance criteria. The pertinent literature demonstrates that: (a) very few studies focused on the development of risk matrices; (b) the majority of studies did not interconnect the matrix ratings with individual and societal risk acceptance criteria; (c) there is a lack of guidance to support the development of the risk matrix and risk matrix ratings required for the risk assessment of maritime systems, which can lead to a number of challenges. This study aims to develop a methodology for defining the risk matrix and rating schemes. 
This study focuses on the safety-related consequences, whilst also including the financial, environmental and reputational consequences, and ignores other aspects, such as the ones related to risk perception, accountability, liability, social benefits other than revenue, political costs, and trust, which can influence the decision-making. Aspects related to uncertainty are considered outside the scope of this study, as they have been addressed and discussed in detail in Goerlandt and Reniers (2016) . The novelty of this study stems from the developed methodology and the demonstration of its applicability through a case study. The remainder of this article is organised as follows. The developed methodology for determining the risk matrix and risk matrix rating schemes is presented in Section 2. Section 3 provides the characteristics of the investigated case study ship. Section 4 presents the results derived by implementing the developed methodology, followed by the discussion of the findings and limitations. Section 5 summarises the main findings of this study. 2 Developed methodology description 2.1 Methodology assumptions and overview The developed methodology is based on the following overarching assumptions, which influence the risk matrix development: • Assumption 1: All the developments focus on the risk matrix and regulations used in the international maritime framework, namely the FSA risk matrix and guidance ( IMO, 2018 ), since this has already been employed by the maritime community as reported in ( Bolbot et al., 2021a ; EMSA, 2020 ; Rødseth and Burmeister, 2015b ; Wang et al., 2020 ). The FSA risk matrix has logarithmic scales, which is a useful property as demonstrated by previous studies ( Duijm, 2015 ; Levine, 2012 ). As a consequence of this assumption, this study considers that one fatality is equivalent to ten severe injuries, whereas one severe injury is equivalent to ten minor injuries ( IMO, 2018 ). 
• Assumption 2: Aversion against accidents resulting in more than 10 fatalities is not considered. Instead, neutrality is assumed with respect to risk taking when studying accident size and frequency. In other words, several small accidents are considered equal to a big one posing the same risk. This is in line with the advice provided in ( IMO, 2000 ), as well as several guidelines in other industries ( Ball and Floyd, 1998 ; EMSA, 2015 ). However, it should be noted that some national authorities might require risk aversion for societal risks ( EMSA, 2015 ). • Assumption 3: The autonomous ships design should exhibit at least an equivalent level of safety compared with the conventional ships, or equivalent safety requirements. This assumption is prescribed in the international guidelines for approval of alternative designs ( IMO, 2013 ) and other previous studies ( van Lieshout et al., 2021 ). • Assumption 4: The risks are classified in the following three categories considering the As Low As Reasonably Practicable (ALARP) limit: intolerable, tolerable (ALARP) and negligible. This is in line with the existing guidelines for FSA ( IMO, 2018 ), several class societies ( ABS, 2017 ; DNV, 2011 ) and other industries ( Ball and Floyd, 1998 ; Duijm, 2015 ; EMSA, 2015 ). • Assumption 5: All risk types (e.g., environmental, safety, reputational) are considered equally important. Therefore, the aversion of different risk types, as employed for instance in the nuclear industry ( Ball and Floyd, 1998 ), is not considered herein. • Assumption 6: It is assumed that the overall risk can be attributed to a maximum of 10 functional failures with severe consequences (each leading to a single fatality). This is implemented in line with (transportation, 2011). The application of this assumption is further elaborated in section 2.4 and the results section. 
• Assumption 7: Aspects related to risk perception, accountability, liability, general social benefits, political costs, and trust are excluded from the scope of this study. The developed methodology overview is provided in the flowchart shown in Fig. 1 . The methodology consists of four major phases. The first phase deals with the estimation of the intolerable (F_int^(N_F=1)) and negligible (F_neg^(N_F=1)) fatality rates for a single person based on the individual risk (N_F is used to denote the number of fatalities per annum). The second phase includes the steps related to the estimation of the intolerable (F_int^(N_F=1)) and negligible (F_neg^(N_F=1)) fatality rates for a single person from the societal risk criteria. The third phase focuses on the development of the risk matrix and the selection of the risk matrix ratings based on the previous phases' results. The final phase deals with the expansion of the risk matrix with respect to other consequence types based on the assumption of equivalence between the risks. 2.2 Phase 1: estimation of single fatality frequency based on individual risk This phase involves the following steps: (a) identification and grouping of the persons who are exposed to the risks from the investigated ship; (b) selection of tolerable and negligible risk levels for individuals in each group; (c) estimation of exposure for individuals in each group; (d) estimation of the tolerable and negligible levels of the single fatality frequency (F_int^(N_F=1) and F_neg^(N_F=1)) for the most exposed individual in each group. 2.2.1 Identification of person groups exposed to safety risks In this step, the persons who are exposed to risks from the investigated ship are identified with the assistance of a questionnaire filled in by ship operators and a pertinent literature review. The identified persons are then classified as primary parties, third parties or passengers. 
The primary parties are those who reap direct financial benefits from the specific activity (army, 2002). The third parties are those who are involuntarily exposed to the safety risks stemming from the ship operation ( Skjong, 2002 ). The notion of second parties could also be employed, in line with (army, 2002), to denote those people who indirectly benefit from the related activities, e.g., cargo operators at ports. However, in line with the FSA guidelines ( IMO, 2018 ) (following assumption 1) and because these parties can be classified as primary parties or passengers, the notion of second parties is not employed herein. 2.2.2 Selection of tolerable and negligible individual risk The individual risk can be measured in terms of the single fatality frequency due to specific activities during a specific time period (e.g., one year) ( Vinnem, 2014 ). This type of risk can be used for the risk estimation for the first and third parties. The levels of intolerable and negligible risk can be estimated by using: (a) statistical analysis of accidents as reported in (army, 2002); (b) the predefined individual risk criteria set by IMO ( IMO, 2018 ) (following assumption 1) and their categorisation into intolerable, ALARP and negligible (following assumption 4); (c) the criteria set by the national authorities' guidelines. The selected levels for the individual risk constitute ‘anchoring points’ ( Ball and Floyd, 1998 ) and directly influence the developed risk matrix. The levels of intolerable and negligible individual risks can vary for different parties (first or third) and different groups of each party. 
2.2.3 Exposure calculation The exposure for the crew and passengers can be estimated using the following equation, which is based on the time that crew and passengers spend onboard the ship on an annual basis as reported in ( IMO, 2008 ): (1) E_p [-] = T_p [h] / T_a [h] where T_p denotes the annual time a person from a specific group is exposed to the considered risk (in h), whereas T_a denotes the hours of one year (8,760 h). Eq. (1) can also be used to estimate the exposure for the personnel of the remote control centre and the personnel maintaining autonomous ships. For third parties that can be found onboard the ship, e.g., passengers, the exposure can also be estimated according to Eq. (1) . The estimation of exposure for the third parties located outside the ship is more challenging, as ships are not fixed objects (apart from the cases in anchorage and at port), and therefore, the exposure estimation requires consideration of navigational factors. Approaches as presented in ( Blom et al., 2021 ) can be properly marinised and subsequently employed to estimate the average exposure of third parties; however, they are rather computationally expensive. For this reason, this study estimates the time of exposure based on the time the third parties are within the autonomous ship safety domain, as explained below. First, the average duration of the encounter (T_E in h) is estimated according to the following equation: (2) T_E [h] = SD [m] / (1852 [m/nm] · V [kn]) where SD is the safety domain diameter (in m) and V is the average ship speed (in kn). Subsequently, the SD can be approximated according to the following formula ( Namgung and Kim, 2021 ): (3) SD [m] = (8 − 0.6 (10 − V)) L for V ≤ 10 kn, and SD [m] = (8 + 0.6 (V − 10)) L for V > 10 kn, where L is the ship length (in m). It should be noted that Eq. (3) constitutes an oversimplification and a very conservative approach to define the safety domain. 
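The exposure calculation of Eqs. (1)–(3) can be sketched in a few lines; the following is a minimal Python illustration, in which the example length, speed, working hours and encounter count are assumed round numbers for demonstration, not the case-study ship particulars.

```python
def safety_domain_diameter(L: float, V: float) -> float:
    """Safety domain diameter SD in metres, Eq. (3) (Namgung and Kim, 2021).
    L: ship length [m], V: average ship speed [kn]."""
    if V <= 10.0:
        return (8.0 - 0.6 * (10.0 - V)) * L
    return (8.0 + 0.6 * (V - 10.0)) * L

def encounter_duration_h(L: float, V: float) -> float:
    """Average encounter duration T_E in hours, Eq. (2)."""
    sd_nm = safety_domain_diameter(L, V) / 1852.0  # metres -> nautical miles
    return sd_nm / V                               # nm / kn = hours

def exposure_onboard(T_p: float, T_a: float = 8760.0) -> float:
    """Exposure E_p for persons onboard, Eq. (1)."""
    return T_p / T_a

def exposure_third_party(N_E: float, T_E: float, T_a: float = 8760.0) -> float:
    """Exposure E_p for third parties from N_E encounters per year, Eq. (4)."""
    return N_E * T_E / T_a

# Illustrative (assumed) values, not the Table 1 particulars:
L, V = 86.0, 9.0                             # length [m], speed [kn]
E_crew = exposure_onboard(1768.0)            # annual working hours
E_third = exposure_third_party(365.0, encounter_duration_h(L, V))
```

Note that the piecewise Eq. (3) is continuous at V = 10 kn, where both branches give SD = 8L.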
Other approaches define the safety domain as an ellipse ( Hansen et al., 2013 ; Namgung and Kim, 2021 ; Pietrzykowski and Wielgosz, 2021 ), as block areas ( Kijima and Furukawa, 2003 ), as a quaternion ( Wang, 2010 ) or as a polygon ( Bakdi et al., 2020 ). This simplification is used to facilitate the implementation and investigation of the overall methodology presented herein, whereas the consideration of other representations for the safety domain and the selection of the most appropriate one is left as an area for future research. Eq. (3) provides the advantage of rendering the safety domain dependent on the ship length (representing the generic manoeuvrability characteristics) and the ship speed. A comprehensive review of safety domains can be found in ( Du et al., 2021 ; Szlapczynski and Szlapczynska, 2017 ). Lastly, the exposure is estimated according to the following equation by using the number of encounters between the ship and the individual per year (N_E), as well as the average duration of the encounter (T_E): (4) E_p [-] = N_E [-] · T_E [h] / T_a [h] To simplify the calculation procedure, the next step considers the most exposed person either among the first parties or the third parties, based on pertinent concepts from the chemical industry ( EPA, 2011 ). 2.3 Estimation of single fatality frequency tolerable and negligible levels The Individual Risk (IR, in fatalities per year) can be estimated according to the following equation, as reported in the IMO FSA guidelines ( IMO, 2018 ): (5) IR = F_ue · P_p · E_p where F_ue is the frequency of an undesired event, P_p denotes the probability of the event resulting in a casualty, whereas E_p is the individual's exposure. By manipulating Eq. 
(5), the following equations for the estimation of the limits of the intolerable and negligible accidental frequencies for a single fatality (fatality of an individual) (in line with assumption 4) (F_int^(N_F=1) and F_neg^(N_F=1), respectively) are derived: (6) F_int^(N_F=1) [fatalities/a] = F_ue^int · P_p^int = IR_int [fatalities/a] / E_p [-] (7) F_neg^(N_F=1) [fatalities/a] = F_ue^neg · P_p^neg = IR_neg [fatalities/a] / E_p [-] The values of F_int^(N_F=1) and F_neg^(N_F=1) are used as the reference points for the development of the risk matrix ratings in phase 3. 2.4 Phase 2: estimation of single fatality annual frequency based on societal risk The societal risk is the “average risk, in terms of fatalities, experienced by a whole group of people (e.g., crew, port employees, or society at large) exposed to an accident scenario” ( IMO, 2018 ). The societal risk can be represented using the F–N curve or the Potential Loss of Life (PLL) metric ( EMSA, 2015 ). The levels of risks for different types of ships can be assured by using the relevant IMO guidance ( IMO, 2000 ) and by ensuring that the number of accidents associated with the economic activity and societal benefits will be similar for the specific type of ship as in other industries. In this guidance, the financial benefits and the safety level for the whole economy constitute the ‘anchoring point’ ( Ball and Floyd, 1998 ) considered in Phase 2. This may result in a rather conservative estimation of acceptable and negligible risks, as the actual accident levels can vary among the different industries even up to twenty times ( HSE, 1992 , 2020 ), whereas it is widely recognised that the maritime industry lags behind the other sectors in terms of safety levels. However, by considering the safety performance in other industries, motivation for pursuing safety improvement in the maritime industry is provided. 
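The manipulation behind Eqs. (6) and (7) reduces to dividing the individual risk bounds by the exposure; a minimal sketch follows, in which the IR bounds are the third-party values selected later in section 4.1.2, while the exposure value is an assumed figure used purely for illustration.

```python
def single_fatality_limits(IR_int: float, IR_neg: float, E_p: float):
    """Intolerable and negligible single-fatality frequencies,
    Eqs. (6)-(7): F^(N_F=1) = IR / E_p."""
    return IR_int / E_p, IR_neg / E_p

# Third-party individual risk bounds from section 4.1.2
# (1e-5 and 1e-7 fatalities per annum); exposure value is assumed.
F_int, F_neg = single_fatality_limits(1e-5, 1e-7, 1.6e-3)
```

Because both limits are divided by the same exposure, their ratio always equals the ratio of the selected individual risk bounds (here two orders of magnitude).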
The approach for estimating the societal risks is described in ( IMO, 2000 ). Whilst this approach is applicable to conventional ships, it is employed herein, in line with assumption 3, for autonomous and crewless ships. The crewless ships do not employ crew onboard; however, third parties exposed to safety risks still exist, as the most likely scenario for autonomous ships in the short-to medium-term includes the coexistence of crewless and conventional ships. The following equations that are reported in ( IMO, 2000 ) are employed for calculating the pertinent safety metrics: (8) q [fatalities/$B] = N_F [fatalities/a] / GNP [$B/a] (9) PLL_A [fatalities/a] = q [fatalities/$B] · R [$B/a] (10) F_A [single fatality/a] = PLL_A / Σ_{N_F=1}^{N_F=N_u} (1/N_F) = k · PLL_A [fatalities/a] where q is the ratio of annual fatalities to the annual gross national income (GNP, in $B), N_F is the number of the occupational fatalities per annum, R denotes the annual economic value (revenue) in $B per annum, PLL_A denotes the potential loss of life in fatalities per annum, F_A denotes the frequency of a single fatality per annum, whilst N_u denotes the maximum fatalities number. 
The parameter k can be approximated by using the second of the assumptions referred to in section 2.1, as follows ( EMSA, 2015 ): (11) k = 1 / Σ_{N_F=1}^{N_F=N_u} (1/N_F) ≈ 1 / (0.577 + ln(N_u + 1)) [-] The following equation is used to calculate the intolerable risk for a single fatality expressed in terms of the fatality frequency per annum ( IMO, 2000 ) (according to assumption 4): (12) F_int^(N_F=1) [single fatality/a] > 10 F_A [single fatality/a] Based on assumption 4, the negligible risk is defined by the following equation ( IMO, 2000 ): (13) F_neg^(N_F=1) [single fatality/a] < 0.1 F_A [single fatality/a] The values of F_int^(N_F=1) and F_neg^(N_F=1) refer to a single fatality for a single ship per annum, the revenue R represents the annual revenue for a single ship, whilst N_F and GNP refer to these parameters' annual values. 2.5 Phase 3: risk matrix and risk ratings development As can be observed, Eq. (6) and Eq. (12) provide estimations of F_int^(N_F=1). Similarly, Eq. (7) and Eq. (13) provide estimations of F_neg^(N_F=1). During this phase, through the comparison of the different estimations, a decision with respect to the F_int^(N_F=1) and F_neg^(N_F=1) values is made. This is a rather qualitative methodology that involves judgement from the decision makers. Preference is given to the most conservative values of F_int^(N_F=1) and F_neg^(N_F=1), so that both the societal and individual risk criteria are satisfied. The actual single fatality frequency refers to the total risk to the individual resulting from different types of accidents, such as collision, fire, flooding, etc. To account for the risk associated with different hazardous scenarios that can arise from functional failures, in line with (transportation, 2011), this value is reduced by a factor of 10 (assumption 6). 
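The societal-risk chain of Eqs. (8)–(13) can be sketched end to end as follows; a minimal Python illustration, where the fatality count, GNP and ship revenue are invented round numbers for demonstration and not the values reported in Tables 7–10.

```python
import math

def societal_single_fatality_limits(N_F: float, GNP: float, R: float, N_u: int):
    """Societal-risk anchors per Eqs. (8)-(13):
    q = N_F / GNP, PLL_A = q * R, F_A = k * PLL_A,
    with k approximated via Eq. (11)."""
    q = N_F / GNP                          # fatalities per $B, Eq. (8)
    PLL_A = q * R                          # fatalities/a for one ship, Eq. (9)
    k = 1.0 / (0.577 + math.log(N_u + 1))  # harmonic-sum approximation, Eq. (11)
    F_A = k * PLL_A                        # single fatality/a, Eq. (10)
    return 10.0 * F_A, 0.1 * F_A           # intolerable / negligible, Eqs. (12)-(13)

# Illustrative (assumed) inputs: 3500 occupational fatalities/a,
# GNP of 15,000 $B/a, ship revenue 0.0006 $B/a, N_u = 10 (assumption 2).
F_int, F_neg = societal_single_fatality_limits(3500.0, 15000.0, 6e-4, 10)
```

By construction the intolerable and negligible bounds always differ by a factor of 100, since both scale the same F_A anchor.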
In other words, it is considered that a maximum of 10 critical scenarios or functional failures can be encountered for the investigated ship, each of which can lead to consequences equivalent to a single fatality with annual frequency F_int^(N_F=1)/10. This is one of the important limitations of this study. According to ( IMO, 2000 ), the scaling up of the ratings is implemented using a logarithmic rule without risk aversion (in line with assumptions 1 and 2). Thus, the intolerable and negligible frequencies for N fatalities per annum for a single ship can be calculated according to the following equations: (14) F_int^(N_F=N) [N fatalities/a] > (F_int^(N_F=1)/10) · N^(−1) (15) F_neg^(N_F=N) [N fatalities/a] < (F_neg^(N_F=1)/10) · N^(−1) Employing Eqs. (14) and (15) bears the advantage of incorporating the isorisk assumptions more effectively in the risk matrix compared to when a linear scale is employed ( Duijm, 2015 ; Levine, 2012 ). The IMO regulations ( IMO, 2018 ) prescribe that the interrelation between the Frequency Index (FI) (used for ranking the frequency in the risk matrix) and the frequency (F) is provided by the following equation: (16) F [events/a] = 10^(FI − const) ⇔ FI = log F + const Therefore, based on Eqs. (14)–(16) , the intolerable and tolerable regions in the risk matrix (risk matrix ratings) can be estimated. Rounding downwards the values calculated by Eq. (16) is employed for the selection of the frequency index risk ratings (FI). The risk matrix scales in terms of severity are derived by considering one level of magnitude higher than the single fatality (up to 10 fatalities), in line with the IMO FSA risk matrix, as well as three levels of magnitude lower (down to a severity equivalent to 10^−3 fatalities). Hence, severities equivalent to 10^−3, 10^−2, 10^−1, 1 and 10^1 fatalities were considered herein. This scaling is implemented to allow for the ranking of very serious accidents as well as minor accidents. 
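The logarithmic scaling of Eqs. (14)–(16) can be sketched as follows; a minimal illustration in which `const` is treated as a free parameter (the FSA guidelines fix it through their frequency-index table, so the value 5.0 used below is an assumption).

```python
import math

def frequency_index(F: float, const: float = 5.0) -> int:
    """Frequency Index per Eq. (16): FI = log10(F) + const,
    rounded downwards as done for selecting the risk matrix ratings.
    The value of const is an assumed example, not the FSA table value."""
    return math.floor(math.log10(F) + const)

def scaled_frequency(F1: float, N: int) -> float:
    """Eqs. (14)-(15): frequency limit for N fatalities,
    (F^(N_F=1) / 10) * N**-1 (logarithmic scaling, no risk aversion)."""
    return (F1 / 10.0) * N ** -1
```

For example, halving the frequency does not change FI until a full order of magnitude is crossed, which is the intended behaviour of a logarithmic rating scale.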
For ships that carry a large number of passengers, the scaling up in terms of severity can increase further to include disastrous consequences (equivalent to 100 fatalities). The risk matrix scales in terms of frequency are derived by considering that the frequency increases two levels of magnitude up and decreases two levels of magnitude down compared to the FI that corresponds to F_int^(N_F=1)/10. Therefore, the respective FI values correspond to the frequencies F_int^(N_F=1)/1000, F_int^(N_F=1)/100, F_int^(N_F=1)/10, F_int^(N_F=1) and 10·F_int^(N_F=1). In this way, the developed risk matrix has 5 × 5 cells: 10 cells are dedicated to the intolerable risk, 9 to the tolerable and 6 to the negligible. In cases where the higher severity scale is considered, the risk matrix consists of 30 cells, with 15 cells dedicated to the intolerable risk, 9 to the tolerable and 6 to the negligible. 2.6 Phase 4: determining the safety equivalence The equivalence between the safety risks and the other risks is determined by using the 5th assumption from section 2.1. For the financial risks, the cost-benefit criteria, which support the identification of cost-effective control measures, such as the Cost of Averting a Fatality (CAF), are used to determine the equivalence between the safety and financial risks. This is the only equivalence that is determined quantitatively. All the other equivalences are determined qualitatively based on the literature review. For the equivalence of the oil pollution, the relevant scales reported in the IMO FSA guidelines are employed ( IMO, 2018 ). The equivalence with other environmental and reputational risks is implemented through comparison with similar risk matrices existing in the pertinent literature ( Ahluwaja, 2018 ; Bureau Veritas, 2019 ; EMSA, 2020, 2018 ). 
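The 10/9/6 cell split of the 5 × 5 matrix described above can be reproduced by classifying each cell through its risk index RI = FI + SI on the logarithmic scales; a minimal sketch, in which the thresholds (RI ≥ 7 intolerable, RI ≤ 4 negligible) are chosen here to match the stated cell counts and are an assumption, not values prescribed by the FSA guidelines.

```python
def classify(FI: int, SI: int, intolerable_ri: int = 7, negligible_ri: int = 4) -> str:
    """Classify a risk-matrix cell via risk index RI = FI + SI
    (valid for logarithmic frequency/severity scales).
    Thresholds are assumed so as to reproduce the 10/9/6 split."""
    ri = FI + SI
    if ri >= intolerable_ri:
        return "intolerable"
    if ri <= negligible_ri:
        return "negligible"
    return "tolerable (ALARP)"

# Enumerate the 5 x 5 matrix and count cells per region.
cells = [classify(fi, si) for fi in range(1, 6) for si in range(1, 6)]
counts = {region: cells.count(region) for region in set(cells)}
```

Adding a sixth severity column (the disastrous consequence level) contributes five more cells, all above the intolerable threshold, which recovers the 15/9/6 split of the 30-cell matrix.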
The psychological effects and political consequences were excluded from the scope of this study, although they can be important in particular cases, as reported in ( Ball and Floyd, 1998 ; Vinnem, 2014 ). 3 Investigated case study This study investigates an Inland Water Ways (IWW) barge, considering its theoretical next-generation autonomous design, including the ship and its systems as well as the Remote Operations Centre (ROC) (or the Remote Control Centre (RCC), which is a part of the ROC). The description of this integrated autonomous system is carried out based on information acquired from the pertinent literature ( Bolbot et al., 2019 ; Chaal et al., 2020a ; Eloranta and Whitehead, 2016 ; Geertsma et al., 2017 ; Höyhtyä et al., 2017 ; Rødseth and Burmeister, 2015a ; van Cappelle et al., 2018 ) and the AUTOSHIP project deliverables ( Wennersberg and Nordahl, 2019 ). The main particulars of the existing IWW ship (which will be used as a demonstrator in the AUTOSHIP project) are provided in Table 1 . It must be noted that whilst the demonstrator of the AUTOSHIP project and the case study autonomous system (ship and its RCC) share some similarities, they have different installed systems/sub-systems and levels of autonomy. The investigated case study considers Autonomy Degree Three (or above) according to the IMO guidelines ( IMO, 2020 ). This pertains to: “Remotely controlled ship without seafarers on board, whereas the ship is controlled and operated from another location”. According to other definitions provided by the CCNR ( Central Commission for the Navigation of the Rhine (CCNR), 2018 ), the investigated case study can be classified at level 3, which corresponds to constrained autonomous crewless ship operation. Conventional IWW barges primarily operate on inland waterways within Belgium and the Netherlands. A potential expansion of future operations can include all waterways of the member states of the European Union, as well as Switzerland, the UK and Norway. 
This study considers the investigated barge operation under the Flemish authorities’ regulatory framework. In this study, the development of the risk matrix predominantly focuses on the third-party risks. Still, some results for the first-party risk are included to demonstrate the applicability of the proposed methodology. The emphasis is placed on those persons who are exposed to safety risks. It should be noted that the particular ship operates outside the normative legislation of IMO and is covered by another set of national and international regulations ( Nzengu et al., 2021 ). However, the concepts and tools used in the presented methodology have a general validity, and therefore are applicable to the investigated case. 4 Results 4.1 Phase 1: estimation of single fatality frequency based on individual risk 4.1.1 Identification of person groups exposed to safety risks The parties that are involved in the risk taking for the investigated IWW ship are listed in Table 2 . These parties were identified with the support of the information provided in ( Chaal et al., 2020b ; Wróbel et al., 2018 ) and with the assistance of relevant questionnaires. The characterisation of each person (first or third party) is implemented considering whether the persons receive direct benefits from the relevant activity or not. For instance, the governmental bodies receive taxes from the operation of the IWW ship, and the cargo unloading/loading staff receive their wages. Not all these parties are exposed to safety risks, nor do they have the same control over safety risks. The cargo owner and ship owner are exposed to financial risks, but not to the safety risks. The persons exposed to safety risks are highlighted in bold in Table 2 . For these persons, the developed methodology can be implemented, leading to the determination of the corresponding risk matrices. 
4.1.2 Selection of tolerable and negligible individual risk level The pertinent IMO guidelines adapted the individual risk levels from the Health and Safety Executive ( IMO, 2018 ). A similar level of individual risk has been accepted in other industries, for example, the nuclear and offshore industries ( EMSA, 2015 ). For novel designs, IMO recommends reducing the acceptance criteria by one order of magnitude ( IMO, 2018 ); however, this contradicts the assumption of equivalence between crewless and conventional ships considered herein (assumption 3). However, the Belgian authorities recommend more stringent criteria for the third parties' broadly acceptable and maximum tolerable individual risks, due to the onshore activities ( Duijm and Universitet, 2009 ). Considering that the same criteria should apply for assessing the risks from inland waterway ships operating in Belgian waters, the criteria from ( Duijm and Universitet, 2009 ) are selected for the third parties' risk assessment. Hence, for the investigated crewless ship, the lower bound of individual risk can be set to 10^−6 fatalities per annum for the first parties, and 10^−7 fatalities per annum for the third parties. The respective upper bounds are set to 10^−3 fatalities per annum for the first parties and 10^−5 fatalities per annum for the third parties. These bounds are listed in Table 3 . 4.1.3 Exposure calculation The estimated exposure for the different first parties (personnel involved in maintenance and cargo operation, ROC/RCC personnel) is illustrated in Table 4 . This estimation was based on a typical annual working period of 1768 h (8 h per day, 5 days per week and 40 days of holidays). It is expected that the risks associated with the maintenance and loading/unloading operations of the conventional and the crewless ships will be the same. Therefore, the aggregated risk accumulated during work should not exceed the thresholds specified in Table 3 . 
It was assumed that the conventional IWW crew working hours are identical with the ones for other working personnel. It should be noted that the identified first parties are exposed to diverse safety risks. The ROC/RCC personnel will be exposed to all risks pertinent to operating and controlling a safety critical infrastructure (e.g., fires, evacuation or physical phenomena). The maintenance personnel will additionally be exposed to potential injuries and death during the maintenance activities both ashore and on-board the ship. Similarly, the cargo loading/unloading personnel will be exposed to the risk of death or injuries due to improper cargo handling. The crew of conventional ships are also exposed to a much greater variety of risks, such as the risk of falling from the ship and drowning, and occupational hazards, as has been reported in several accident investigation reports that the authors confidentially received. The majority of the third parties listed in Table 5 are exposed to the risk of collision with the IWW ship. The risk for the third parties does not change whether the ship is crewed or crewless. The intruders onboard the ship are exposed to the risks of incidents including fires, collisions, etc., whereas the ROC and RCC neighbours are exposed to generic risks associated with buildings of high value and critical importance for the economy. By using Eq. (2) , the encounter duration T_E is estimated to be approximately 1.7 min. The number of encounters between the investigated ship and a typical ship from each group is estimated based on the operator responses to the developed questionnaire and is provided in Table 5 . This questionnaire was part of the Environmental Survey Hazard Analysis method for Maritime applications (ESHA-Mar), a new method developed by the authors in the context of autonomous ships ( Bolbot and Wennersberg, 2022 ). 
The questionnaire includes questions to collect information on the ships and objects in the proximity of the investigated IWW ship, which was employed for the estimation of the encountered ship types and the associated frequencies. Considering that the encounter number involves high uncertainty due to the subjectivity of the operator, a conservative assumption of daily encounters with the crewless IWW ship is used for the third parties' exposure estimation and the fatality risk estimation in the next steps of this study. More accurate estimations could be generated if Automatic Identification System (AIS) data were used. However, such data were not readily available for the investigated ship. In addition, limitations exist for AIS data, as small recreational boats are not required to carry an AIS transponder ( COLREGS, 1972 ); therefore, the estimation of encounters with these ships would have to be based on operational experience. The estimated exposure for each person group is provided in Table 5 . 4.1.4 Estimation of single fatality frequency tolerable and negligible levels By considering the person group with the highest exposure (calculated in the previous step), the single fatality frequency levels for the first and third parties are calculated and presented in Table 6 . By comparing the results in Tables 5 and 6 , it is inferred that, despite the lower exposure, the limits for the third parties are not significantly higher than those for the technical/ROC personnel involved in this particular activity, due to the more stringent requirements. The last two rows of Table 6 are derived based on the results presented in the next section. 4.2 Phase 2: estimation of single fatality annual frequency based on societal risk Although the investigated IWW ship does not fall under the jurisdiction of the IMO regulatory framework, the pertinent guidelines (MSC 72–16) ( IMO, 2000 ) are used as a reference for deriving the risk matrix criteria from societal risk in this study.
The number of occupational fatalities per year that occurred in several countries is provided in Table 7 , whereas the Gross National Product (GNP) in $B and the Gross Domestic Product (GDP) for the European Union (EU28) are provided in Table 8 . The number of fatalities was retrieved from ( EUROSTAT, 2020 ; statistics, 2020), the GNP from ( MacroTrends, 2020 ) and the GDP from ( EUROSTAT, 2021 ). GDP is not the same as GNP, but it can be used as an approximation of GNP when the latter is not available. The Euro to USD exchange rate was assumed to be 1.15 (approximate average value for 2016–2020). As demonstrated in the next sections, these approximations do not considerably affect the derived results. The calculated ratios of fatalities per GNP (q) for the considered countries are listed in Table 9 . The annual revenue for the manned IWW ship (one ship) ranges between $500 k and $720 k, as indicated by the ship operator. Based on these estimates, PLL_A and F_A are calculated and provided in Table 10 . It should be noted that, in the context of autonomous operations, the revenue needs to be estimated for the ROC/RCC operations, the cargo operations and the maintenance operations. Therefore, the societal criteria estimated herein have applicability only to the specific third parties (passenger ships, cargo ships, etc.). However, these societal criteria are identical for the crewed ship and can be used for the crew and the third parties exposed to the risk from this ship operation, whether crewed or crewless. The resultant F–N curve, as well as the F–N curves from other shipping sectors, are plotted in Fig. 2 (only the limit between the ALARP and intolerable risk regions is plotted). It is observed that the estimated F–N curve for the IWW ships is lower than the other ship types' F–N curves, as the IWW ship is relatively small (compared to the other ship types) and because more recent data were employed (compared to the data used for the other ship type cases).
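A natural reading of the nomenclature (q in fatalities per $B, R in $B per year, PLL_A in fatalities per annum) is the Skjong-style FSA relation PLL_A = q · R; the text does not spell the formula out, so this is an assumption. The sketch below uses a q of 2 fatalities per $B, back-calculated from the 1.22 × 10⁻³ tolerable limit quoted in section 4.3 and the mean operator-reported revenue, rather than a value taken from Table 9:

```python
# Assumed relation (not stated verbatim in the text): PLL_A = q * R.
# q is back-calculated from the quoted 1.22e-3 limit, not from Table 9.
q = 2.0                                     # [fatalities / $B GNP], inferred
revenue_mean = (500e3 + 720e3) / 2 / 1e9    # 6.1e-4 [$B per year]

pll_a = q * revenue_mean                    # acceptable fatalities per ship-year
print(f"{pll_a:.2e}")  # 1.22e-03
```

With this reading, the spread of Table 10 between minimum and maximum follows directly from the $500 k–$720 k revenue range.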
The other ship types' F–N curves exhibit lower safety levels, since they correspond to older time periods. The IWW F–N curve is still comparable with the F–N curves for bulk carriers. Estimations are also provided for the following values of the maximum number of fatalities: N_u = 1 and N_u = 30. For the manned IWW ships, the current accidents with third parties involve either single fatalities (e.g., collisions with kayaks) or collisions with other ships, where the consequences can be severe (e.g., the collision between Hableány and Viking Sigyn with 28 fatalities ( Wikipedia, 2021 )). Comparing the minimum, mean and maximum values of the estimated metrics provided in Table 10 , it is deduced that, for some metrics, the minimum is about half the mean and the mean is almost half the maximum. The mean is still selected herein, as the employed q (used for these metrics' calculation) is closest to the EU28 value reported in Table 9 . It should be noted that these estimations are independent of the ship type and are applicable to both conventional and crewless ships. 4.3 Phase 3: risk matrix and risk ratings development It is observed from Table 6 that the frequency criteria estimations based on the societal or individual risks are different. The societal risk-based frequency can be one order of magnitude more conservative than the individual risk-based frequency. This can be attributed to the fact that the societal risk-derived criteria incorporate information comparable to the current safety level in other industries and the financial benefits coming from the investigated activity. The individual risk acceptance criteria are also influenced by the operational context of the specific ship, as the jurisdiction of each country defines different levels of intolerable and negligible risk, and the exposure is dependent on the ship's operating profile.
In another operational context, the individual risks and exposure might have a stronger influence on the selected frequency criteria. Therefore, based on the societal criteria using a single fatality (N_u = 1), the tolerable limit for a single fatality is 1.22 × 10⁻³ fatalities per year. Consequently, a functional failure leading to a single fatality is considered intolerable when its frequency is higher than 1.22 × 10⁻⁴ events per year, based on assumption number 6. Considering a maximum of 30 fatalities (N_u = 30), the tolerable limit for a single fatality is 3.05 × 10⁻⁴ fatalities per year. This corresponds to a tolerable limit of the functional failure frequency leading to a single fatality (or multiple fatalities) of 3.05 × 10⁻⁵ events per year (the equivalent negligible functional failure limit is 3.05 × 10⁻⁶ events per year according to assumption 6). Based on the preceding considerations (using the most conservative value), the risk matrix and ratings are developed as illustrated in Table 11 . Multiple fatalities can be tolerated (considered ALARP) provided that they are very rare, or their potential frequency has been reduced to a minimum (equivalent functional failure frequency less than 3.05 × 10⁻⁶ events per year). This is considered for the risk matrix development to depict potentially devastating, but extremely low frequency, accidents (black swans), which cannot be predicted or controlled. Accidents such as the collision between Hableány and Viking Sigyn demonstrate an example of such a case ( Wikipedia, 2021 ). The developed risk matrix and ratings also satisfy the Cox axioms ( AnthonyCox, 2008 ). The risk matrix cells with higher ranking denote higher risk, as a logarithmic relationship between the rankings and risks was employed (weak consistency satisfied). Moreover, moving from the green to the red areas, yellow cells appear (betweenness axiom satisfied).
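The relation between the two tolerable limits quoted above can be reproduced under one assumption the text does not state explicitly: with an F–N curve of slope −1 truncated at N_u fatalities, the societal PLL spreads over the fatality classes as PLL_A = F₁ · Σ_{N=1}^{N_u} 1/N, so the single-fatality limit F₁ shrinks by the harmonic number of N_u. The sketch below checks this reading against the quoted figures:

```python
import math

PLL_A = 1.22e-3  # tolerable societal limit for N_u = 1 [fatalities/year]

def single_fatality_limit(pll, n_u):
    """Single-fatality frequency limit for an F-N curve of slope -1
    truncated at n_u fatalities (assumed derivation, see lead-in)."""
    harmonic = math.fsum(1 / n for n in range(1, n_u + 1))  # H_30 ~ 3.995
    return pll / harmonic

f1_30 = single_fatality_limit(PLL_A, 30)   # ~3.05e-4, matching the quoted value
func_tolerable = f1_30 / 10                # assumption 6: factor 10 per functional failure
print(f"{f1_30:.2e}")           # 3.05e-04
print(f"{func_tolerable:.2e}")  # 3.05e-05
```

That the harmonic-sum reading lands exactly on the quoted 3.05 × 10⁻⁴ suggests it is the derivation used, but it remains an inference.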
Due to the logarithmic relationship between the ranking and risks and the use of a risk-neutral attitude (rather than risk aversion or risk taking), the consistency criteria in colouring are also satisfied. It should be noted that the derived risk matrix is suitable for assessing third parties exposed to risks due to the operation of autonomous ships. The criteria for the first-party safety risk can vary slightly, as the revenue for the ROC, cargo operator and maintenance personnel can be different. Nonetheless, the approach proposed in this study can be followed to derive the risk matrix for the other parties as well. It is also worth highlighting that if the decision-maker is risk-averse towards large disasters involving multiple fatalities and treats them as unacceptable, the use of the societal criteria with N_u = 30 is no longer valid. In this case, the societal criteria for N_u = 1 can be used and the risk matrix becomes the one shown in Table 12 . This risk matrix considers all major accidents as unacceptable; however, it allows for the use of less stringent requirements for ranking single fatalities. Although it is possible to apply this consideration, it is not aligned with assumption 2 (section 2.1); additionally, the consistency criteria provided in ( AnthonyCox, 2008 ) are also violated. The generated risk matrix and risk matrix ratings of Table 11 do not change whether they are used for conventional or crewless ships' risk assessments. This can be attributed to the fact that the F_neg^(N_F=1) and F_int^(N_F=1) for the first parties derived from the Individual Risk (IR) are still less conservative than the ones derived based on the societal risk criteria ( Table 6 ). Therefore, the societal risk criteria that influence the risk matrix (as explained in section 4.2) can be used for both conventional and crewless IWW ships.
This is influenced by the values of the following parameters: the exposure of the crew and the number of scenarios identified with a severity index equal to 4. If the crew exposure increases, then the individual risk exposure will drive the selection of the risk matrix ratings, and therefore the risk matrix will vary. If the crew exposure reduces, then the individual risk criteria will be of less importance. Additionally, a higher number of safety-critical scenarios can be anticipated on conventional ships due to the crew's exposure to safety risks. This might challenge the validity of the sixth assumption, according to which the overall risk can be attributed to a maximum of 10 functional failures with severe consequences (leading to a single fatality). 4.4 Phase 4: consequences types equivalence Considering the equivalence of consequences between the safety and other types of risks, the interrelation of the various consequences categories and the corresponding consequences are provided in Table 13 . The cost of averting a fatality was set at $3 million in 1999 ( IMO, 2018 ). By using a 5% inflation rate, as recommended by the FSA guidelines ( IMO, 2018 ), the cost of averting a fatality approximates to $8 million in 2021. The correlation between other types of risks and safety risks was derived from the FSA (2018), BV ( Bureau Veritas, 2019 ) and DNV GL RP A-203 guidelines ( Ahluwaja, 2018 ) and the EMSA report ( EMSA, 2020 ). It should be noted that small oil spills by IWW ships exhibit higher consequences on the environment compared to other ship types, as the spillage will occur in a more confined environment and close to inhabited areas. It should also be noted that a hazardous scenario can exhibit diverse impacts for different consequences categories ( Bolbot et al., 2021a ). A hazardous scenario can result in minor safety risks, but significant financial risks to the third parties, e.g., a collision with a bridge.
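The $8 million figure can be checked by compounding the 1999 reference value at the recommended 5% rate; compounding over the full 1999–2021 span gives about $8.8 million, which the text rounds down to approximately $8 million:

```python
# Compound the 1999 cost-of-averting-a-fatality (CAF) at the 5% rate
# recommended by the FSA guidelines, up to 2021.
caf_1999 = 3.0e6          # [$], 1999 reference value
rate = 0.05
years = 2021 - 1999       # 22 years

caf_2021 = caf_1999 * (1 + rate) ** years
print(f"${caf_2021 / 1e6:.1f} million")  # $8.8 million
```
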
By using different consequences categories, such considerations can be captured in the risk assessment methodology. For the consequences ranking table, no difference between conventional and crewless ships should be considered. 4.5 Comparison with other risk matrices An exemplary risk matrix with its rating schemes from the DNV guidelines for the risk assessment of novel technology used in the oil exploratory industry ( DNV, 2011 ) is provided in Table 14 . The risk matrix of Table 14 constitutes an adaptation of the original risk matrix, modified suitably to allow for the comparison, as the frequency scales in the DNV guidelines risk matrix differ from the ones employed herein ( Table 11 ). In the DNV risk matrix, a range of frequencies is used for the risk rankings, in contrast to the crisp values of the FSA risk matrix and the employed methodology. For this reason, the risk matrix of Table 14 includes cells consisting of two different colours. Nonetheless, comparing this exemplary adapted risk matrix, it can be observed that the ALARP region is wider in the DNV guidelines than in the current approach. This can be attributed to the fact that the third parties' risk ratings were influenced by the societal risk acceptance criteria, which allow a two orders of magnitude difference between the negligible and intolerable risk. If the ALARP region were set using the individual risk criteria for the first parties, potentially three orders of magnitude would be assigned to ALARP in the risk matrix derived in this study ( Table 11 ). Most importantly though, in the derived risk matrix, the ratings are approximately two levels more conservative. This can be attributed to the fact that the risk matrix of this study incorporates the safety levels from other industries, which have improved over time, and uses the more stringent individual risk criteria set by the Belgian authorities.
The risk matrix of Table 14 is also exemplary; however, the application area was not provided, and additionally, it is not reported whether the ratings refer to the first or third parties. A risk matrix (shown in Table 15 ) similar to the one from DNV RP A203 ( DNV, 2011 ) was employed in ( EMSA, 2020 ). As can be observed, the acceptable risk levels in that particular application were more stringent than the ones in Table 14 , yet less conservative than those in Table 11 . It should be noted that the risk matrix of Table 11 has applicability to the ship as a whole, whilst the risk matrix of Table 15 was applied to a specific system with crew present on the ship. This significantly limits the comparison. To determine which of the risk matrix indexes from Table 11 , Table 14 , Table 15 most effectively addresses the needs of autonomous technology, a simple comparison was conducted through the use of the corresponding Safety Integrity Levels (SILs), as reported in IEC 61508 ( IEC, 2010 ). It was assumed that the investigated crewless ship operates in its sailing or manoeuvring modes for 70% of its annual operating time (taking into account that the use of autonomy will allow higher ship availability, since the crew work hours will not need to be followed and the ship will be able to sail during the night). Bolbot et al. (2021a) reported that the severity index for the situation awareness system failure and the collision avoidance system failure was ranked as 4 (SI = 4) for the same IWW crewless ship. This corresponds to a different value of the maximum functional failure frequency based on the ALARP region of each matrix. For the risk matrix and ratings of Table 11 , it corresponds to FI = 2, or F = 10⁻⁵ events per ship year. Similar frequency values can be found from the other risk matrices and are depicted in Table 16 . These frequencies, in turn, correspond to different SILs, which are calculated and depicted in Table 16 .
Intuitively, it would be anticipated that the investigated ship's situation awareness and collision avoidance functions should have stringent safety requirements due to their importance for the ship's safety. Hence, the SIL = 3 that was derived based on Table 11 seems to be a more reasonable target level than SIL = 1 or SIL = 2. 4.6 Influence of the assumptions on the derived risk matrix This section elaborates on the impact of the made assumptions on the derived risk matrix. The first assumption has a fundamental influence on the structure of the developed risk matrix. For instance, if linear scales were used (instead of logarithmic) for the risk matrix development, the shape of the risk matrix would be more skewed, with more cells dedicated to particular areas. This would render compliance with the Cox axioms ( AnthonyCox, 2008 ) very challenging. Additionally, considering a different equivalence relationship between fatalities and injuries, the consequence type equivalence during Phase 4 would be different. If aversion to large accidents is considered, then the risk matrix ratings will be altered. This was demonstrated in detail in the comparison between Tables 11 and 12 in the preceding section. In this case, a higher frequency for smaller accidents will be tolerated and more stringent frequency requirements for larger-scale accidents will be provided. Therefore, the second assumption affects the “inclination” of the risk matrix. If more stringent safety requirements are applied to autonomous ships compared to conventional ships (for instance, one order of magnitude more stringent requirements for PLL_A and IR), the calculated F_int^(N_F=1) and F_neg^(N_F=1), which are affected by PLL_A and IR, would also change accordingly, resulting in one order of magnitude more stringent requirements. This can be attributed to the linear relationship between F_int^(N_F=1), F_neg^(N_F=1) and PLL_A, IR.
With a different categorisation of risks, for example, if four categories were employed (instead of three), as for the London Underground system ( EMSA, 2015 ), the risk matrix ratings and classification would obviously include four regions for the risk ratings. By employing an alternative consideration to treat the different risk types (as per the fifth assumption), the use of a single risk matrix would not be possible. It would be required to consider various risk matrices and acceptance criteria for the different types of consequences. This would increase the complexity of the risk assessment process and the associated effort required for the safety assurance. The sixth assumption is highly influential on the risk matrix ratings. For instance, assuming a maximum of 20 functional failures with severe consequences (leading to a single fatality) and N_u = 1 results in a value of 6.11 × 10⁻⁵ for the acceptable functional failure frequency (instead of 1.22 × 10⁻⁴). Therefore, the selected acceptable frequency would have become one level more stringent in the risk matrix. The influence of N_u on the derived matrix has been discussed in detail in the preceding section. It is challenging to quantify the influence of the seventh assumption on the derived risk matrix. However, it could result in more or less stringent requirements for PLL_A and IR based on the societal and political risk perception and trust. This would, in turn, influence the risk matrix ratings. The analysis of these aspects on PLL_A and IR is recommended for future research. A daily encounter frequency between the investigated ship and a general cargo ship was considered in this study. This is a conservative estimation, as the investigated ship rarely operates in a specific area and visits several locations. Therefore, daily encounters between this ship and other ships are unlikely. More realistic estimations could be made if AIS data were used as input.
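The sensitivity to the assumed maximum number of severe functional failures is a straightforward division of the tolerable single-fatality limit. The formula limit = PLL_A / n_failures is my reading of assumption 6, not stated verbatim in the text; the text quotes 6.11 × 10⁻⁵ for 20 failures, the small difference from the value below coming from rounding of the tolerable limit:

```python
# Acceptable functional failure frequency as a function of the assumed
# maximum number of severe functional failures (assumption 6, N_u = 1).
tolerable_single_fatality = 1.22e-3  # [fatalities/year]

for n_failures in (10, 20):
    limit = tolerable_single_fatality / n_failures
    print(f"{n_failures} failures -> {limit:.2e} events/year")
# 10 failures -> 1.22e-04 events/year
# 20 failures -> 6.10e-05 events/year
```
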
Nonetheless, even with such a conservative estimation, the individual risk criteria exhibit negligible effects on the derived risk matrix. 5 Discussion The main advantage of the developed methodology is that it directly interconnects the risk matrix ratings with the individual and societal risk acceptance criteria in a smooth pattern. The presented methodology is repeatable, and the results are correlated with the financial benefits of the selected activity and the current risk levels in other industries. The methodology can be applied for developing risk matrices for both conventional and crewless ships. It is also expected that the use of such a risk assessment matrix will support the implementation of the goal-based standards and the development of novel designs demonstrating ALARP and equivalent safety, as it supports the ship design with both individual and societal risk acceptance criteria being determined early in the design process. The developed methodology can support the implementation of functional-based design and analysis in the maritime industry, as well as the designation of safety integrity levels (SILs) to different functions, as already followed in aviation ( SAE, 1996a ). The ‘anchor points’ (the set levels of individual risk, as well as the financial and safety levels of the compared industries and countries) have an important influence over the methodology results, as the country's overall safety level, economy size and acceptance criteria for individual risk affect the resultant risk matrix. Therefore, this methodology allows for the contextualisation of these factors. Moreover, this risk matrix and the risk matrix ratings are valid only at a specific time snapshot. In cases where the safety levels, the revenue levels or the set acceptable individual risk levels vary, the proposed methodology needs to be repeated to determine the updated risk matrix.
As the risk matrix and ratings are also contextualised for a specific application, these ratings can be different for other ships (and ship types) and need to be re-estimated/selected. It is highly likely that in another operational context, due to the different exposure of the individuals, the individual risk (not the societal risk) will drive the risk matrix development. The proposed methodology also requires the development of separate risk matrices for different person groups, due to the differences in the exposure/societal benefits, although it is expected that similar results may be obtained. The introduction of the factor of 10 when moving from the ship level to the scenarios level is a critical assumption employed in this study. It must be cross-checked that there do not exist more than 10 scenarios with the selected frequency (e.g., 10⁻⁵ for the investigated IWW ship) and severe consequences, so that this assumption, or the equivalent risk index, is sufficient. In cases where such scenarios are only few, relaxation of the risk matrix ratings can potentially be investigated. Nonetheless, as it is not recommended to aggregate the risk of different scenarios ( ISO, 2009 ), it should finally be checked and verified that the estimated risk levels comply with the individual and societal risk criteria by employing more detailed methods at a later design stage. However, the proposed methodology provides a preliminary risk matrix to facilitate the risk assessment at the initial design stages. It should be pointed out that the risk matrix was developed for use at a ship level and in a specific operational concept. For using the risk matrix at a system level, potentially even more stringent requirements are required; for example, by dividing the acceptable frequency by another factor of 10 or by ensuring that the frequency of scenarios with severe consequences is adequately reduced.
Based on the developed methodology results, some stringent criteria and risk matrix ratings are recommended for the investigated IWW crewless ship with respect to third-party risks. The other compared risk matrices exhibit less conservative ratings. It seems that the more stringent risk matrix ratings proposed herein need to be followed, as they include information on both the societal and individual risks. However, it is important to investigate whether the current fleet of conventional IWW ships satisfies these criteria, in order to avoid overdesigning crewless IWW ships, which would be expected to increase their design and building costs. Nonetheless, it is expected that by using these criteria, similar, if not enhanced, safety levels will be achieved for the autonomous and crewless ships. Finally, it should be noted that the developed risk matrix incorporated primarily safety risks and, secondarily, other types of risks. The decision-making with respect to the introduction of autonomous ships still depends on a number of additional factors, including the overall impact on the economy, sector competitiveness, emissions reduction and quality of life, as autonomous shipping has much wider implications. The aspects related to uncertainty in rankings and epistemic uncertainty were addressed in previous publications (for instance, Goerlandt and Reniers (2016) ). Although these factors are important for decision-making on the safety approval of autonomous ships, they were left outside the scope of this study. For this reason, it is anticipated that the decision-making for autonomous ships should be made on a case-by-case basis. Still, it is expected that this risk matrix and the risk matrix development methodology will support the final decision-making and will constitute a useful tool for the decision-makers.
6 Conclusions In this study, a novel methodology for developing the risk matrix and risk matrix ratings based on individual and societal risk acceptance criteria was proposed. The applicability of the methodology was demonstrated for the theoretical case study of a crewless IWW ship. The main findings of the study are summarised as follows. • The proposed methodology allowed for developing the risk matrix based on a set of defined individual and societal risk acceptance criteria. • The use of the societal risk acceptance criteria allows the consideration of safety levels in other industries and the financial benefits generated by a specific activity during the development of the risk matrix, whilst the use of individual risk allows the exposure of different individuals to be considered. • As the methodology results are case-study dependent, the developed risk matrix will be capable of providing different acceptance criteria for different ship types operating in different areas with different operating profiles. • The methodology results are influenced by the anchoring points and assumptions of the decision-makers, and are therefore highly dependent on the selected policy of each decision-maker. • The societal risk acceptance criteria resulted in more stringent matrix ratings compared to the individual risk criteria for the investigated IWW ship, which can be attributed to the relatively small revenue for this ship. • The developed risk matrix ratings were also more conservative compared to the risk matrix ratings reported in the pertinent literature, due to the influence of societal risk. Still, the safety integrity levels selected for some functions based on the risk matrix ratings are considered to be reasonable. It is anticipated that this methodology will constitute a useful tool for the involved industry stakeholders.
Future research could focus on the determination of the current safety level for the fleet of conventional IWW ships, as well as the adaptation of the proposed methodology for application in other industries and investigations for other ship types. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments This study was carried out in the framework of the AUTOSHIP project, which is funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 815012 . The authors kindly acknowledge the comments and input provided by Sifis Papageorgiou from the European Maritime Safety Agency ( EMSA ), Antoon Van Coillie from Zulu Associates, Sárkány Gábor and Veres Gábor Tamás from the Transportation Safety Bureau of Hungary, and Prof Rolf Skjong from DNV . The authors affiliated with the MSRC greatly acknowledge the funding from DNV AS and RCCL for the MSRC establishment and operation. The opinions expressed herein are those of the authors and should not be construed to reflect the views of the EU, DNV AS , RCCL , EMSA , the AUTOSHIP partners or the acknowledged organisations and individuals. Appendix A Abbreviation and nomenclature list: a — annum; AIS — Automatic Identification System; ALARP — As Low As Reasonably Practicable; CAF — Cost of Averting the Fatality; E_p — individual's exposure [-]; F — frequency [per ship-year]; F_A — frequency of a single fatality per annum; F_ue — frequency of an undesired event; FI — Frequency Index; F_int^(N_F=1) — intolerable single fatality rate [per ship-year]; F_neg^(N_F=1) — negligible single fatality rate [per ship-year]; FSA — Formal Safety Assessment; GDP — Gross Domestic Product; GNP — Gross National Product; IMO — International Maritime Organisation; IR — Individual Risk; IWW — Inland Waterways; k — parameter k; L — ship length [m]; N_F — number of occupational fatalities per annum; N — number of people; N_u — maximum number of fatalities; N_E — encounter number; PLL — Potential Loss of Life; PLL_A — probability of the loss of life [fatalities per annum]; P_p — probability of an event resulting in a casualty; q — ratio of fatalities to the gross national income (GNP) [$B⁻¹]; R — economic value (revenue) [$B per year]; ROC — Remote Operation Centre; RCC — Remote Control Centre; SD — safety domain diameter [nm]; SI — Severity Index; SIL — Safety Integrity Level; T_E — average duration of an encounter; T_p — annual time a person from a specific group is exposed to the considered risk [h]; T_a — hours of one year (8,760 h); V — ship speed [kn]. | [
"ABAEI",
"ABAEI",
"AHLUWAJA",
"ANTHONYCOX",
"BV",
"BAKDI",
"BALL",
"BLOM",
"BOLBOT",
"BOLBOT",
"BOLBOT",
"BOLBOT",
"BUREAUVERITAS",
"CHAAL",
"CHAAL",
"CHANG",
"DEVOS",
"DNV",
"DOD",
"DU",
"DUIJM",
"DUIJM",
"ELORANTA",
"EUROSTAT",
"EUROSTAT",
"GARVEY",
"GARVEY",
... |
a2a413c710744bb08e7c65bbc4d4d7ce_Rumen microbial metagenomics and its application to ruminant production_10.1017_S1751731112000419.xml | Rumen microbial (meta)genomics and its application to ruminant production | [
"Morgavi, D.P.",
"Kelly, W.J.",
"Janssen, P.H.",
"Attwood, G.T."
] | Meat and milk produced by ruminants are important agricultural products and are major sources of protein for humans. Ruminant production is of considerable economic value and underpins food security in many regions of the world. However, the sector faces major challenges because of diminishing natural resources and ensuing increases in production costs, and also because of the increased awareness of the environmental impact of farming ruminants. The digestion of feed and the production of enteric methane are key functions that could be manipulated by having a thorough understanding of the rumen microbiome. Advances in DNA sequencing technologies and bioinformatics are transforming our understanding of complex microbial ecosystems, including the gastrointestinal tract of mammals. The application of these techniques to the rumen ecosystem has allowed the study of the microbial diversity under different dietary and production conditions. Furthermore, the sequencing of genomes from several cultured rumen bacterial and archaeal species is providing detailed information about their physiology. More recently, metagenomics, mainly aimed at understanding the enzymatic machinery involved in the degradation of plant structural polysaccharides, is starting to produce new insights by allowing access to the total community and sidestepping the limitations imposed by cultivation. These advances highlight the promise of these approaches for characterising the rumen microbial community structure and linking this with the functions of the rumen microbiota. Initial results using high-throughput culture-independent technologies have also shown that the rumen microbiome is far more complex and diverse than the human caecum. Therefore, cataloguing its genes will require a considerable sequencing and bioinformatic effort. Nevertheless, the construction of a rumen microbial gene catalogue through metagenomics and genomic sequencing of key populations is an attainable goal. 
A rumen microbial gene catalogue is necessary to understand the function of the microbiome and its interaction with the host animal and feeds, and it will provide a basis for integrative microbiome–host models and inform strategies promoting less-polluting, more robust and efficient ruminants. | null | [
"ACHENBACH",
"ACINAS",
"AMAYA",
"ARUMUGAM",
"ATTWOOD",
"ATTWOOD",
"BAAR",
"BALTER",
"BAYER",
"BELOQUI",
"BENNER",
"BERAMAILLET",
"BERGMILLER",
"BRULC",
"BRUMM",
"CALLAWAY",
"CHABAN",
"CHANG",
"CHAUCHEYRASDURAND",
"CLAUS",
"CLAUSS",
"CLAUSS",
"CLOKIE",
"CROSBY",
"DENG"... |
1c014aa531894fb58efb8286b3a9add9_Targeted upregulation of dMyc restricts JNK-mediated degeneration of dopaminergic neurons in the par_10.1016_j.neures.2023.10.005.xml | Targeted upregulation of dMyc restricts JNK-mediated degeneration of dopaminergic neurons in the paraquat-induced Parkinson’s disease model of Drosophila
| [
"Pragati",
"Sarkar, Surajit"
] | Parkinson’s disease is the second most common neurodegenerative disease characterized by the loss of dopaminergic neurons in the brain. Parkinson’s disease has both familial and sporadic cases of origin governed differentially by genetic and/or environmental factors. Different epidemiological studies have proposed an association between the pathogenesis of cancer and Parkinson’s disease; however, a precise correlation between these two illnesses has not yet been established. In this study, we examined the disease-modifying property of dmyc (a Drosophila homolog of the human cmyc proto-oncogene) in the paraquat-induced sporadic Parkinson’s disease model of Drosophila. We report for the first time that targeted upregulation of dMyc significantly restricts paraquat-mediated neurotoxicity. We observed that paraquat feeding reduces the cellular level of dMyc. We further noted that targeted upregulation of dMyc in paraquat-exposed flies mitigates degeneration of dopaminergic neurons by reinstating the aberrantly activated JNK pathway, and this in turn improves the motor performance and survival rate of the flies. Our study provides the first evidence that an increased cellular level of dMyc could efficiently minimize the neurotoxic effects of paraquat, which could be beneficial in designing novel therapeutic strategies against Parkinson’s disease. | 1 Introduction Neurodegenerative diseases such as Alzheimer’s disease (AD), Parkinson's disease (PD), Huntington’s disease (HD), etc. are devastating human disorders that show a progressive loss of specific brain neurons ( Gan et al., 2018 ). Amongst them, PD, the second most common neurodegenerative disease, is characterized by the loss of dopaminergic neurons causing dopamine deficiency and the onset of motor and non-motor symptoms ( Soto and Pritzkow, 2018 ).
Also, the accumulation of α-synuclein-containing Lewy bodies and tau-containing neurofibrillary tangles in brain neurons has been frequently reported in PD ( Soto and Pritzkow, 2018 ). PD has both familial and sporadic cases of origin governed differentially by genetic and/or environmental factors. Interestingly, familial PD cases are rare (less than 10% of all cases) compared to late-onset sporadic cases ( Klein and Westenberger, 2012 ). The familial form of PD is usually instigated by mutation of a specific gene, while its sporadic form is associated with variants of several genes that require an environmental trigger for disease onset ( Chai and Lim, 2013 ). For instance, mutation(s) in genes such as Synuclein Alpha ( SNCA ), Leucine-rich repeat kinase 2 ( LRRK2 ) , Parkinsonism Associated Deglycase ( DJ-1 ) , Parkin RBR E3 ubiquitin protein ligase ( PRKN ), PTEN induced kinase 1 ( PINK1 ) , etc., and exposure to chemical(s) like paraquat (PQ, 1,1′-dimethyl-4,4′-bipyridinium), 1-methyl-4-phenyl tetrahydropyridine (MPTP), and rotenone are known to trigger PD etiology ( Kline et al., 2021 ). Exposure to PQ has been widely utilized in several model organisms to mimic various PD pathological symptoms such as selective death of dopaminergic neurons, compromised motor functions, cognitive impairments, etc. ( Zeng et al., 2018 ). PQ exposure induces enhanced production of reactive oxygen species (ROS), which causes oxidative stress and, subsequently, neurotoxicity ( Zeng et al., 2018; Maitra et al., 2019 ). Some epidemiological studies suggested a negative correlation between cancer and neurodegenerative diseases ( Plun-Favreau et al., 2010 ). On the contrary, cancer types such as melanoma and brain tumour have been noted to show a positive association with PD etiology ( Pan et al., 2011; Feng et al., 2015 ).
Despite these epidemiological observations, it is difficult to establish a precise correlation between these two kinds of human illnesses due to a lack of adequate experimental evidence. Our previous reports have demonstrated a modifying property of dMyc (a Drosophila homolog of the human cmyc proto-oncogene) and the human cmyc proto-oncogene in mitigating the neurotoxic effects of pathogenic poly(Q) and tau aggregates ( Singh et al., 2014; Chanu and Sarkar, 2017 ). In view of this noted neuroprotective ability of dMyc, we investigated its rescue proficiency in the PQ-induced sporadic form of PD in Drosophila . We report for the first time that targeted upregulation of dmyc restricts PQ-mediated degeneration of dopaminergic neurons by reinstating the activity of the JNK pathway. In view of the conserved functional characteristics of Drosophila myc and human cmyc , our findings could be useful in designing novel treatment strategies against the sporadic form of PD. 2 Materials and methods 2.1 Fly stocks and genetics Drosophila stocks used in this study were reared in cornmeal/agar/yeast media at 25 ± 1 °C under a 12-h light/dark cycle. Different transgenic lines utilized in this manuscript were TH-Gal4 (BL-8848) ( Friggi-Grelin et al., 2003 ), UAS-dmyc (BL-9674) ( Johnston et al., 1999 ), and UAS-dmycRNAi (BL-25783) ( Ni et al., 2009 ). Oregon R was utilized as the wild-type strain in all experiments. 2.2 Food preparation and fly culture PQ-containing food was prepared by adding PQ (Sigma-Aldrich, USA) at 5 mM to a 1% sucrose:1.3% agar solution under aseptic conditions ( Maitra et al., 2019 ). Similarly, the control food was prepared with only 1% sucrose + 1.3% agar. Vials were stored at 4 °C and kept at room temperature for a brief period before transferring flies into them. Freshly eclosed adult flies from different genotypes were kept in PQ-containing vials with up to 10 individuals per vial.
Under similar conditions, control flies were reared in food vials with 1% sucrose + 1.3% agar. 2.3 Survival and climbing assay The longevity and climbing assays in different genotypes were performed as described earlier ( Chanu and Sarkar, 2017 ). After completing the climbing assay, flies were transferred to their respective PQ vials, and the percentage of flies crossing the marked height was plotted using GraphPad Prism 8 Software. For the survival and climbing assays, N = 150 flies were analysed per genotype. 2.4 Immunostaining Adult flies of the desired genotype were decapitated, fixed in paraformaldehyde (PFA), and the whole brains were dissected. Thereafter, blocking buffer was added for 2 hrs, followed by incubation of the tissues with the desired primary antibody overnight at 4 °C. The primary antibodies used were anti-TH (1:1000, Immunostar, USA) and anti-p-SAPK/JNK (1:100, Cell Signaling Technology, USA). Subsequently, after adequate washing, tissues were incubated in appropriate secondary antibodies for 2 hrs at room temperature. The secondary antibodies used were Alexa 488 goat anti-rabbit and Alexa 488 goat anti-mouse (1:200; Invitrogen, USA). Tissues were then counterstained with DAPI (5 μg/mL; Roche Diagnostics GmbH, Germany) and mounted in ProLong Gold antifade mountant (Molecular Probes, USA) for imaging. 2.5 Protein extraction and western blotting Adult flies from different genotypes were decapitated, and protein extraction along with western blotting was performed as described earlier ( Pragati, 2021 ). The primary antibodies utilized for western blotting were: anti-dMyc (1:500, Developmental Studies Hybridoma Bank, USA, P4C4B10), anti-p-SAPK/JNK (1:100, Cell Signaling Technology, USA, Catalogue No. #9251) and anti-β-tubulin (1:1000, Developmental Studies Hybridoma Bank, USA, E7). Secondary antibodies used were goat anti-mouse and rabbit IgG-HRP (1:2000; Merck).
Further, the blots were developed using the ECL substrate SuperSignal West Pico Chemiluminescent Substrate (Thermo Fisher Scientific, USA) and imaged utilizing a FUJIFILM LAS400 image reader. The relative signal intensity of the bands was quantified with ImageJ software and plotted using MS Excel 2010 and/or GraphPad Software for three independent blots. 2.6 Microscopy and documentation Fluorescently stained tissues were initially examined under an Olympus DP71 fluorescence microscope and subsequently scanned using a Leica TCS-SP5 II confocal microscope, maintaining identical parameters. The number of dopaminergic neurons was calculated utilizing ImageJ, and p-JNK puncta were counted across different z stacks using the draw counter tool of LAS X software. An equal number of optical sections were selected for the construction of the projection images. Adobe Photoshop CS5 software was used to assemble the final comparative images of different genotypes and/or treatment groups. 2.7 Statistical analysis The bar graphs are presented as mean ± SD and were plotted and analyzed utilizing GraphPad Prism 8 Software (version 8.1.2). For calculating statistical significance between different genotypes, one-way ANOVA followed by Tukey's post hoc test was performed, and the values were considered significant when *p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.001, and ****p ≤ 0.0001. 3 Results 3.1 Targeted upregulation of dmyc prevents PQ-induced degeneration of dopaminergic neurons and behavioral deficits PQ exposure has been widely used to model sporadic forms of PD in Drosophila ( Maitra et al., 2019; Neves et al., 2022 ). Oral feeding of food supplemented with a sub-lethal concentration of PQ causes degeneration of dopaminergic neurons in the adult Drosophila brain ( Cassar et al., 2015 ). Drosophila dopaminergic neurons are spread across eight different clusters per hemisphere, each comprising 4–13 individual neurons ( Cassar et al., 2015 ).
Due to a noted correlation between cancer and PD ( Lee et al., 2022 ), we examined whether altered expression of dmyc , a proto-oncogene, affects PQ-mediated degeneration of dopaminergic neurons in the adult brain. Dopaminergic neuron-specific upregulation and downregulation of dmyc were achieved by driving the UAS-dmyc and UAS-dmyc-RNAi transgenes, respectively, with the TH-Gal4 driver ( Friggi-Grelin et al., 2003 ). Immunostaining of the whole brain from age-matched control flies with anti-TH antibody showed the presence of normally distributed clusters of dopaminergic neurons in the dorsal region of the brain ( Fig. 1 A). Age-matched PQ-fed flies exhibited extensive degeneration of dopaminergic neurons in the PPM3, PPM1/2, and PPL1 clusters (compare Fig. 1 B with A; arrow in B; also see J). Interestingly, PQ-fed flies with elevated levels of dmyc in their dopaminergic neurons displayed a significantly reduced level of degeneration in different clusters (compare Fig. 1 C with B and A; arrow in C; also see J). On the contrary, targeted downregulation of dmyc in PQ-fed flies displayed neuronal loss (compare Fig. 1 D with B; also see J). Further, examination at higher magnification also showed a similar pattern ( Fig. 1 E-H). These observations clearly suggest that increased expression of dmyc in dopaminergic neurons plays a protective role against PQ toxicity. 3.2 Increased expression of dmyc in dopaminergic neurons alleviates PQ-induced lethality and improves neuromuscular deficits in the surviving flies Feeding a sub-lethal dose of PQ is known to cause partial lethality in adult Drosophila ( Maitra et al., 2019; Neves et al., 2022 ). Therefore, we next examined if dopaminergic neuron-specific overexpression of dmyc minimizes PQ-induced deaths in adult flies. We noted that compared to the TH-Gal4/+ flies reared on control food, 5 mM PQ feeding for 50 hrs caused the death of ∼30% of the population ( Fig. 2 A).
Intriguingly, age-matched flies with enhanced expression of dmyc in dopaminergic neurons ( TH-Gal4 >dmyc ) exhibited significantly improved survivability on PQ-supplemented food ( Fig. 2 A), whereas its downregulation ( TH-Gal4 >dmycRNAi ) caused notably increased mortality in the fly population ( Fig. 2 A). This observation implies that elevated expression of dmyc in neuronal cells helps in constraining the toxic effect of PQ. PQ administration mimics PD by causing neuromuscular impairments in Drosophila ( Navarro et al., 2014; Cassar et al., 2015 ). Therefore, we next checked if overexpression of dmyc also restrains PQ-mediated locomotory deficits by assessing the climbing ability of flies from different groups. We observed that compared to the control population, PQ feeding caused a significant locomotor impairment in flies ( Fig. 2 B). Remarkably, upregulation of dmyc in dopaminergic neurons of PQ-fed flies improved the climbing performance significantly; however, its downregulation had a more regressive effect ( Fig. 2 B). This observation further established that flies with increased expression of dmyc are much more efficient in mitigating the toxic effect of PQ. 3.3 PQ feeding reduces the level of dMyc protein in dopaminergic neurons Since flies with enhanced expression of dmyc exhibited an improved ability to alleviate the neurotoxic effect of PQ, we next investigated whether PQ feeding affects the cellular level of endogenous dMyc protein. Interestingly, western blot analysis revealed that compared to the control flies, PQ feeding caused a marked and significant decrease in the level of dMyc protein ( Fig. 3 A, row 1-compare lane 2 with 1). We observed that TH-Gal4 -driven overexpression of dmyc in the PQ-fed flies reinstated the level of dMyc protein ( Fig. 3 A, row 1-compare lane 3 with 2 and 1). On the other hand, targeted downregulation of dmyc in PQ-fed flies caused a further decrease in its protein amount ( Fig.
3 A, row 1- compare lane 4 with 1, 2 and 3). This observation indicated a critical role of dMyc in the PQ-mediated events of neurodegeneration leading to PD. 3.4 Targeted upregulation of dmyc represses PQ-induced PD etiology by restoring the level of JNK PQ exposure in Drosophila induces degeneration of dopaminergic neurons by hyperactivating the c-Jun N-terminal kinase (JNK) pathway ( Peng et al., 2004 ). Here it is worth noting that increased activity of JNK has been well documented to trigger cellular apoptosis ( Dhanasekaran and Reddy, 2017 ). Therefore, we next investigated whether altered expression of dmyc in PQ-fed flies affects the cellular level of phospho-JNK, which signifies its active state. Western blotting with a phospho-specific anti-JNK (p-JNK) antibody revealed that compared to the control flies, PQ feeding caused a notable increase in the phosphorylation of JNK, which signifies its hyperactive state ( Fig. 3 A row 2- compare lane 2 with 1; also see B; please refer to Fig. S1 for the images of uncropped blots). Interestingly, PQ-fed flies with an enhanced level of dMyc in their dopaminergic neurons showed a restored level of JNK phosphorylation ( Fig. 3 A row 2- compare lane 3 with 1 and 2; B). On the contrary, tissue-specific downregulation of dmyc in these flies enhanced the phosphorylation of JNK further ( Fig. 3 A row 2- compare lane 4 with 3 and 2; B). We next validated the findings of western blot analysis by immunostaining experiments. Immunostaining utilizing p-JNK antibody in Drosophila adult brain tissues produces a punctate staining pattern ( Cha et al., 2005 ). We noted that compared to the dopaminergic neurons (green) of TH-Gal4/+ flies raised on control food ( Fig. 3 C; also see G), PQ feeding significantly increased the frequency of p-JNK positive puncta in dopaminergic neurons (arrow in Fig. 3 D; G) and the surrounding areas.
Intriguingly, PQ-fed flies with an increased level of dMyc exhibited a near-normal abundance of p-JNK positive puncta ( Fig. 3 E; G), indicating reinstated JNK activity. In agreement with the earlier noted patterns, downregulation of dmyc aggravated the abundance as well as the size of the p-JNK positive puncta (compare Fig. 3 F to D; G), which implied a hyperactive status of the JNK pathway. Taken together, the above observations strongly suggest that overexpression of dmyc in dopaminergic neurons provides inherent protection from the neurotoxic effects of PQ by regulating the functional status of the JNK pathway. 4 Discussion The familial form of PD is generally caused by a certain gene mutation, while its sporadic form is multi-factorial and involves a complex interplay between a network of gene variants and environmental factors ( Klein and Westenberger, 2012; Chai and Lim, 2013 ). Intriguingly, certain cancer types such as skin, brain, and breast cancers show a positive correlation with PD cases ( Pan et al., 2011; Feng et al., 2015 ). Of note, PD has been reported more frequently in melanoma patients, and vice versa ( Bertoni et al., 2010 ). In view of this interesting but relatively unexplored association between cancer and PD, we investigated the impact of varying expression levels of dmyc , a conserved Drosophila homolog of the human proto-oncogene cmyc , on PQ-mediated PD in Drosophila . The herbicide PQ is known to increase the incidence of PD in humans, and has been widely utilized to induce sporadic forms of PD in Drosophila ( Zeng et al., 2018; Maitra et al., 2019 ). For the first time, we noted that tissue-specific over-expression of dmyc suppresses PQ-mediated loss of dopaminergic neurons and other associated impairments. dmyc is an evolutionarily conserved proto-oncogene that shows 26% sequence identity and significant functional similarity with its human counterpart c-myc ( Gallant et al., 1996 ).
dmyc / cmyc is known to be involved in the regulation of key cellular events such as the cell cycle, ribosome biogenesis, cell proliferation, growth, and apoptosis ( Trumpp et al., 2001; Grifoni and Bellosta, 2015 ). Of note, we have reported earlier that altered expression levels of dmyc do not make any noticeable impact on the rate of cellular proliferation in the third instar larval eye imaginal discs, and/or on the number of ommatidia in the adult Drosophila eyes ( Chanu et al., 2017 ). Therefore, it is unlikely that increased expression of dmyc in PQ-fed flies will alter the number of dopaminergic neurons in an adult brain. PQ exposure in Drosophila has been found to alter the expression pattern of several genes ( Maitra et al., 2019 ). We noted that PQ feeding decreases the cellular level of dMyc in dopaminergic neurons, and this indicates a potential involvement of dMyc in PD etiology. We noted that a reduced level of dMyc hyperactivates JNK signalling in the dopaminergic neurons of PQ-fed flies. Here it is interesting to note that loss of dMyc has been reported to activate the JNK pathway and JNK-dependent cell death ( Huang et al., 2017 ). Also, activation of JNK has been consistently witnessed in various organisms exposed to PQ, which subsequently triggers the cell death pathway in dopaminergic neurons ( Peng et al., 2005; Yang et al., 2009 ; Choi et al., 2010 ; Maitra et al., 2019 ). Interestingly, an increased cellular abundance of dMyc restricts PQ-induced degeneration of dopaminergic neurons, which in turn aids in retaining the climbing performance of the adult flies. We observed that targeted overexpression of dmyc restricts PQ-mediated degeneration of dopaminergic neurons by reinstating the JNK activity. This suggests an imperative role of dMyc in regulating JNK-mediated neurodegeneration in PQ-mediated PD etiology in Drosophila ( Fig. 4 ).
Here it is interesting to note that downregulation of dmyc is known to reduce the level of puckered or puc (a JNK phosphatase that antagonizes JNK signalling) transcription in Drosophila larvae ( Ma et al., 2017 ). Therefore, overexpression of dmyc perhaps increases the cellular pool of puc, which in turn decreases the levels of p-JNK to mitigate various PQ-induced toxic effects of PD. One of the complexities associated with PQ-induced PD is that not all exposed individuals develop the symptoms ( Vaccari et al., 2017 ). Recent genome-wide association studies (GWAS) also indicate that the presence and/or absence of different mutations/ SNPs/ alleles can make a person susceptible or resistant to PD ( Polito et al., 2016; Tran et al., 2020 ). Our study provides evidence that a threshold expression level of dMyc could efficiently minimize the neurotoxic effects of PQ. Due to the conserved functional characteristics of dmyc / cmyc , it would be worth testing if humans with polymorphism(s) leading to increased expression of cmyc are better protected against the neurotoxic effects of PQ, and vice versa. Taken together, we report for the first time that tissue-specific upregulation of dMyc restricts PQ-mediated neurotoxicity by reinstating the aberrant activation of JNK, thereby controlling JNK-dependent neurodegeneration and motor deficits. Our findings could be beneficial in designing novel therapeutic strategies against PD. Author contributions Surajit Sarkar and Pragati conceived the project. Pragati performed the experiments. Pragati and Surajit Sarkar analyzed the data. Pragati and Surajit Sarkar wrote the manuscript. Funding acquisition: Surajit Sarkar. Competing interest The authors declare no competing interests. Acknowledgments We are thankful to the Bloomington Stock Center, USA, for providing some fly stocks used in this study.
We thank Delhi University for supporting this research work under the Faculty Research Program ( FRP ) of the IoE scheme to SS. Ms. Pragati is supported by a Senior Research Fellowship ( SRF ) from the Lady Tata Memorial Trust, Mumbai, India. We also thank Dr. Soram Idiyasan Chanu and Mr. Ayush Goel for their help in performing some of the experiments. We are grateful to Ms. Nabanita Sarkar for technical support. Appendix A Supporting information Supplementary data associated with this article can be found in the online version at doi:10.1016/j.neures.2023.10.005 . | [
"CASSAR",
"CHA",
"CHAI",
"CHANU",
"CHOI",
"DHANASEKARAN",
"FENG",
"FRIGGIGRELIN",
"GALLANT",
"GAN",
"GRIFONI",
"HUANG",
"JOHNSTON",
"KLEIN",
"KLINE",
"LEE",
"MA",
"MAITRA",
"NAVARRO",
"NEVES",
"NI",
"PAN",
"PENG",
"PLUNFAVREAU",
"POLITO",
"PRAGATI",
"SINGH",
"SO... |
0984dc7ff4f5480c93cd6d70faea30ee_Environmental temperature and choline requirements in rats II Choline and methionine requirements fo_10.1016_S0022-2275(20)39580-8.xml | Environmental temperature and choline requirements in rats. II: Choline and methionine requirements for lipotropic activity | [
"Chahl, J.S.",
"Kratzing, C.C."
] | Young rats were fed choline-deficient diets and maintained at different environmental temperatures. The hepatic lipid level remained normal in rats at 2° when 25 mg of choline per 100 g of food was fed; 50 mg of choline per 100 g food was required at 21° and 100 mg of choline per 100 g food at 33° to prevent excessive lipid accumulation. These values were equivalent to a mean daily intake per rat of 3 mg of choline at 2°, 5.5 mg at 21°, and 7 mg at 33°, respectively.
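As a back-of-envelope illustration (not part of the original study), the equivalences above between dietary choline concentration and mean daily choline intake imply the mean daily food consumption per rat at each temperature; the helper function below is hypothetical, assuming daily choline (mg) = concentration (mg per 100 g food) × food eaten (g) / 100.

```python
# Illustrative back-calculation of implied daily food intake from the
# reported choline figures; function name is ours, not from the paper.
def implied_food_intake_g(daily_choline_mg, choline_mg_per_100g):
    """Grams of food eaten per day implied by a mean daily choline
    intake at a given dietary choline concentration (mg/100 g food)."""
    return 100.0 * daily_choline_mg / choline_mg_per_100g

# Reported pairs: (mg choline per 100 g food, mean daily choline in mg)
for conc, daily, temp in [(25, 3, "2°"), (50, 5.5, "21°"), (100, 7, "33°")]:
    grams = implied_food_intake_g(daily, conc)
    print(f"{temp}: {conc} mg/100 g with {daily} mg/day -> ~{grams:.0f} g food/day")
```

The implied intakes (≈12, 11, and 7 g of food per day at 2°, 21°, and 33°) follow directly from the figures above and are consistent with rats eating less at higher environmental temperatures.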
When the growth rate was slower owing to a slight inadequacy of histidine in the basal choline-deficient diet, normal hepatic lipid was maintained by supplements of 50 mg of choline per 100 g food at 21° and 33°.
Increasing the methionine content of the diet two- or three-fold from a basal value of 340 mg per 100 g food was as effective as 200 mg of choline per 100 g of food in lowering hepatic lipids at 2°, 21°, and 33°. | null | [
"CHAHL",
"STEKOL",
"BREMER",
"TUCKER",
"ECKSTEIN",
"RADOMSKI",
"HARPER",
"SELLERS",
"SELLERS",
"BEST",
"GRIFFITH",
"BEST",
"BEST",
"SNYDER",
"MILLS"
] |
c9c98bce815640bd830f0e5ca4bd156d_Global Patterns of Tissue-Specific Alternative Polyadenylation in Drosophila_10.1016_j.celrep.2013.03.022.xml | Global Patterns of Tissue-Specific Alternative Polyadenylation in Drosophila
| [
"Smibert, Peter",
"Miura, Pedro",
"Westholm, Jakub O.",
"Shenker, Sol",
"May, Gemma",
"Duff, Michael O.",
"Zhang, Dayu",
"Eads, Brian D.",
"Carlson, Joe",
"Brown, James B.",
"Eisman, Robert C.",
"Andrews, Justen",
"Kaufman, Thomas",
"Cherbas, Peter",
"Celniker, Susan E.",
"Graveley, Br... | null | (Cell Reports 1 , 277–289; February 23, 2012) In the original version of this article, the GEO accession number given in the Experimental Procedures section was written as GSE3390, but the correct number is GSE33905. The corrected paragraph appears below: One nanogram of the strand-specific RNA-Seq libraries was reamplified by PCR using a primer complementary to the 5′ adaptor and a second primer complementary to the 3′ adaptor with six T residues at the 3′ end. After 10 rounds of amplification, the 3′ primer with the T extension was replaced with a 3′ primer complementary to the adaptor with a 5′ extension containing a 6 nt index sequence and a sequence complementary to the flow cell primer. After an additional 15 rounds of amplification, the libraries were quantitated, 10–12 libraries were pooled together and sequenced on an Illumina HiSeq2000 using paired-end 100 bp and 6 bp index read chemistry. Reads were split into the respective samples using the index sequence and aligned as described above. All of the raw fastq data and alignments of poly(A)-spanning reads from the poly(A)-enriched libraries were deposited at NCBI Gene Expression Omnibus under Series GSE33905. The authors regret this error. | [] |
1ca364c077bb4166ba0f89e0af79b5de_Integrating Technology and Culture Smartphone Validation of a Food Frequency Questionnaire for Nutri_10.1016_j.cdnut.2025.107500.xml | Integrating Technology and Culture: Smartphone Validation of a Food Frequency Questionnaire for Nutrient Intake Estimates in the Adult Population of Trinidad and Tobago | [
"Foster-Nicholas, Lesley Ann",
"Dyett, Patricia",
"Heskey, Celine",
"Shavlik, David",
"Siapco, Gina"
] | Background
Trinidad and Tobago is home to a multiethnic population, each with distinct dietary traditions. Given this diversity, it is essential to validate a food frequency questionnaire (FFQ) that captures the local food items that contribute to the nation’s unique dietary culture.
Objectives
This study aims to assess the reproducibility and validity of a semiquantitative FFQ to estimate nutrient intake in the adult population of Trinidad and Tobago.
Methods
A 139-item semiquantitative electronic food frequency questionnaire (e-FFQ), developed using Google Forms, captured culture-specific foods commonly consumed in Trinidad and Tobago. The self-administered e-FFQ was distributed to 91 participants aged 18 and older, with 2 administrations 3 mo apart. The first administration of the e-FFQ was validated against the weighted mean of 4 food records with digital images as the reference method. Data were analyzed using SPSS Version 26 to assess validity and reproducibility through paired t-tests, correlations, and cross-classification.
Results
Participants had a mean age of 38 ± 9.6 y, with 22% male and 78% female. Correlations between the e-FFQ and food records ranged from moderate (r = 0.59 for vitamin C) to high (r = 0.83 for carbohydrates). Cross-classification agreements varied from 69% for cholesterol to 89% for fiber and vitamin A. Energy-adjusted correlations averaged r = 0.37, ranging from r = 0.22 for polyunsaturated fatty acids to r = 0.67 for cholesterol. Cross-classification indicated that 61% of e-FFQ estimates were correctly classified within ±1 quintile.
Conclusions
The culture-specific e-FFQ demonstrates strong reproducibility and validity, making it a valuable tool for assessing nutrient intake in Trinidad and Tobago’s adult population. | Introduction Recent advancements in technology have markedly transformed dietary assessment methodologies, integrating innovative digital tools such as smartphone applications, web-based programs, and automated systems [ 1 ]. These technological strides have enhanced the accuracy and efficiency of dietary data collection, processing, and analysis, addressing many limitations associated with traditional methods [ 2 ]. The shift toward digital and web-based food frequency questionnaires (FFQs) has revolutionized large-scale epidemiological studies by reducing costs and improving response completeness [ 1 ]. Automated FFQs, including those implemented on computer and web platforms, offer significant advantages such as expedited data collection, improved data quality, and reduced incidences of missing data [ 3 ]. These digital tools can also utilize visual aids to assist respondents in estimating portion sizes and provide enhanced privacy and confidentiality, which can reduce under-reporting [ 4 ]. Moreover, the flexibility of online FFQs enables continuous data collection from any location, with secure electronic storage, helping to minimize risk of data loss [ 4 , 5 ]. In addition to FFQs, technological innovations have facilitated self-monitoring of dietary intake via mobile apps such as MyFitnessPal and My Meal Mate, and automated systems like ASA24 for 24-h recalls and diet records [ 5 ]. Researchers are encouraged to consider participants' readiness for these automated methods and to adhere to best practice guidelines to ensure the appropriate use of dietary assessment tools. For culturally diverse populations, accurate dietary assessment is crucial due to the varying prevalence of noncommunicable diseases (NCDs) and dietary patterns linked to health outcomes. 
Effective dietary assessment tools are needed to capture culturally specific foods and dietary habits, which can inform public health policies and interventions aimed at reducing NCD risks. Designing a culture-specific FFQ is essential to accurately reflect the dietary practices of a given population [ 6 , 7 ], and it must be validated for reproducibility and relevance to the specific ethnic group [ 8 ]. On the basis of specific context and purpose for development, various researchers have successfully used particular approaches for evaluating validity and reliability of FFQs over the years [ 9–12 ], whereas others have identified the need to alter or improve on different aspects of the process for specific measurement outcomes [ 13 , 14 ]. Trinidad and Tobago, a multiethnic Caribbean nation and the industrial capital of the region, boasts a 99% adult literacy rate, with 80.1% of the population using the internet and over 900,000 active on social media [ 15 ]. Despite these advancements, the country faces a high prevalence of NCDs, which account for 80% of all deaths [ 16 ]. Given these health challenges, there is an urgent need for a dietary assessment tool that accurately reflects the diverse nutrient intake of the population. Thus, we developed a semiquantitative e-FFQ tailored to Trinidadian and Tobagonian diets incorporating local and street foods. This e-FFQ was previously evaluated for its reproducibility and validated in assessing habitual food/food groups intake and was found to be a valid tool for assessing and ranking food category intake estimates for the population [ 17 ]. Given that a culture-specific FFQ can be a cost-effective tool to examine the role of diet in health and disease, particularly in addressing key NCDs at the national level in Trinidad and Tobago, further evaluation of this e-FFQ is warranted to determine its effectiveness in estimating habitual nutrient intake. 
This study aims to evaluate the reproducibility and validity of a comprehensive culture-specific semiquantitative e-FFQ developed for adults in Trinidad and Tobago in estimating nutrient intakes. This e-FFQ has the potential to be an effective tool in measuring habitual nutrient intake across the population of Trinidad and Tobago to identify deficiencies and excesses as well as diet quality. When used alongside other data, for example, biochemical markers, anthropometric and/or other health parameters, this e-FFQ may help in determining optimal nutrient intakes associated with NCD risk prevention. Methods Study design A cross-sectional observational study was conducted to evaluate the reproducibility and validity of a culture-specific semiquantitative electronic food frequency questionnaire (e-FFQ) to assess habitual dietary intake over the past 3 mo among adults in Trinidad and Tobago. This study was described elsewhere [ 17 ]. Briefly, the e-FFQ was initially sent via email to the participants from March to April 2019 and readministered between June and July 2019. During the interval between the 2 administrations, participants recorded their food intake for 4 separate days using their smartphones. Four 1-d food records (FRs) that covered 2 weekdays and 2 weekend days were required from each participant during the 3-mo duration of the study. The period of FR collection (12 wk) was divided into 4 3-wk periods. During each 3-wk period, one FR was collected from each participant as illustrated in Figure 1 . Participants were divided into 3 groups of 35 ( n = 105) and each group was further divided into 6 subgroups and assigned a day in the week to record their food intake between Saturday and Thursday of each week of the 3-wk periods. A total of ∼ 105 FRs were collected each 3-wk period. The subgroups were alternated to ensure each subgroup recorded 2 weekdays and 2 weekend days over a 12-wk period. 
Participants had the choice of sending their FRs via WhatsApp or email. A Google calendar linked to an email account ( ttfoodfreq@gmail.com ) was used as an internal tracking system for sending day- and night-before reminders prompting participants to bring their fiducial marker and to charge their smartphones for use the next day. Study participants Participants were recruited through email, social media platforms, and professional associations. These platforms included Facebook groups such as Trinidad and Tobago Association of Teacher Educators, Trinidad and Tobago Police Service Social and Welfare Association, Trinidad and Tobago Registered Nurses Association, Foodie Nation, Caribbean Kitchen, and Trini Food and Recipes. Those interested completed a brief screening questionnaire via Google Forms to determine their eligibility. Adults aged 18 y and above who had internet access and owned a smartphone were included. Those who had a medical condition requiring a therapeutic diet (such as Crohn’s disease, celiac disease, or end-stage renal disease) or who would be out of the country during the study period were deemed ineligible. After 125 individuals completed the screening form, 9 were excluded for being out of the country during the study, 3 for not providing complete contact information, and 1 for having Crohn's disease. An information package and informed consent form were emailed to the 112 eligible participants. On returning the signed consent forms, participants received the e-FFQ for self-administration. Participants could ask questions via email or phone. Of the 112 eligible participants, 14 (12%) did not complete the first e-FFQ administration, and 7 (7%) of those who completed it did not finish the second administration, resulting in a final sample size of 91 (10 from Tobago and 81 from Trinidad). All 91 participants provided 3–5 1-d FRs.
The study was approved by the Loma Linda University Institutional Review Board (IRB #5180409) and informed consent was obtained from each participant before administering the e-FFQ. Electronic food frequency questionnaire The semiquantitative e-FFQ, designed as an electronic survey using Google Forms, included detailed instructions and examples. The e-FFQ was adapted from an existing 146-item FFQ originally developed by Ramdath et al. [ 7 ]. Similar foods were grouped together (for example, coconut bread, sweet bread, and coconut drops). The final e-FFQ included 129 items from the original FFQ and 14 popular street foods, totaling 139 items [ 11 ]. The e-survey consisted of 2 parts: a 139-item FFQ and a demographic section. The first part contained 139 food items grouped according to the Caribbean’s Six food groups [ 18 ] (see Supplemental Table 1 ). Intake frequency for all food items except water was assessed using 8 categories: never or less than once per month, 1–3 times per month, once per week, 2–4 times per week, 5–6 times per week, once per day, 2–3 times per day, and ≥4 times per day. Water intake frequency was recorded as times consumed per day [ 17 ]. Food items of similar nutrient composition were grouped under the following categories: 1) street foods; 2) staples, with subcategories of i) breads, cakes, and cereals, ii) rice and pasta, and iii) ground provisions; 3) vegetables; 4) legumes; 5) fruits; 6) food from animals, with subcategories of i) eggs, dairy, and dairy substitutes and ii) meat and meat substitutes; and 7) fats and oils. Additional categories were created to capture consumption patterns of 8) water, 9) nonalcoholic beverages, 10) alcoholic beverages, and 11) sweets. The street food category was placed first to minimize duplicate reporting of foods on the e-FFQ, because street foods are mainly mixed local dishes which may comprise individual food items on the FFQ.
For example, bake and shark/fried fish is a common street food, but its component foods can be eaten individually with other food items. Instructions were provided to avoid reporting the same foods in multiple sections. The demographic and anthropometric section included anthropometric measurements (weight, height) as well as age, sex (male, female), marital status, religion, race (Afro-Trinidadian, Indo-Trinidadian, Mixed), education (some university or higher/secondary school), employment, physical activity, smoking, dietary supplements, and chronic disease (history of one or more NCDs, or none). BMI was calculated from self-reported weight and height, as seen in the Supplementary material Trinidad and Tobago Dietary Assessment Questionnaire ( Table 1 ). FRs with digital photographs To validate the e-FFQ, multiple 1-d FRs with digital photographs were used as the reference method; this method is described elsewhere [ 19 ]. Briefly, the FR entails the use of a smartphone to digitally photograph foods/meals that were eaten and to text the name of the meal, the time and place of intake, and the names of the foods and the corresponding amounts eaten. Participants were to photograph and text a description of every eating occasion. Both the photographs and texts were sent via email or WhatsApp to a designated contact. Participants were required to include a fiducial marker, placed according to instructions, before taking pictures of what they were about to eat. A fiducial marker specifically designated for each participant was sent a week before the training on fiducial marker placement. The training was necessary to aid the researcher in portion size estimation and to make sure the participants were not submitting internet images of foods. Each 1-d FR included text descriptions and photographs of meals/foods with fiducial markers, as seen in Supplemental Figure 1 .
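Before the analysis steps, it is worth seeing how e-FFQ responses become quantities: each of the 8 frequency categories must be converted to a times-per-day multiplier that scales the portion weight. A minimal Python sketch follows; the per-category multipliers are illustrative midpoint assumptions, since the paper does not publish the exact conversion factors used.

```python
# Map the e-FFQ's 8 frequency categories to times-per-day multipliers.
# NOTE: these midpoint values are illustrative assumptions; the paper does
# not publish the exact conversion factors applied during analysis.
FREQ_PER_DAY = {
    "never or <1 per month": 0.0,
    "1-3 times per month": 2.0 / 30,
    "once per week": 1.0 / 7,
    "2-4 times per week": 3.0 / 7,
    "5-6 times per week": 5.5 / 7,
    "once per day": 1.0,
    "2-3 times per day": 2.5,
    ">=4 times per day": 4.0,
}

def daily_grams(portion_g: float, category: str) -> float:
    """Mean daily intake in grams = portion weight x daily frequency."""
    return portion_g * FREQ_PER_DAY[category]
```

Under these assumed midpoints, a 100 g portion reported "once per week" would contribute roughly 14.3 g/d.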
A doctoral-level registered dietitian, trained in collecting FRs and in the use of Nutrition Data System for Research (NDS-R) analysis, reviewed and clarified any incomplete or unclear data before collating the images and texts into a PowerPoint document in which each slide includes an embedded ruler. One PowerPoint document with 3 or more slides recording the meals eaten in a day represents a 1-d FR. Nutrient intake estimates Given the lack of a comprehensive food composition database for Caribbean countries, particularly Trinidad and Tobago, a database encompassing most commonly available foods was utilized. Dietary intake data from both the e-FFQ and FRs were analyzed using the NDS-R software version 2018, developed by the Nutrition Coordinating Center, University of Minnesota. This software features a database of over 18,000 foods and provides values for 174 nutrients, nutrient ratios, and other food components. Local composite dishes and street foods were analyzed by creating recipes within the NDS-R software based on traditional recipes from a well-established Trinidad and Tobago cookbook, the Naparima Girls’ High School Cookbook. Nutrient intake calculations were performed by multiplying the weight in grams of each food item by its consumption frequency, with e-FFQ responses converted to mean daily intake in grams. Data Analysis Sample size A sample size of 84 was required for this study with a medium effect size ( r = 0.3), a power of 0.80, and an alpha level of 0.05. To account for a 20% dropout rate, the adjusted sample size was 101. Reproducibility analysis To evaluate the reproducibility of the e-FFQ responses, data distributions were assessed, and non-normal distributions were log-transformed. Descriptive statistics, including means and SDs, were calculated, and paired t -tests compared means. Correlation analyses assessed the agreement between nutrient estimates from the 2 administrations of the e-FFQ.
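The a priori sample size can be checked with the standard Fisher-z approximation for detecting a Pearson correlation. This is a textbook formula, not necessarily the software or rounding convention the authors used, so it lands at 85 rather than the reported 84:

```python
import math
from statistics import NormalDist

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size to detect a Pearson correlation r with a two-sided test,
    via the Fisher z approximation: n = ((z_{a/2} + z_b) / atanh(r))^2 + 3."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = nd.inv_cdf(power)           # 0.84 for power = 0.80
    return math.ceil(((z_a + z_b) / math.atanh(r)) ** 2 + 3)

# For r = 0.3 this gives 85, essentially the reported 84 (the gap comes
# from rounding conventions). Inflating the reported 84 by the stated 20%
# dropout rate reproduces the adjusted target of 101.
```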
Pearson correlations measured the linear association between both administrations of the e-FFQ. Energy adjustment was calculated using the residual method. Residuals were calculated in SPSS with total energy as the independent variable and nutrient intake as the dependent variable. Cross-classifications of nutrient intakes between e-FFQ1 and e-FFQ2 were analyzed by dividing intakes into quintiles to determine the proportion of participants categorized into the same quintile or within ±1 adjacent quintile. Validation analysis For validation purposes, 11 participants were excluded due to implausible dietary intake values recorded in the FFQ [ 9 , 20 ]. For this study, we used a more conservative threshold that aligns with established practices. Implausible dietary intake values were defined as energy intakes below 900 kcal or above 5000 kcal for males, and below 600 kcal or above 3500 kcal for females. To validate the e-FFQ, 4 1-d FRs (2 from weekends and 2 from weekdays) were collected to reflect habitual intake. The weighted mean nutrient intake from the FRs was calculated using the formula: Weighted mean of nutrient intake = [5/2 × (WD1 + WD2) + (WE1 + WE2)] / 7, where WD represents weekday intake and WE represents weekend intake. This weighted mean was compared with nutrient estimates from the first e-FFQ administration using paired t -tests. Crude nutrient intakes were log-transformed to approximate normal distributions, and Pearson correlations measured the linear association between e-FFQ nutrient estimates and FRs, adjusted for energy using the residual method. Residuals were calculated in SPSS with total energy as the independent variable and nutrient intake as the dependent variable. To correct for within-person variation due to repeated measurements in the reference method (FR), 2 synthetic wk were created, each consisting of 1 weekday and 1 weekend day.
The nutrient intakes for weeks 1 and 2 were then adjusted for total energy intake and compared with the FFQ. Cross-classifications of nutrient intakes between the e-FFQ and FRs were assessed by dividing intakes into quintiles to determine the proportion of participants correctly categorized into the same or adjacent quintiles. The Bland–Altman plot was used to evaluate the performance of the e-FFQ compared with the reference method (weighted mean of FRs). This analysis plotted the difference in intake between the e-FFQ and FRs on the y-axis against the mean of the e-FFQ and FRs on the x-axis, with limits of agreement (mean difference ± 1.96 SD) displayed to illustrate the degree of agreement between the 2 methods. Results The participants in the study were predominantly adults, with a mean age of 38 y, indicating that the sample was largely in the middle stages of adulthood. On average, participants weighed 86.3 kg, with a mean BMI of 30, indicating that the majority were classified as either overweight or obese. Specifically, only 25% of the participants had a BMI within the normal range (18.5–24.9), whereas the remaining 75% exceeded this range, highlighting a significant prevalence of elevated body weight in the group. Educational attainment among the participants was notably high, with the vast majority (89%) having achieved at least some level of university education, which suggests a well-educated group. Reproducibility of FFQ A total of 91 participants completed both e-FFQ administrations. Table 2 shows the mean nutrient intakes in the first and second administrations of the e-FFQ and the mean difference between e-FFQ1 and e-FFQ2. The mean reported intake of nutrients was higher in the first administration (e-FFQ1) than in the second. The correlations for nutrients varied from moderate ( r = 0.59) for vitamin C to high ( r = 0.85) for choline ( Table 2 ). The mean correlation coefficient for nutrients was 0.7.
Most nutrients showed strong crude correlations, indicating good reproducibility between e-FFQ1 and e-FFQ2. The energy-adjusted correlations were slightly lower, reflecting variability independent of energy intake. Nutrients such as choline, cholesterol, and carbohydrates maintained high reproducibility across all adjustments. Cross-classification into quintiles showed that a mean of 46% of the responses in both administrations of the e-FFQ were in agreement. Agreement within 1 quintile (that is, exact ±1 adjacent quintile) ranged from 73% for omega-3 to 89% for fiber and vitamin A. Gross misclassification across categories between e-FFQ1 and e-FFQ2 ranged from 0% to 3.2%. Validation of FFQ The weighted mean of 4 FRs was used as the reference method to assess the ability of the e-FFQ to estimate the intake of macronutrients and selected micronutrients. The mean intakes and the difference in means between the e-FFQ1 and the FRs are shown in Table 3 . The mean nutrient intake estimates of the e-FFQ1 were lower than those of the reference method. Crude (energy-unadjusted) correlations ranged from 0.11 for total fats to 0.62 for cholesterol. Energy adjustment increased the correlations, which then ranged from 0.21 for total fats to 0.67 for cholesterol (see Table 3 ). Correlations were moderately strong ( r ≥ 0.5) for cholesterol, fiber, trans-fat, choline, and potassium, and moderate (0.3 ≤ r < 0.5) for carbohydrates, protein, SFAs, and vitamins A, C, and E. Deattenuation of the correlation coefficients, applied to correct for within-person variation in intake in the weighted mean of FRs, improved the coefficients only for some nutrients (zinc, trans-fat, vitamin C, and vitamin B12). For the cross-classification analysis, the mean exact agreement ±1 adjacent quintile increased from 60% to 63% after adjusting for total energy intake. Cross-classification showed that a mean of 28% of participants were classified with exact agreement.
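The quintile cross-classification summarized above (same quintile, within ±1 adjacent quintile, gross misclassification) can be reproduced with a short rank-based sketch. Tie handling is an implementation detail not specified in the text, so ties are broken by input order here:

```python
def quintile_rank(values: list) -> list:
    """Assign each value a quintile 0-4 by rank (ties broken by order)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    rank = {i: r for r, i in enumerate(order)}
    return [rank[i] * 5 // len(values) for i in range(len(values))]

def cross_classification(a: list, b: list) -> tuple:
    """Proportions in the same quintile, within +/-1 quintile, and grossly
    misclassified (opposite extreme quintiles) between two methods."""
    gaps = [abs(qa - qb) for qa, qb in zip(quintile_rank(a), quintile_rank(b))]
    n = len(gaps)
    return (sum(g == 0 for g in gaps) / n,   # exact agreement
            sum(g <= 1 for g in gaps) / n,   # exact or adjacent
            sum(g == 4 for g in gaps) / n)   # gross misclassification
```

For two perfectly concordant rankings the function returns (1.0, 1.0, 0.0); for perfectly discordant ones the gross-misclassification share is maximal.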
Cross-classification for exact agreement ±1 adjacent quintile between the FFQ and FRs ranged from 48% for sodium to 76% for cholesterol after adjusting for total energy intake. Figure 2 shows the Bland–Altman plots for energy, carbohydrates, proteins, and fats. For all macronutrients and energy, most points fell within the 95% limits of agreement, with few observations outside them. Discussion This study assessed the validity and reproducibility of a culture-specific e-FFQ within the adult population of Trinidad and Tobago. The e-FFQ featured foods commonly consumed by participants, as corroborated by their weighted mean of FRs. Compared with paper-based FFQs, which involve manual data entry and high labor costs, and online FFQs, which incur maintenance costs and have limited accessibility, the electronic FFQ offers advantages in distribution, administration, and confidential response collation [ 9 ]. This approach was well-suited to Trinidadian and Tobagonian adults, who are highly educated and have internet and smartphone access [ 15 ]. Results of the study showed that the e-FFQ has good validity when compared with the weighted mean of FRs. The mean estimated intakes of the evaluated macro- and micronutrients did not differ significantly between the 2 methods. Mean intakes were lower in the e-FFQ, indicating that participants underestimated energy and nutrient intake when completing it. In contrast, other culture-specific validation studies often find overestimation of intake on FFQs [ 9 , 21–23 ]. This discrepancy might be due to the anthropometric profile of our study population, in which a high proportion of participants were overweight or obese. Previous research suggests that individuals with higher BMIs may under-report energy intake [ 24 , 25 ].
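The Bland–Altman analysis behind Figure 2 reduces to a bias (mean paired difference) and a pair of limits of agreement; a minimal pure-Python sketch:

```python
from statistics import mean, stdev

def bland_altman(ffq: list, fr: list) -> tuple:
    """Bias and 95% limits of agreement between e-FFQ and FR estimates:
    mean difference +/- 1.96 * SD of the paired differences."""
    diffs = [x - y for x, y in zip(ffq, fr)]   # y-axis of the plot
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)               # sample SD (n - 1 denominator)
    return bias, bias - spread, bias + spread
```

On the plot itself, each participant's difference is paired with the mean of the two methods on the x-axis; the function above supplies the three horizontal reference lines.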
Correlation coefficients in our study ranged from 0.18 to 0.67, which aligns with validation studies in other Caribbean countries, where correlations ranged from 0.02 to 0.66 for Barbados and 0.17 to 0.86 for Jamaica [ 26 , 27 ]. Willett's criteria suggest that acceptable correlation coefficients for FFQ validation are between 0.4 and 0.7 [ 9 ]. However, some coefficients, especially for total fat, omega-3 fatty acids, and sodium, were not significant even after adjusting for energy intake. This might be due to subject recall error or variations in intake. The validity correlation coefficients for macronutrients are generally higher than those for micronutrients. However, in this study, the validation correlation coefficients for some macronutrients were lower ( r < 0.3), whereas those for certain micronutrients, such as vitamin D and choline, were higher ( r = 0.4–0.5). After adjusting for energy intake, the correlation coefficients for carbohydrates and proteins showed a slight increase, from ∼0.3 to 0.4, whereas cholesterol’s correlation coefficient rose to 0.67. However, the correlation coefficients for fat remained insignificant. This may be attributed to energy adjustments reducing between-subject variation, which could result in lower or smaller correlations [ 21 ]. Cross-classification was used to assess the strength of agreement between the 2 methods by ranking participants according to their intake. Although most studies use quartiles, this study ranked participants in quintiles. We found that 28% had exact agreement, 61% had exact or ±1 adjacent agreement, and only 3% were grossly misclassified. In reviewing the literature, most reproducibility studies administered the second FFQ between 1 mo and 1 y after the first. It has been suggested that the second FFQ should be administered within a time frame that prevents participants from recalling their previous answers, with a maximum of 1 y to avoid attenuating reproducibility [ 9 ]. 
For this study, participants were asked to complete the second e-FFQ after 3 mo. The correlation coefficients for reproducibility for all nutrients ranged from 0.6 to 0.8, which meets or exceeds the commonly cited acceptable range of 0.5–0.7, indicating good reproducibility of the e-FFQ [ 28 , 29 ]. Cross-classification into quintiles showed that the e-FFQ demonstrated high reproducibility, with all nutrients having exact ±1 adjacent agreement ranging from 67% to 89%, and gross misclassification was minimal at just 1%. This strong reproducibility is likely due to the short interval between the first and second e-FFQ administrations, as shorter time frames typically result in higher reproducibility, which tends to decrease over longer periods [ 9 , 28 ]. Participants reported higher nutrient intake estimates on the first e-FFQ, which is consistent with findings from other reproducibility studies [ 9 , 28 , 29 ]. These higher estimates may be attributed to learning effects or participant fatigue after completing the first e-FFQ and the 4 1-d FRs [ 9 , 29 ]. Limitations The sample was not fully representative, with only 19% male and 9% Indo-Trinbagonian participants. Although 91 participants completed both e-FFQ administrations and the FRs, 11 were excluded due to energy intake outliers, reducing the sample size to 80 and potentially affecting the validity of the e-FFQ. The a priori power analysis indicated that 84 participants were needed, so this reduced sample size impacted the ability to detect between-person variability. Future research The dietary habits and nutritional patterns of individuals living in Trinidad and Tobago remain underexplored, with limited research available on their specific dietary intakes. Existing studies utilizing FFQs often lack cultural specificity or proper validation for the local population [ 30–32 ]. The current study’s use of the e-FFQ and FRs presents an opportunity for further analysis, particularly in examining associations between diet and NCDs.
Alongside dietary data, demographic information (including age, ethnicity, education, employment, physical activity, and self-reported NCDs) was collected, providing valuable context for understanding dietary influences on health. The study recommends several areas for deeper investigation, including: 1. The relationship between dietary trends and NCDs. 2. Macronutrient consumption and dietary patterns among the 2 major ethnic groups. 3. Socioeconomic disparities in diet. 4. Links between food groups and cancer risks in Afro and Indo Trinidadians. 5. Cultural and religious impacts on dietary habits. 6. Nutritional contributions of street foods. 7. Associations between street food consumption and cardiovascular/metabolic diseases. 8. Physical activity, dietary habits, and awareness of heart disease risks. 9. Impacts of nutrition policies on public health and NCD prevention. To enhance future research, it is suggested to expand sample sizes and improve representation of males and various ethnic groups, while also extending studies to include children, adolescents, and pregnant women. Additionally, refining the e-FFQ by removing low-intake items and including more commonly consumed sweetened beverages could improve its relevance and accuracy. In conclusion, the culture-specific e-FFQ demonstrated good reproducibility and validity for assessing nutrient intake in Trinidad and Tobago, comparable with other Caribbean FFQs. It is an effective tool for evaluating dietary intake and its association with NCDs. The study also found moderate correlations for several micronutrients linked to NCDs and maternal health, highlighting the e-FFQ's potential for use in future research to determine diet and health associations.
Author contributions The authors’ contributions were as follows – LAF-N, GS: designed research; LAF-N: conducted research; DS, LAF-N: analyzed data; LAF-N, PAD, CH, GS: wrote the paper; LAF-N: had primary responsibility for final content; and all authors: read and approved the final manuscript. Data availability Data described in the manuscript, code book, and analytic code will be made available on request pending application. Funding The authors reported no funding received for this study. Conflict of interest The authors report no conflicts of interest. Appendix A Supplementary data The following are the Supplementary data to this article: Multimedia component 1 Multimedia component 2 Supplementary data to this article can be found online at https://doi.org/10.1016/j.cdnut.2025.107500 . | [
"ZHAO",
"DAS",
"PANNEN",
"LUCASSEN",
"THOMPSON",
"SHARMA",
"RAMDATH",
"AHMADNEZHAD",
"WILLETT",
"PERREAULT",
"SYAUQY",
"GU",
"KIRKPATRICK",
"FRONGILLO",
"FOSTERNICHOLAS",
"SEGOVIASIAPCO",
"BANNA",
"ELKINANY",
"ATHANASIADOU",
"GARCIARODRIGUEZ",
"BURROWS",
"PAKSERESHT",
"JA... |
27de8718155048859ea5a67e7181a0c6_Hypothalamic hamartoma in paediatric patients Clinical characteristics outcomes and review of the li_10.1016_j.nrleng.2011.12.006.xml | Hypothalamic hamartoma in paediatric patients: Clinical characteristics, outcomes and review of the literature | [
"Castaño De La Mota, C.",
"Martín Del Valle, F.",
"Pérez Villena, A.",
"Calleja Gero, M.L.",
"Losada Del Pozo, R.",
"Ruiz-Falcó Rojas, M.L."
] | Objective
To describe the epidemiological and clinical-electroencephalographic characteristics, and associated morbidity of patients with hypothalamic hamartoma, as well as the treatment followed and outcomes.
Patients and methods
We have retrospectively reviewed the medical histories of 10 patients diagnosed with hypothalamic hamartoma by magnetic resonance imaging (MRI) over the last 20 years.
Results
The age of onset of epilepsy in patients with hypothalamic hamartoma in our series was between the first days of life and 2 years. Of the 10 total patients, 8 had epileptic seizures during the course of the disease. All of them had gelastic seizures, in addition to other types of seizures, the most common being simple partial seizures. The electroencephalographic findings recorded were highly variable. One of the patients developed epileptic encephalopathy. Five patients had some kind of conduct disorder. Five patients had cognitive problems. At least 2 different antiepileptic drugs (AEDs) were used in each of the 8 patients who had seizures, and in 6 of these some type of non-pharmacological treatment was also used with the objective of seizure control. Acceptable control over epilepsy was achieved in only 3 of the 8 patients. Five patients of the series developed precocious puberty. The average follow-up time of the series was approximately 6 years.
Conclusions
Epilepsy is the most frequent manifestation of hypothalamic hamartomas. Most cases were drug-resistant, which led to difficulties in the management of these patients, requiring surgery for their control on many occasions. Psychiatric comorbidity and cognitive impairment are common. | Introduction Hypothalamic hamartoma is a non-neoplastic malformation that appears in the hypothalamus between the infundibular recess and the mammillary bodies. It is associated with endocrine and neurological symptoms. The prevalence of this tumour in children and adolescents is approximately 1–2 cases/100 000 inhabitants. In most cases, it is a sporadic tumour, but in rare cases it may be associated with Pallister-Hall syndrome, an autosomal dominant disorder which includes additional congenital malformations such as polydactyly, imperforate anus and spina bifida or bifid uvula. 1 2 One of the main characteristics of hamartoma is its intrinsically epileptogenic activity, due to the presence of clusters of small GABAergic interneurons which fire spontaneously. 3–6 Gelastic seizures are one of the most characteristic and frequent symptoms in patients with hypothalamic hamartoma. These seizures appear in the early years of life (some cases have even been described in neonates), with brief, stereotypical and frequent episodes (sometimes in clusters) of unprovoked and automatic laughter, without any sense of joy or loss of consciousness, although there may be a brief decrease in consciousness. It is usually accompanied by autonomic signs (tachycardia, breathing disorders, flushing, pupil dilation, etc.). Some patients experience gelastic and dacrystic (crying) seizures at the same time. Patients having these seizures may exhibit groaning and flushing, followed soon after by crying. This may be accompanied by orofacial automatisms. 7–9 There are several descriptions of patients with status gelasticus in the literature. 
10–14 Patients with hypothalamic hamartoma may suffer other kinds of epileptic manifestations, such as complex partial and generalised seizures. Their development is attributed to a process of secondary epileptogenesis. 9,12–15 The surface electroencephalogram (EEG) has a limited ability to show epileptiform activity in this pathology, given the deep location of this lesion and the complex connections of the hamartoma. During the early stages of the disease, intercritical EEG findings are usually normal and gelastic seizures show a diffuse depression of background activity. 16–18 The clinical spectrum of hypothalamic hamartoma is quite variable; patients may have asymptomatic tumours, isolated endocrine disorders such as precocious puberty, or suffer from the syndrome described by Berkovic in 1998 as early-onset gelastic epilepsy and hypothalamic hamartoma (precocious puberty). This syndrome is characterised by catastrophic epileptic encephalopathy and accompanied by cognitive problems and severe behavioural disorders. 19–22 Epilepsy associated with hypothalamic hamartoma is typically refractory to AEDs. Achieving good seizure control is rare, even when administering high doses of AEDs in polytherapy. 1,11 It has been shown that hamartoma resection is one of the best options for controlling seizures, since they are known to be resistant to AEDs. In addition, this approach produces improvements in cognitive problems and behavioural disorders. 23,24 Several surgical approaches have been proposed for resecting hamartomas (microsurgical resection or disconnection, endoscopic resection). However, all those procedures entail substantial surgical risks. For that reason, unconventional surgical procedures delivering acceptable outcomes have been developed recently (gamma-knife radiosurgery, radioactive seed implants, etc.).
25,26 Patients and methods We reviewed the clinical histories of the patients recorded in our databases as being diagnosed with hypothalamic hamartoma in the last 20 years (between 1990 and 2010). We obtained epidemiological data (age, sex, race, pregnancies, childbirth, neonatal period, family history), clinical data (age at diagnosis, symptoms, diagnostic delay, associated comorbidity), complementary test results (EEG, video-EEG, brain magnetic resonance imaging), neuropsychological assessments, and any treatments received. In cases requiring surgery, we collected data regarding the type of surgery, age at surgery, and any surgical complications. All patients in our group were paediatric patients (age 0–12 years) at the time of diagnosis. They received medical follow-up in our department for a mean of 6 years. All patients were treated in our paediatric neurology, neurosurgery, and endocrinology departments, as needed. No patients were lost to follow-up. The 5 patients who underwent neuropsychological assessment completed age-adapted cognitive and language tests. Results Epidemiological and perinatal data ( Table 1 ) Of the 10 patients in our series, 6 were male. Only one patient had a family history of epilepsy. Regarding personal medical history, the prenatal period was monitored in all but one of the patients. In another patient, the third-trimester ultrasound showed an intracranial mass anterior to the cerebellum and inferior to the thalamus and third ventricle. Two patients needed to be hospitalised upon birth. One had paroxysmal attacks (the patient mentioned above who was diagnosed prenatally by ultrasound); the other required ventriculoperitoneal shunting on the 6th day of life due to hydrocephalus secondary to an arachnoid cyst. From the earliest stages, 4 patients experienced psychomotor retardation. One female patient had a previous diagnosis of neurofibromatosis type 1.
Types of presentation ( Table 1 ) Age at symptom onset was highly variable, ranging from the first days of life in 2 patients in the series, to 6 years in the patient who had a hamartoma that was incidentally discovered while using MRI to assess neurofibromatosis type 1. Age at referral, whether for purposes of beginning the study or for follow-up in the neurology department, also varied (2–14 years). Mean follow-up time in the series was 6 years. Three patients were transitioned from our department after reaching the age of 18. The initial clinical symptoms of patients with hypothalamic hamartoma were as follows: epileptic seizures in 7 patients, precocious puberty in 2, and psychomotor retardation in the last patient. Of the series total, 2 patients did not experience any seizures during the course of the disease. Age at onset of seizures ranged between a few days and 4 years, with a mean age of 10 months. Gelastic seizures were present in all patients with epilepsy, and were the first type of seizure experienced by 5 of the patients. Seizures appeared during the first days of life in 2 patients. One patient presented status gelasticus at the age of 11. Dacrystic seizures also appeared in 2 patients; the initial seizure in 1 of these patients was dacrystic. All patients in the series experienced at least one other type of seizure as well. Simple partial seizures were the most frequent, followed by complex partial and generalised seizures. One of the patients had atonic seizures. Progression ( Table 1 ) Five patients developed precocious puberty during the course of the disease. All but one were male. Among these patients, 2 had growth hormone deficiency and one also suffered from hyperthyroidism. In this series, 6 patients had some type of developmental delay or learning disorder. One patient's condition progressed to pervasive developmental disorder. We performed a neuropsychological assessment of 5 patients.
The intelligence quotient was within the normal range in 2 patients, with borderline intellectual functioning in 1 patient, mild mental retardation in 1 patient and moderate mental retardation in the last patient. Five patients had behavioural disorders: there were 2 cases of attention deficit and/or hyperactivity disorder and 2 cases of aggressive conduct. One patient presented both disorders. Complementary tests ( Table 2 ) Different imaging tests were performed, including transfontanellar ultrasound in 3 patients due to different reasons. Computed tomography (CT) was carried out in only one patient. The presence of hypothalamic hamartoma was confirmed in all patients by using pituitary MRI ( Figs. 1 and 2 ). The tumour sizes measured by MRI ranged from 1.3 cm to 10 cm ( Table 2 ). The 8 patients who experienced seizures during the course of the disease underwent at least one conventional EEG and video-EEG monitoring study during sleep. All these patients showed abnormal results. The table lists the type of seizures recorded for each patient. Treatment ( Table 1 ) We first attempted to control seizures with pharmacological treatment in all patients with epilepsy. All patients required at least 2 different drugs for seizure control. The most frequently used drugs were oxcarbazepine (5), carbamazepine (4), topiramate (4), valproic acid (4), and levetiracetam (3). The patient with status gelasticus needed polytherapy with as many as 4 drugs. Of the 10 patients, 7 underwent surgical removal of the hamartoma. In 6 patients, this was due to poor pharmacological control of the epileptic seizures, and in the remaining patient, due to the tumour increasing in size and exerting pressure on neighbouring structures. We ruled out surgery in 2 patients, since their seizures were controlled adequately by AEDs. The other patient was not referred for surgery as he did not experience seizures and the size of his tumour remained the same. 
The initial technique used in 6 of the 7 patients who underwent surgery was resection. We achieved complete surgical resection in only one of these patients, while the 5 remaining patients underwent subtotal or partial resection. One of these last 5 patients needed 2 additional reoperations, which produced similar results. The seventh patient was selected to undergo gamma-knife surgery. We used this procedure as a second treatment approach for one of the cases with a subtotal surgical resection. As a result of a secondary hydrocephalus, 2 patients needed a ventriculoperitoneal shunt. The mean time between onset of seizures and surgery was 5 years. Control over epileptic seizures was acceptable in 3 of the 8 patients; in one case, this had to do with the hamartoma resection. Discussion Our study is a retrospective review of 10 paediatric patients diagnosed with hypothalamic hamartoma in the last 20 years in a paediatric tertiary referral centre. The literature includes series with varying numbers of patients, but some of these series also include adult patients. 17,27–29 The hypothalamus makes up less than 1% of the total brain volume, but it is a complex structure that includes numerous interconnections with both the cortex and the limbic neural networks. It regulates functions such as sleep, appetite, body temperature, reproduction, and sexual behaviour. It therefore plays an important role in modulating aggressive behaviours and in a wide variety of functions which are necessary for individual survival. 30 Hypothalamic hamartomas are non-neoplastic malformations of grey matter composed of hyperplastic neurons of different sizes. 31 They are usually small lesions, measuring between 0.5 and 2 cm in diameter and located at the base of the brain in the third ventricular floor, near the tuber cinereum and the mammillary bodies. These lesions may grow slowly in the interpeduncular cistern without displacing adjacent structures. 
It can take years before signs of compression appear. We did not find any single reason explaining why 2 patients’ hamartomas, measuring 4 cm and 10 cm respectively, 32 were significantly larger than those described in the literature. However, a delay of approximately 4 years in diagnosing the lesion may explain the size of the largest hamartoma. This disease's overall frequency is low, but the condition becomes serious if accompanied by epilepsy, behavioural and cognitive disorders, and/or precocious puberty (of which it is a rare cause). This is shown in the literature and in our paediatric series. 1,2 There are two types of hypothalamic hamartomas, depending on their radiological imaging classification: pedunculated and sessile. Pedunculated or parahypothalamic hamartomas are connected to the third ventricular floor or suspended from the inferior hypothalamus by a peduncle. They are small or medium-sized, do not displace the hypothalamus, and are usually asymptomatic or provoke precocious puberty as the initial symptom. On the other hand, sessile or intrahypothalamic hamartomas surround and displace structures in the hypothalamus and the third ventricular wall. They are associated with epileptic seizures, cognitive disorders, psychomotor retardation, and psychiatric disorders. All patients in our series underwent a pituitary MRI scan to confirm the presence of a malformation. In 7 patients, this malformation was described as a sessile lesion causing a mass effect and/or displacing neighbouring structures (midbrain, third ventricle, optic tracts, optic chiasm, etc.). 33–37 MRI is much more sensitive than CT as a diagnostic test for this lesion. At times, it can even diagnose hamartoma when symptoms are not yet noticeable or even present, as was the case for patient 7, who was asymptomatic. The MRI scan should include examination of the hypothalamic and infundibular area and mammillary bodies. 
37,38 As mentioned above, epilepsy is one of the markers of this disease, and this is especially true for intrahypothalamic hamartomas. The mechanism of epileptogenesis is found in the microarchitecture of the hamartoma, which consists of small GABAergic neurons generating clusters which fire spontaneously. In our series, 80% of the patients had epilepsy, which was the first manifestation of hypothalamic hamartoma in all but one patient. 3–6,39,40 Gelastic seizures are the most common seizures in patients with hypothalamic hamartoma, especially in childhood, and almost always constitute the first epileptic manifestation. Nevertheless, they are often underdiagnosed, both clinically and using electrical activity testing, since seizures may go undetected or be mistaken for smiling, colic, or sleep disorders, especially in newborn and young unweaned babies. Moreover, many patients experience these seizures during sleep. 12,17,21 This fact was confirmed in at least 2 of our patients who had been diagnosed initially with sleep disorders. All epileptic patients in our series experienced gelastic seizures at some point in the course of the disease. They were the first types of seizure to appear in more than half of the patients. In some descriptions of gelastic seizure cases, the seizures originated in the frontal lobe (cortical dysplasia). Therefore, where gelastic seizures are present, we should be aware that hypothalamic hamartoma is not the only possible aetiology. 41 In our series, the mean age at onset of gelastic seizures was 13 months. In 1 patient, these seizures appeared in the first days of life, and in 2 others, at the age of 2 years; these data coincide with other studies in the literature in which researchers mention this type of seizure in the first days of life. 42–44 In our series, 1 patient experienced status gelasticus (patient 4) and 3 patients experienced dacrystic seizures. These seizures have also been described in the literature. 
8,9,45,46 10,14,47–50 Many other types of seizures may appear in these patients, either at the beginning or during the course of the disease. In our series, we also found an additional type of seizure in 100% of the patients with seizures. Simple partial seizures were the most common, followed by complex partial seizures and generalised seizures (in some cases). Atonic seizures were only present in 1 patient. Progression of gelastic seizures to partial epilepsy usually occurs between the ages of 4 and 10. At times it may be difficult to differentiate between gelastic and partial seizures, as they have similar characteristics (decreased level of consciousness and orofacial automatisms) and both types of seizures may occur concomitantly. 9,12–14 Generalised seizures also occur in patients with hypothalamic hamartoma, including tonic seizures, tonic–clonic seizures, and drop attacks. 15,16,25,51 Some surgical series report a prevalence of generalised epilepsy of about 70%. 15–17,52 Infantile spasms are rare in this disease, and did not appear in our series. However, they have been described in patients experiencing very early onset gelastic seizures during the neonatal period. 15,53,54 In our series, the initial baseline EEG showed abnormal results in all patients. The most common abnormalities were unilateral or bilateral polyspike and polyspike-wave discharges with a focal onset and propagation to the frontotemporal regions. Such findings are common throughout the course of a patient's disease. 21,55,56 Video EEG recorded gelastic seizure episodes in 7 patients and dacrystic seizures in 2 patients. EEG findings in one patient were described as typical of encephalopathy. 15,17,18,21,22,52,57 Only one patient had precocious puberty as the initial symptom, but as time passed, up to 50% of the patients developed that condition. 
This endocrine disorder has been described in several series of patients with hypothalamic hamartoma and gelastic seizures, with a frequency of 30%–40%. Its pathophysiology has not been fully clarified, but it is believed to involve an activation mechanism for the secretion of human luteinising hormone–releasing hormone (LH–RH). 58–60 On the other hand, hamartoma is rarely associated with other endocrine diseases (growth deficit, diabetes insipidus, hypogonadism, etc.), unlike other hypothalamic diseases such as astrocytoma, glioma, and craniopharyngioma, which show a high incidence of endocrine disease. Nevertheless, our series contained one patient of short stature and another with panhypopituitarism. 61 Cognitive impairment (language delay, learning disability) and behavioural disorders (attention deficit hyperactivity disorder or ADHD, aggressiveness, anxiety, oppositional defiant disorder, etc.) are common in patients with epilepsy associated with hypothalamic hamartoma. Cognitive impairment/behavioural disorders have been linked to seizure severity and frequency. Nevertheless, this subject is controversial since some series show cognitive deficits to be present in patients before the onset of seizures. 1,35,62,63 This is demonstrated by patients in our series, as they experienced associated comorbidities (psychomotor/mental retardation in 7 patients, learning disorders in 2, ADHD in 3 and aggressive/oppositional behaviours in 3). Multiple patients presented 2 or more of these problems. 28 As shown by several series of patients with epilepsy associated with hypothalamic hamartoma, gelastic seizures are resistant to AEDs and exert an effect on the cognitive and behavioural disorders present in these cases. This is true even when administering different drugs, high doses, and/or several drugs in polytherapy. We tried at least 2 AEDs with each patient in our series. 
Currently, only 2 patients have attained acceptable seizure control as a result of pharmacotherapy. One is taking oxcarbazepine monotherapy and the other, oxcarbazepine in combination with valproic acid. 20,56,60,64 For these reasons, patients with hypothalamic hamartoma who suffer from pharmacologically intractable epilepsy, progressive cognitive impairment, and/or behavioural problems are usually candidates for surgical treatment. This is their best option for eradicating seizures and improving cognitive and behavioural functions. 24,45,64,65 In any case, we must be mindful of the potential risks of surgery (hypothalamic damage, memory loss, polyphagia or diabetes insipidus, and vascular damage ). Over the past few years, different surgical and non-surgical procedures have been used in order to minimise potential risks. 55,57,28 23 Conventional surgical techniques, such as surgical resection and surgical excision through an inferior approach (craniotomy through transsylvian, subtemporal, or transfrontal approaches) or a superior approach (via transcallosal interfornical approach), deliver successful outcomes for seizure control, but surgical risks are high. Endoscopic disconnection is a safe alternative with good results. 34,56,64,66 In our series, 6 patients underwent conventional surgery via different approaches. The tumour was completely resected in one case without any serious complications. 67,68 Nevertheless, emerging techniques, such as stereotaxic radiosurgery (especially gamma knife surgery), are being promoted as the first line of treatment of the near future. It has been shown that they produce successful outcomes with regard to seizure control and improvement of related disorders, and involve fewer risks than conventional surgery. At present, this technique is used as a second option when tumours persist after the initial conventional surgery. 
The only disadvantage of this procedure is the delay in obtaining therapeutic results; in most patients, seizures begin to subside approximately 6 months after surgery. 26,69,70 In our series, 2 patients underwent surgery with this technique, one as an initial procedure and the other as a second attempt to remove the residual tumour after the initial surgery. We achieved acceptable seizure control in patient 2 only, the patient who underwent an apparently complete hamartoma resection. At present, patient 2 is on valproic acid monotherapy. However, progression of epilepsy in the other patients was not favourable; despite undergoing surgery, they required polytherapy in order to control their seizures. This contrasts with series of surgical patients in the literature which reported better results in the area of seizure control. This difference may be due to the fact that the hamartoma was not completely resected in most of our patients who underwent surgery. 44 In conclusion, we can state that hypothalamic hamartomas in our series have similar epidemiological and clinical characteristics to the hamartomas described in the literature. However, they present more difficulties for seizure control, whether by pharmacological or surgical means. Conflicts of interest The authors have no conflicts of interest to declare. | [
"KERRIGAN",
"CRAIG",
"PRIGATANO",
"FENOGLIO",
"WALDAU",
"BEGGS",
"DALY",
"TELLEZCENTENO",
"STRIANO",
"LOPEZLASO",
"NG",
"PALMINI",
"PUSTORINO",
"NG",
"HARVEY",
"OEHL",
"LEAL",
"BERKOVIC",
"CASTRO",
"MAIXNER",
"MULLATTI",
"STRIANO",
"CASCINO",
"NG",
"FRAIZER",
"SHIM"... |
6bb7496ea753457e9aec18207f4ffbe0_Demographics and the perception of psoriasis therapy adverse effects and treatment preference A cros_10.1016_j.jdin.2020.05.001.xml | Demographics and the perception of psoriasis therapy adverse effects and treatment preference: A cross-sectional survey of a convenience sample of people with psoriasis | [
"Bray, Jeremy K.",
"Feldman, Steven R."
] | null | To the Editor: Despite advancements in psoriasis therapy, undertreatment remains an issue. Misconceptions might explain poor treatment adherence and perceptions of treatments may vary in different subpopulations. 1 We assessed the association between demographics and people's perception of psoriasis therapy adverse effects and treatment preference. 2 After institutional review board approval, a survey was completed by 298 subjects older than 18 years with self-reported psoriasis. Subjects were recruited online from a broad, diverse population via MTurk ( Table I ). Subjects selected the drug class (topical, injectable, oral, or phototherapy) with the most severe adverse effects, ranked the most likely adverse effects of injectables, and identified the drug class they most preferred. Age, sex, race/ethnicity, education, treatment status, and diagnosis duration were assessed via χ2 analysis. 3 Age, race/ethnicity, and education were not associated with participants' perception of adverse effects and treatment preference. Sex and treatment status were significant factors in predicting the selection of injection site reactions as the most likely adverse effect of injectables, as well as the selection of injectables as the most preferred drug class ( Table II ). Diagnosis duration was a significant factor in predicting the selection of injectables as the drug class with the most severe adverse effects. Injection site reactions (22%), nausea, vomiting, and diarrhea (18%), and sun sensitivity (16%) were the most commonly selected adverse effects of injectables. More women (73%) chose injection site reactions as the most common adverse effect versus men (55%; P < .01). More men (30%) selected injectables as their most preferred drug class versus women (18%; P = .01). More individuals who were not currently being treated (73%) chose injection site reactions versus those receiving treatment (62%; P = .04). 
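The χ2 comparisons reported here can be sketched as follows. This is a minimal illustration, not the study's actual analysis: the 2×2 counts below are hypothetical, loosely reconstructed from the reported percentages (73% of women vs. 55% of men choosing injection site reactions) and the overall sample of 298, since the true contingency table is not published in this letter.

```python
# Hedged sketch: Pearson's chi-squared statistic (no continuity
# correction) for a 2x2 contingency table [[a, b], [c, d]].
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical counts: rows = women / men,
# columns = chose injection site reactions (yes / no).
stat = chi_square_2x2(110, 40, 82, 66)
print(round(stat, 2))  # well above the df=1 critical value of 3.84
```

With one degree of freedom, a statistic above 3.84 corresponds to P < .05, consistent in spirit with the P < .01 reported for the sex comparison.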
More individuals who were currently being treated (30%) selected injectables as their most preferred drug class versus those not receiving treatment (12%; P < .01). More individuals with a psoriasis diagnosis of greater than 5 years' duration (66%) selected injectables as the drug class with the most severe adverse effects versus those with one of less than 5 years' duration (51%; P = .02). Women and those not currently being treated for psoriasis appear less likely to prefer injectables (versus men and those receiving treatment), likely in part because of concern about injection site reactions. A longer history of psoriasis may be associated with preconceived misconceptions, which might explain why subjects with a longer history were more likely to consider injectables to have the most severe adverse effects. Fear or misunderstanding of adverse effects might prevent people from initiating biologics and may lead to undertreatment. Documentation of diagnosis by a dermatologist, body surface area involvement, type of therapy used by participants, and reasoning behind responses were not reported. Subject-reported preference may not correlate with actual medication-receiving behavior. However, the study still provides information on how demographics are associated with people's perception of psoriasis therapy, which may help identify individuals at risk of undertreatment. In all demographic subgroups, there is a wide range of perceptions and preferences, such that although some subgroups may tend to fear certain adverse effects or prefer particular drug classes, physicians may want to prepare to address various concerns and preferences for all patients. Alleviating concerns of adverse effects and broadening preference for various treatment options may be associated with an increased willingness to try new or different forms of therapy, which may help prevent undertreatment. | [
"ARMSTRONG",
"BRAY",
"BUHRMESTER"
] |
35edc09834234aedb657265b376b3c39_Correlation between PD-L1 expression clones 28-8 and SP263 and histopathology in lung adenocarcinoma_10.1016_j.heliyon.2020.e04117.xml | Correlation between PD-L1 expression (clones 28-8 and SP263) and histopathology in lung adenocarcinoma | [
"García, Alejandro",
"Recondo, Gonzalo",
"Greco, Martín",
"de la Vega, Máximo",
"Perazzo, Florencia",
"Recondo, Gonzalo",
"Avagnina, Alejandra",
"Denninghoff, Valeria"
] | Lung cancer is the leading cause of cancer-related death worldwide. Recent advances in the management of non-small cell carcinoma are focused on the discovery of targeted therapies and novel immunotherapy strategies for patients with advanced disease. Treatment with anti PD-(L)1 immune checkpoint inhibitors requires the development of predictive biomarkers to select those patients that can most benefit from these therapies. Several immunohistochemical biomarkers have been developed in different technological platforms. However, the most useful and accessible for the daily clinical practice need to be selected. The objective of this study was to compare PD-L1 expression by automated immunohistochemistry in lung adenocarcinoma (ADC) FFPE samples with clones 28-8 and SP263 performed with the BenchMark GX automated staining instrument. To further determine interobserver agreement between two pathologists, and to correlate the results with histologic and pathology variables. FFPE tissue from 40 samples obtained from patients with lung ADC were reviewed retrospectively. Among all studied specimens, 53% of samples presented <1% of positive tumor cells with the 28-8 clone and 50% had <1% of PD-L1 expression in tumor cells with the SP263 clone; PD-L1 expression between ≥1 and <5% was observed in 18% and 24%; ≥5 and <50% PD-L1 expression in 18% and 21%; and ≥50% PD-L1 expression in 11% and 5% of samples, respectively. Similar results between antibodies were observed in 84% of cases for each of the four PD-L1 cutoff groups (Pearson's score 0.90, p < 0.00001). The interobserver degree of agreement calculated with Kappa was 0.75 (95%CI: 0.57–0.93), z = 7.08; p < 0.001. Lepidic, acinar and mucinous patterns had predominantly <1% PD-L1 expression, and the solid pattern subtype had high levels of PD-L1 staining using both clones. PD-L1 expression in less than 1% of tumor cells was similar in stages I/II compared to III/IV. 
No significant differences were observed in PD-L1 staining and quantification pattern between IHC antibodies 28-8 and SP263. | 1 Introduction Lung cancer is the leading cause of cancer-related death worldwide [ 1 ]. In Argentina, 10,296 new cases are diagnosed and 9,254 people die every year [ 2 ]. Non-small cell lung carcinomas (NSCLC) are divided into 3 main categories: adenocarcinoma (ADC), squamous cell carcinoma (SCC) and large cell carcinoma [ 3 ]. NSCLC is characterized by the presence of genetically distinct and dynamic subpopulations within the same tumor, which can have an impact on treatment outcomes. To select patients for targeted therapies like kinase inhibitors, we need to test for driver alterations involving EGFR , ALK , ROS1 , and BRAF as standard practice for patients with advanced tumors [ 4 , 5 , 6 , 7 ]. The discovery of immune-checkpoint inhibitor blockade of CTLA4 and the PD-(L)1 axis has enabled novel treatments in a wide range of tumor types. Immune surveillance is essential to prevent the development of cancer and is associated with the expression of neo-antigens by tumor cells as a result of somatic mutations in genes and viral antigen presentation [ 7 , 8 , 9 ]. The use of immunohistochemical analysis for the determination of PD-L1 has been proposed as a prognostic and predictive biomarker for anti-PD-1 and anti-PD-L1 monoclonal antibodies in the clinical scenario of advanced NSCLC. The Food and Drug Administration (FDA) requires the development of diagnostic tests, either as “companion” tests, which are compulsory for prescribing the drug, or “complementary” tests, which are recommended (e.g., the PD-L1 28-8 antibody [Abcam] using the DAKO detection system).
There are several anti PD-L1 antibodies in practice, which are being developed as biomarker tests including: 22C3 (Dako’ Platform), 28-8 (pharm Dx, Dako's Platform), SP142 (Spring Bioscience, Ventana's Platform), E1L3N and E1J2J (Cell Signaling Technologies, Ventana's Platform), SP263 (Ventana's Platform), 7G11 (Boston University), EPR1161-2 (Epitomics-Abcam); etc [ 10 ]. Available companion diagnostic tests use specific assays with different clones, staining protocols, automated platforms, scoring interpretation and target cells (tumor and/or immune cells). In addition, different PD-L1 cutoffs are being selected for anti PD-(L)1 treatment in the first or second line therapy, and PD-L1 expression is a dynamic marker subject to temporospatial heterogeneity. Given the diversity of testing platforms, worldwide efforts are made to “harmonize” PD-L1 testing to facilitate clinical decision-making. Thus, the National Cancer Institute in France developed a national validation study with different antibodies and platforms searching for technical equivalences [ 11 ]; the International Pulmonary Pathology Society [ 12 ]; the Colonia Score in Germany [ 13 ]; the Blueprint PD-L1 Assay Comparison Project [ 14 , 15 ] and the Harmonization study in Israel [ 16 ]. The objective of this study was to compare PD-L1 expression by automated immunohistochemistry in lung adenocarcinoma (ADC) FFPE samples in our country with anti PD-L1 clones 28-8 and SP263 performed with the BenchMark GX automated staining platform. Interobserver agreement between two observers was analyzed and results were correlated with pathological data. 2 Materials and methods We retrospectively studied forty non-matched biopsies from patients with lung ADC, fixed in 10% buffered formalin, paraffin embedded, and then cut into sections of 4 μm. 
These samples underwent immunohistochemistry testing using PD-L1 rabbit monoclonal antibody, clones 28-8 (Abcam, Cambridge, UK) and SP263 (Ventana Medical Systems Inc, Tucson, USA). Immunohistochemical staining was performed with BenchMark GX immunoautomate (Ventana Medical Systems Inc, Tucson, USA), OptiView DAB IHC Detection Kit and OptiView Amplification Kit (Ventana Medical Systems Inc, Tucson, USA). Staining was evaluated by two pathologists with expertise in thoracic pathology, IHC and PD-L1 assessment. Both pathologists blinded to clinical data scored the proportion of PD-L1 in tumor cells for each biopsy independently. For tumor cells, the proportion of PD-L1 positive cells was estimated as the percentage of PD-L1 positive tumor cells over the total tumor cells. Although ADC is a heterogeneous tumor type and several histological patterns may coexist in the same sample, PD-L1 staining was evaluated in the whole slide, irrespective of cell type. Interobserver agreement between two observers was evaluated using the Kappa test in each of the four groups where the results were divided based on cutoffs from recently published studies (<1%, ≥1 to <5%, ≥5 to <50% and ≥50%) [ 17 , 18 ]. Alpha significance level was p = 0.05. No binary limit was applied. The study protocol was approved by the Institutional Ethics Committee. This study was performed in compliance with the good clinical practice (GCP), as defined by the International Conference on Harmonization (ICH). This protocol fully complies with the International Declaration on human genetic data, approved unanimously and by acclamation, by the UNESCO General Conference 32nd session, October 16, 2003. All data obtained were handled with absolute confidentiality according to national legislation (Ley de protección de Datos Personales, Habeas Data), and could only be accessed by the researchers involved in the study or the members of the institutional Ethics Committee. 
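The methods above bin each case's PD-L1 tumor proportion score (TPS) into four cutoff groups and then measure interobserver agreement with the Kappa test. A minimal sketch of that procedure, assuming hypothetical per-case TPS readings (the study's actual per-case data are not reproduced here) and using unweighted Cohen's kappa:

```python
from collections import Counter

CUTOFFS = ["<1", "1-<5", "5-<50", ">=50"]

def bin_tps(tps):
    """Assign a PD-L1 tumor proportion score (%) to its cutoff group."""
    if tps < 1:
        return "<1"
    if tps < 5:
        return "1-<5"
    if tps < 50:
        return "5-<50"
    return ">=50"

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length category lists."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    expected = sum(count_a[c] * count_b[c] for c in CUTOFFS) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical TPS readings (%) for 10 cases, one list per pathologist.
rater_1 = [bin_tps(t) for t in [0, 0, 0, 2, 3, 10, 20, 60, 80, 0]]
rater_2 = [bin_tps(t) for t in [0, 0, 1, 2, 3, 10, 45, 60, 80, 0]]
print(round(cohens_kappa(rater_1, rater_2), 3))  # → 0.865
```

Kappa corrects the raw agreement rate for the agreement expected by chance from each rater's marginal category frequencies, which is why it is preferred over simple percent agreement for this kind of categorical scoring.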
3 Results A total of 40 patients were included, 18 males and 22 females, with a mean age of 65 years (range: 31–84). Twenty-three were surgically resected specimens and 17 were biopsy specimens (endoscopic, core and fine needle aspiration). Cancer stage at diagnosis was: Ia 17%, Ib 25%, IIa 8%, IIIb 3%, and IV 47%. Regarding PD-L1 staining, 53% were classified as having <1% of PD-L1 positive tumor cells using the 28-8 clone, and 50% of the samples had <1% of PD-L1 positive tumor cells with the SP263 clone. Using the 28-8 clone, 18% of samples were classified as having ≥1% to <5% PD-L1 positive tumor cells and 24% of samples were classified in this cutoff using SP263 antibody. Similarly, 18% of samples were scored as having ≥5% to <50% PD-L1 positive tumor cells using the 28-8 clone, and 21% using the SP263 clone. In addition, 11% and 5% of samples had a ≥50% PD-L1 expression in tumor cells using the 28-8 and SP263 clones, respectively. Matching results were observed in 84% of cases in all four categories, showing a high level of correlation between assays (Pearson's score 0.90, p < 0.00001). Overall, 47% of lung ADC samples were PD-L1 positive (≥1%) with 28–8 antibody, and 50% with SP263 antibody. Using a cutoff of >5% PD-L1 positive tumor cells, 29% were positive using the 28-8 antibody and 26% of samples using SP263 antibody. The association between PDL1 and histological pattern is shown in Tables 1 and 2 . Lepidic, acinar and mucinous patterns predominantly showed low PD-L1 expression (PD-L1 TPS <1%); however, the solid pattern had high levels of PD-L1 staining with both clones. Table 3 showed the relationship between PD-L1 results and clinical stage. Interobserver degree of agreement calculated with Kappa was 0.75 (95%CI: 0.57–0.93), z = 7.08; p < 0.001. 4 Discussion The prescription of different anti PD-(L)1 drugs for the same disease depends on IHC PD-L1 testing with a specific antibody and platform. 
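The between-assay concordance reported above (84% matching cases across the four cutoff groups, Pearson's score 0.90) can be sketched by coding the cutoff groups ordinally (0 = <1%, 1 = ≥1–<5%, 2 = ≥5–<50%, 3 = ≥50%) and computing Pearson's r on the paired codes. The codes below are illustrative, not the study's per-case results:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical ordinal cutoff codes for 10 cases, one list per clone.
clone_28_8 = [0, 0, 0, 1, 1, 2, 2, 3, 3, 0]
clone_sp263 = [0, 0, 1, 1, 1, 2, 2, 3, 3, 0]
print(round(pearson_r(clone_28_8, clone_sp263), 2))
```

Pearson's r on ordinal codes is only a rough stand-in for category-level concordance; the study additionally reports the raw proportion of cases assigned to the same cutoff group by both antibodies.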
Pathologists have to face the challenge of working with different antibody clones, staining protocols, platforms, scoring systems and cutoffs [ 19 ]. Other issues with an impact on PD-L1 assessment include: tumor heterogeneity, dynamic nature of PD-L1 expression, which varies between anatomical sites, time of biopsy, type of treatments, epitopes with high sensitivity to fixation and composition of tumor microenvironment [ 20 , 21 ]. In the initial findings from the Blueprint Programmed Death Ligand 1 Immunohistochemistry Assay Comparison Project, three experts, independently, evaluated the percentages of tumor and immune cells staining positive in 39 NSCLCs using 22C3, 28-8, SP142 and SP263. This comparison revealed a similar percentage of stained tumor cells between 22C3, 28-8 and SP263 assays, whereas the SP142 assay exhibited overall lower PD-L1 staining [ 14 ]. Subsequently, the Blueprint 2 project corroborated these findings with a larger number of cases and observers. However, most specimens were obtained surgically, rather than by percutaneous biopsies, which are the most frequent type of diagnostic specimens in advanced NSCLC [ 15 ]. A collaborative study performed in Israel by Neuman et al. assessed the harmonization for the use of 22C3 clone on Ventana's platform. Neither in Argentina, nor in Israel is the Dako platform and/or the In Vitro Diagnostic (IVD) kit easily available. Therefore, Neuman et al. performed a comprehensive staining calibration on the BenchMark XT platform using Dako's prediluted 22C3 anti-PD-L1 primary antibody with two Ventana detection systems. Forty-one NSCLC cases were independently evaluated by two pathologists, proving that the same PD-L1 IHC algorithm can be reliably applied to Ventana's BenchMark XT platform, and that all of the strongly positive cases had high interobserver and intraobserver agreement [ 16 ]. Adam et al. 
further showed that laboratory-developed tests (LDTs) have various levels of agreement when compared with three commercial assays. Those using the SP263 clone had the greatest agreement across all platforms, whereas some LDTs with 28–8, 22C3, and E1L3N showed good correlation with the three commercial assays for tumoral cells only [ 22 ]. Assessment of the tumor cell (TC) score in NSCLC was highly reproducible using the SP263 assay, showing the accuracy of this assay in patient selection for anti-PD-1/PD-L1 therapy. The overall diagnostic sensitivity and specificity analyses indicated that the relative analytical sensitivities of the Food and Drug Administration-approved kits for tumor cell scoring, most specifically in non-small cell lung cancer, were as follows: Ventana PD-L1 (SP142) had the lowest sensitivity/specificity, followed by PD-L1 IHC 22C3 pharmDx, PD-L1 IHC 28-8 pharmDx and the PD-L1 SP263 Ventana assay with the highest score [ 23 ]. In our study there was a concordance between the 28-8 and SP263 PD-L1 clones in 84% of the cases (Pearson's score 0.90, p < 0.00001). However, as Williams et al., we adopted the VENTANA PD-L1 (SP263) Assay in our clinical practice as a reliable and reproducible assay [ 24 ]. In our experience, especially in solid pattern tumors with abundant immune cells, 28-8 stains both populations too intensely, thus making quantification difficult. On the other hand, the SP263 assay stains sections delicately and allows more reliable identification of tumor cells. Nevertheless, our interobserver degree of agreement calculated with Kappa was 0.75 (95%CI: 0.57–0.93), z = 7.08; p < 0.001. Both clones adequately reflected PD-L1 expression across histological patterns. Regarding the relationship between PD-L1 and histological pattern, it can be hypothesized that poorly differentiated ADC (solid pattern) could present a higher tumor mutational burden (TMB), which could result in enhanced immunogenic tumors. 
On the other hand, well differentiated tumors (lepidic, acinar, etc.) usually have fewer genetic alterations and consequently lower levels of neoantigen presentation. However, TMB and PD-L1 tumor expression correlate poorly and are considered independent biomarkers of treatment response [ 26 ]. In this study TMB assessment was not performed on tumor samples. Comparison of surgical (complete tumors) and biopsy specimens reveals that the focal solid pattern seen in surgical specimens fails to be seen in biopsy specimens. Since half of the cases with <1% stained cells were small samples (biopsies), these results should be interpreted as a consequence of the heterogeneous staining phenomenon. Ilie et al. reported on the possible difference in PD-L1 expression when comparing whole surgical tissue sections and matched lung biopsies using SP142. They found that PD-L1 expression was frequently discordant between both types of specimens (overall discordance rate = 48%, 95% confidence interval 4.64–13.24 and Kappa = 0.218) [ 25 ]. In all cases, biopsy specimens underestimated the PD-L1 status observed in the whole tissue sample. Their findings would indicate a poor association of PD-L1 expression in tumor and immune cells between lung biopsies and corresponding resected tumors. Moreover, the daily routine evaluation of PD-L1 expression in diagnostic biopsies can be misleading in defining treatment with PD-(L)1 inhibitors [ 26 ]. PD-L1 expression <1% had a similar distribution between stages I/II and stages III/IV. However, expression ≥1% to <50% occurred in 85% of stage I/II cases, whereas expression ≥50% occurred in 75% of stage III/IV cases. These results would reveal a trend that needs further confirmation with a larger number of samples. This is a real-life study in a developing country [ 22 ]. Therefore, PD-L1 assay selection is mainly based on the availability of both the platform and trained pathologists. 
Approaches to harmonizing testing methods are therefore crucial in ensuring appropriate treatment selection for our patients. Our study has several limitations. Firstly, the number of studied samples is rather low. However, this is the largest study presented in our region. Secondly, only two PD-L1 testing platforms were evaluated, and therefore the results of this study cannot be extrapolated to other PD-L1 antibodies (22C3, SP142). Thirdly, since this study was not performed in other subtypes of non-small cell lung cancers, these results should not be extrapolated to squamous-cell carcinomas. In conclusion, immunostaining with anti-PD-L1 clones 28-8 and SP263 has high levels of correlation, in concordance with other studies. This correlation is maintained across different histological subtypes and clinical stages; however, PD-L1 staining could be underestimated in small samples. PD-L1 testing needs to be cost-effective and developed with a holistic approach to be applied in multiple indications to meet patients' needs. However, it must be interpreted in the context of other tumor and patient immunologic factors with an impact on the response and prognosis with immunotherapy, such as tumor mutation burden, microsatellite instability, neoantigens, gene signatures and intratumoral inflammation [ 27 ]. Declarations Author contribution statement A. García and V. Denninghoff: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. G. Recondo, G. Recondo, M. Greco, M. de la Vega, F. Perazzo and A. Avagnina: Performed the experiments; Analyzed and interpreted the data. Funding statement This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Competing interest statement The authors declare no conflict of interest. Additional information No additional information is available for this paper. 
Acknowledgements The authors thank Flavia Cerrutti and Juan Gili for their contribution to this work; and Valeria Melia for proofreading of the manuscript. | [
"JEMAL",
"MINSAL",
"TRAVIS",
"KULESZA",
"PIRKER",
"CAMIDGE",
"FORDE",
"COOPER",
"BOUSSIOTIS",
"PARRA",
"ADAM",
"SHOLL",
"SCHEEL",
"HIRSCH",
"TSAO",
"NEUMAN",
"HERBST",
"ALI",
"KERR",
"SOO",
"TOPALIAN",
"IONESCU",
"TORLAKOVIC",
"WILLIAMS",
"ILIE",
"MCLAUGHLIN",
"FR... |
6a2536a71e64470f8b14771c09eed059_Analysis of risk factors for post-thrombotic syndrome after thrombolysis therapy for acute deep veno_10.1016_j.ijcrp.2024.200319.xml | Analysis of risk factors for post-thrombotic syndrome after thrombolysis therapy for acute deep venous thrombosis of lower extremities | [
"Zheng, Yi",
"Cao, Chunli",
"Chen, Gang",
"Li, Siming",
"Ye, Maolin",
"Deng, Liang",
"Li, Qiyi"
] | Objective
The purpose of the research is to explore the risk factors of post-thrombotic syndrome (PTS) after catheter-directed thrombolysis (CDT) treatment for acute lower extremity deep vein thrombosis (DVT).
Methods
We retrospectively selected 171 patients with acute lower extremity DVT undergoing CDT treatment, collected the patients' clinical data, and grouped them according to follow-up results 1 year after treatment: patients who developed PTS were included in the concurrent group, and patients who did not develop PTS were assigned to the non-concurrent group. Univariate analysis and Logistic regression were applied to analyze the risk factors of PTS after catheter-directed thrombolytic therapy for acute lower extremity DVT. We applied R 4.2.3 software to build three hybrid machine-learning models (a nomogram, a decision tree, and a random forest) with the independent influencing factors as predictive variables.
Results
The incidence of PTS after CDT in acute lower extremity DVT was 36.84 %. BMI >24.33 kg/m2, disease time >7 d, mixed DVT, varicose vein history, stress treatment time >6.5 months, and filter category were independent risk factors for PTS after CDT treatment for acute lower extremity DVT. The AUC value predicted by the random forest model was higher than that of the nomogram model (Z = -2.337, P = 0.019) and the decision tree model (Z = -2.995, P = 0.003).
Conclusion
The occurrence of PTS after CDT treatment of acute lower extremity DVT is closely related to many factors, and the established random forest model performed best in predicting the occurrence of PTS. | 1 Introduction Lower extremity deep vein thrombosis (DVT) is a disease of venous return disorders [ 1 ]. If treatment is delayed or fails, the thrombus will fall off, potentially triggering a pulmonary embolism, which can be fatal [ 2 ]. Thrombolysis is effective in the acute stage of lower extremity DVT, which can clear the thrombus early, maintain valve function, and reduce the occurrence of post-thrombotic syndrome (PTS) [ 3 ]. The ninth edition of the American College of Chest Physicians (ACCP) Guidelines [ 4 ] recommends catheter-directed thrombolysis (CDT) as the preferred option for acute central or mixed deep vein thrombosis. CDT has the advantage of complete thrombus removal. However, clinical practice has found that PTS still occurs in some patients with acute lower extremity DVT despite thrombolysis and anticoagulation therapy [ 5 ]. The main clinical manifestations of PTS are varicose veins, oedema, and sebum sclerosis of the lower extremities, which can form ulcers in severe cases, seriously affecting patients' quality of life and disease outcome, and cannot be ignored [ 6 ]. PTS usually involves segmental vascular disease, and local stenosis and occlusion are very serious. Studies have pointed out that several surgical treatments give good results, but there are problems with high long-term blockage rates and poor long-term treatment effects. Once DVT develops into PTS, its treatment options are relatively limited. Therefore, the most effective response is to reduce the incidence of PTS at the root [ 7 ]. At present, the mechanism of occurrence of PTS is unknown. Identifying the risk factors of this complication, identifying high-risk groups accordingly, and implementing intervention are the keys to reducing the occurrence of PTS. 
The factors influencing the occurrence and development of PTS are multi-faceted and still in the exploration stage. Previous studies have failed to determine the factors affecting acute lower extremity DVT complicated with PTS, and most of the studies focused on lower extremity DVT in general, ignoring the possible role of the acute stage and the multi-factor status of CDT treatment in the formation of PTS. Based on this, the present study determined the risk factors of PTS through statistical analysis of the baseline data of patients with acute lower extremity DVT undergoing CDT treatment and compared the prediction efficiency of different algorithm models (nomogram model, decision tree model, and random forest model) based on the influencing factors, to provide references for reducing the incidence of PTS and promoting its prevention and management. 2 Materials and methods 2.1 Subjects The research team adopted a retrospective analysis method to select 171 patients with acute lower extremity DVT who underwent CDT treatment in Beiliu People's Hospital and Guigang People's Hospital from January 2022 to December 2023. These patients met the following conditions: (1) patients with varying degrees of pain and swelling in their lower extremities meeting the diagnostic criteria for acute lower extremity DVT in the Guidelines for Diagnosis and Treatment of Deep Vein Thrombosis [ 8 ], with the diagnosis confirmed by vascular ultrasound (96 cases) or CT examination (75 cases); (2) duration of onset ≤14 days; (3) all patients had indications for thrombolytic therapy and were treated with CDT; (4) over 18 years of age. We excluded patients with the following conditions: (1) life expectancy <3 months or death during treatment; (2) bilateral acute lower extremity DVT; (3) the presence of blood system diseases such as iron deficiency anaemia and acute myeloid leukaemia; (4) lack of clinical data. 
2.2 Clinical data collection Patient clinical data were collected through the hospital Electronic Medical Records, including (1) baseline data: gender, age, Body Mass Index (BMI), disease time, diabetes mellitus, hypertension, malignancy, smoking history, recent surgical history, DVT history, DVT classification, affected limb, iliac vein thrombosis, calf intermuscular thrombosis, varicose vein history; (2) blood and coagulation indexes before CDT treatment: platelets, high-sensitivity C-reactive protein (hs-CRP), activated partial thromboplastin time (APTT), prothrombin time (PT), fibrinogen degradation product (FDP), plasma viscosity; (3) CDT treatment indexes: thrombolytic time, operation time, urokinase dosage, stress treatment time, anticoagulation program, catheterization approach, filter category, and thrombus clearance grade. 2.3 Definition and evaluation criteria of relevant indicators BMI measures body fatness and overall health status. Its calculation formula is BMI = weight (kg) ÷ height (m) squared [ 9 ]. Smoking history refers to whether a person has an experience or habit of smoking. The DVT classification includes mixed type and central type. Lower extremity vein color ultrasound and deep venography showed that iliofemoral vein thrombosis was the central type, and whole deep vein thrombosis was the mixed type. After collecting 3 mL of fasting venous blood from the patient in the morning, the laboratory doctor took the supernatant after centrifugation. They determined the contents of APTT, PT, and FDP and the platelet count with an automatic coagulation instrument (H1204, Hongen Medical Equipment Co., Ltd.). They detected plasma viscosity with an automatic blood viscosity instrument. Pressure treatment time refers to the time of pressure treatment with elastic bandages or elastic stockings after CDT treatment. The doctor evaluated the thrombus clearance grade by lower limb colour Doppler ultrasound. 
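The BMI formula above, together with the study's 24.33 kg/m2 cutoff (reported later in the multivariate analysis), can be sketched as follows. The function names are illustrative, not from the paper:

```python
def bmi(weight_kg, height_m):
    """BMI = weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Best BMI cutoff for PTS risk reported by the study's ROC analysis.
PTS_BMI_CUTOFF = 24.33

def bmi_flags_pts_risk(weight_kg, height_m):
    """True when BMI exceeds the study's reported risk cutoff."""
    return bmi(weight_kg, height_m) > PTS_BMI_CUTOFF
```

For example, a 75 kg patient of height 1.70 m has a BMI of about 25.95 kg/m2, above the reported cutoff.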
Grade I: The patients still had symptoms, such as pain and swelling of the affected limb, and the thrombus clearance rate was less than 50 %. Grade II: The symptoms of the affected limb basically disappeared, and the thrombus clearance rate was 50%–95 %. Grade III: The symptoms of the affected limb disappeared, and the thrombus clearance rate was greater than 95 %. 2.4 CDT treatment After completing the coagulation routine and lower extremity venous ultrasonography, the doctor performed inferior vena cava filter implantation and deep venous catheter-directed thrombolysis on the patient under local infiltration anesthesia. (1) They guided the patient to lie supine on the operating table. After successful local anesthesia, they punctured the common femoral vein on the healthy side and inserted a guide wire and catheter. Under the guidance of digital subtraction angiography (DSA), the doctor located the lower renal vein, transported the head end of the inferior vena cava filter to the level of the lower renal vein, and released it. (2) According to the scope of lesions evaluated by preoperative colour ultrasound, they determined the path of thrombolysis. The doctor guided the patients with popliteal vein thrombosis into the supine position and catheterized them through the common femoral channel. Meanwhile, those without thrombus in the popliteal vein were placed in the prone position and then catheterized through the popliteal channel. For patients with thrombus intrusion into the distal popliteal vein, the thrombolysis catheter was turned over to the opposite limb through the healthy common femoral vein to bury the head end in the thrombus. 
For patients whose thrombus did not invade the popliteal vein, the operator punctured the popliteal vein under the guidance of colour ultrasound in the prone position, placed the vascular sheath, and advanced the thrombolysis catheter into the deep vein of the lower limb to bury the head end in the thrombus. (3) The doctor fixed the vascular sheath and thrombolysis catheter. (4) They continuously pumped urokinase through the thrombolytic catheter, and the patient was given a subcutaneous injection of low molecular weight heparin for anticoagulation. (5) They monitored the coagulation indexes of patients during treatment and performed regular venography to determine the thrombolytic effect. When the ideal thrombolysis condition was achieved, the doctor withdrew the thrombolysis catheter and vascular sheath and applied a sterile dressing to compress and bandage the puncture site. The ideal thrombolysis condition means that venography shows that the thrombus has completely dissolved without residual thrombus. (6) After discharge, patients were treated with rivaroxaban or warfarin sodium tablets for anticoagulation for at least three months and were treated with elastic bandages or elastic stockings under pressure. 2.5 PTS criteria, follow-up results, and grouping PTS was evaluated according to the Villalta score scale [ 10 ]. A score of 0–4 was considered PTS-free, and a score of 5–33 confirmed the occurrence of PTS (5–14 mild; 15–33 severe or accompanied by ulcers). According to the results of follow-up one year after treatment, patients with PTS were included in the concurrent group and patients without PTS in the non-concurrent group. 2.6 Statistical methods We applied SPSS 23.0 software for statistical analysis. We expressed categorical data as rates (%) and compared them with the χ2 test, and expressed measurement data conforming to a normal distribution as mean and standard deviation (Mean ± SD) and compared them with the t-test. 
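The Villalta-based grouping rule described in section 2.5 (0–4 no PTS; 5–14 mild; 15–33 severe) can be sketched as a small classifier; the function name is illustrative and ulcer-based upgrading is not modeled here:

```python
def villalta_group(score):
    """Grouping used in the study: 0-4 no PTS; 5-14 mild PTS; 15-33 severe PTS."""
    if not 0 <= score <= 33:
        raise ValueError("Villalta score must be in the range 0-33")
    if score <= 4:
        return "no PTS"
    return "mild PTS" if score <= 14 else "severe PTS"
```

A rule like this is how the follow-up scores would be mapped onto the concurrent (PTS) and non-concurrent (no PTS) groups.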
We applied multivariate Logistic regression analysis to identify the risk factors of PTS after CDT treatment for acute lower extremity DVT. We analyzed correlations by Pearson (normal data) or Spearman (non-normal or rank data) correlation. |r| > 0.8 indicates high correlation among variables; 0.5 < |r| ≤ 0.8 indicates moderate correlation among variables; 0.3 < |r| ≤ 0.5 indicates low correlation among variables; |r| ≤ 0.3 indicates no linear correlation among variables. The standard of statistical difference was P < 0.05. With the independent influencing factors as predictors, we used the “gbm”, “randomForest”, “e1071”, “neuralnet”, and “rpart” packages of R 4.2.3 software and functions such as “gbm”, “randomForest”, “svm”, “neuralnet”, and “rpart” to construct the nomogram, decision tree, and random forest machine-learning models. We calculated each model's accuracy, sensitivity, specificity, precision, recall rate, and F1 value. The higher the value, the more accurate the model prediction was. We drew the ROC curve to analyze the predictive ability of the models for the risk of concurrent PTS in acute lower extremity DVT patients. The difference in the predicted area under the curve (AUC) values between the models was tested by the Z test. The significance criterion was P < 0.05. 3 Results 3.1 Univariate analysis of PTS after CDT treatment for acute lower extremity DVT There were 63 patients with PTS (concurrent group) and 108 patients without PTS (non-concurrent group) after CDT treatment, and the incidence of PTS was 36.84 %. Compared with the non-concurrent group, the concurrent group had a higher BMI and a shorter stress treatment time. The proportions of onset time >7 days, mixed DVT, history of varicose veins, permanent filter, and thrombus clearance grade I/II were higher in the concurrent group ( P < 0.05) ( Table 1 ). 
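The |r| thresholds stated in the statistical methods map directly onto a small classification function; this sketch (function name invented) reproduces those bands exactly, so the study's reported values (e.g. r = 0.338 for BMI, r = -0.516 for stress treatment time) fall into "low" and "moderate" respectively:

```python
def correlation_strength(r):
    """Classify |r| per the thresholds stated in the statistical methods:
    |r| > 0.8 high; 0.5 < |r| <= 0.8 moderate; 0.3 < |r| <= 0.5 low;
    |r| <= 0.3 no linear correlation."""
    a = abs(r)
    if a > 0.8:
        return "high"
    if a > 0.5:
        return "moderate"
    if a > 0.3:
        return "low"
    return "no linear correlation"
```

Boundary values are handled by the strict inequalities: |r| exactly 0.5 is "low" and exactly 0.8 is "moderate", matching the "≤" side of each stated band.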
3.2 Multivariate logistic regression analysis of PTS after CDT treatment for acute lower extremity DVT We took PTS after CDT treatment for acute lower extremity DVT as the dependent variable (1 = PTS, 0 = no PTS) and the indexes with P < 0.05 in Table 1 as the independent variables (the assignment of each index in the model is shown in Table 2 ), and incorporated them into the multivariate Logistic regression model for analysis. The results showed that BMI, disease time, DVT classification, varicose vein history, stress treatment time, and filter category were independent influencing factors for PTS after CDT treatment for acute lower extremity DVT (all P < 0.05) ( Table 3 ). The Nagelkerke R2 of the goodness-of-fit test of the model was 0.770, indicating that the model has a powerful explanatory ability for the dependent variable. ROC curve analysis suggested that the above independent influencing factors had a good predictive ability for patients with PTS, and the best cutoff values for BMI and stress treatment time were 24.33 kg/m 2 and 6.5 months, respectively ( Fig. 1 , Table 4 ). 3.3 Correlation between influencing factors and Villalta score BMI and filter category had a low positive correlation with the Villalta score (r = 0.338, 0.312, P < 0.05), while stress treatment time had a moderate negative correlation with the Villalta score (r = −0.516, P < 0.05), as shown in Fig. 2 . 3.4 Construction of prediction model We used the independent influence indicators (BMI, onset time, DVT classification, varicose vein history, stress treatment time, and filter type) as predictors. Then, we randomly divided the sample data into a training set and a validation set at a ratio of 7:3. We applied the training set data to construct three hybrid machine learning models, namely, the nomogram ( Fig. 3 ), decision tree ( Fig. 4 ) and random forest ( Fig. 5 ), and used the validation set data to test the prediction effect of the models ( Fig. 6 ). 
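The model-construction workflow above (7:3 split, three classifiers, validation-set AUC) was done in R with packages such as "randomForest" and "rpart"; an analogous sketch with scikit-learn is shown below. The data are synthetic stand-ins for the six predictors, and the coefficients, seeds, and AUC values are illustrative only, not the study's results:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 171                                    # same cohort size as the study
X = rng.normal(size=(n, 6))                # six standardized predictors
coef = np.array([0.9, 0.7, 0.6, 0.5, -0.8, 0.6])   # invented effect sizes
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ coef)))).astype(int)

# 7:3 split into training and validation sets, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "nomogram (logistic)": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=3, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
# Validation-set AUC for each fitted model.
aucs = {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
        for name, m in models.items()}
```

A logistic regression stands in for the nomogram here because a nomogram is a graphical rendering of a fitted regression; comparing the three AUC values mirrors the Z-test comparison the paper performs between models.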
In the training set, the AUC value predicted by the random forest model was higher than that of the nomogram model (Z = -2.337, P = 0.019) and the decision tree model (Z = -2.995, P = 0.003), and the accuracy, sensitivity, recall rate and F1 value predicted by the random forest model were the highest ( Fig. 7 A). In the validation set, the random forest model predicted the highest AUC value (0.928). There was no statistical difference compared with the nomogram model (Z = 0.190, P = 0.849) and the decision tree model (Z = -0.791, P = 0.429), as shown in Table 5 and Fig. 7 B. 4 Discussion According to relevant statistics, 20%–50 % of patients with proximal DVT develop PTS of different degrees, and 5%–10 % of them are accompanied by a chronic venous ulcer, which is more serious [ 11 ]. The pathogenesis of PTS is unknown, its treatment is limited, and the optimization of DVT treatment is still an urgent problem to solve. To reduce the occurrence of PTS after CDT treatment for acute lower extremity DVT, the prevention of PTS is the key. According to the statistics of this study, the incidence of PTS after CDT treatment for acute lower extremity DVT was 36.84 %, higher than the 16 % reported by Nakamura et al. [ 12 ]; this may be related to the different periods of DVT studied, suggesting that DVT in the acute stage may lead to more PTS, although more research evidence is needed to prove it. We explored the risk factors of PTS after CDT treatment for acute lower limb DVT from three perspectives: patients' baseline data, acute stage factors, and DVT treatment indexes. The results showed that BMI >24.33 kg/m 2 , time of onset >7 days, mixed DVT, history of varicose veins, time of pressure treatment >6.5 months, and permanent filter were independent risk factors for CDT complicated with PTS in acute lower extremity DVT. Siddiqui et al. 
[ 13 ] showed that BMI greater than 35 kg/m2 was closely related to PTS and was a significant risk factor for PTS development in patients with primary DVT. Our statistical analysis showed that the BMI cutoff value was lower (24.33 kg/m 2 ), suggesting that PTS may be more common in acute lower extremity DVT than in non-acute DVT when BMI is abnormal. Abdominal circumference is generally larger in obese patients, and abdominal circumference is one of the objective indicators reflecting abdominal pressure [ 14 ]. The higher the abdominal pressure, the greater the pressure on the inferior vena cava, and the easier it is to obstruct lower limb blood return [ 15 ]. Warming et al. [ 16 ] showed that high intra-abdominal pressure is closely related to non-fatal pulmonary embolism. In addition, obese patients often lack exercise, and their lower leg muscle pump function is used less frequently, increasing the chance of the formation and development of PTS. The greater the BMI value, the less smooth the lower limb blood return, and the greater the risk of PTS after surgery. Clinically, patients with high BMI should be monitored vigilantly, and weight loss interventions (such as controlling diet and strengthening exercise) should be given according to the patient's situation. Compared with the subacute stage, DVT in the acute stage is more likely to be complicated with PTS [ 17 ]. The process of thrombosis formation is dynamic and complex. The longer the course of thrombosis, the more severe the thrombus organization. At the same time, many wall-adherent thrombi combine with the vascular wall, which damages blood vessels and restricts the activity of venous valves, thus aggravating valvular insufficiency and venous vascular malformation, resulting in an increased risk of PTS [ 18 ]. In mixed DVT, the higher the position of the thrombus, the greater the obstruction to lower limb blood return. 
Deep vein CDT can directly pump high-concentration thrombolytic drugs into the venous thrombus so that the thrombus dissolves in a short time, relieving the venous lumen obstruction. The femoral vein approach on the healthy side or the popliteal vein approach on the affected side can effectively dissolve venous thrombosis above the popliteal vein. However, the direct effect is weak for thrombus far from the popliteal vein. The risk of PTS in mixed DVT is higher than that in central DVT, suggesting that vascular surgeons should pay attention to the treatment of thrombus far from the popliteal vein when performing CDT treatment in mixed DVT. Besides the popliteal vein approach, the tibial vein, small saphenous vein, and other routes can be considered [ 19 ]. The state of venous blood reflux induced by varicose veins of the lower limbs may weaken the squeezing function of the muscle pump, affect the blood return of the lower limbs, increase venous pressure in the limbs, aggravate the symptoms of limb oedema and skin pigmentation, and increase the risk of PTS. Mean et al. [ 20 ] found that previous varicose vein surgery was a predictor of PTS within 24 months after DVT, similar to the results of this study. Previous studies have shown that combined pressure therapy after thrombolysis can promote postoperative recovery of patients and reduce oedema [ 21 ]. Long-term pressure treatment promotes venous drainage, improves blood circulation in the limb, and improves microcirculation. Early hemodynamic recovery helps improve prognosis [ 22 ]. Therefore, it is suggested that patients should be encouraged to carry out pressure therapy after CDT treatment. Our study also confirmed that permanent filters increase the risk of PTS. The longer the placement time of the filter, the higher the probability of long-term complications such as prefilter thrombosis and inferior vena cava obstruction [ 23 ]. Chow et al. 
[ 24 ] showed that permanent filters led to a significantly higher incidence of PTS, which was consistent with the results of this study. Therefore, we recommend that vascular surgeons prefer temporary retrievable filters for inferior vena cava filters placed before CDT therapy to reduce the incidence of PTS. In this study, we analyzed the prediction efficiency of each influence index for PTS. We found that the combined prediction efficiency of the influence indexes for PTS (AUC: 0.962) was higher than that of any single index. These results indicate that these indexes can predict CDT complicated with PTS in acute lower extremity DVT and guide clinical intervention to a certain extent. After that, we further analyzed the correlation between the influencing factors and the Villalta score. The final data showed that BMI and filter category were positively correlated with the Villalta score, while the time of stress treatment was negatively correlated with the Villalta score. From this, we know that changes in BMI, filter type, and stress treatment time can affect the occurrence of PTS and have a linear relationship with the Villalta score. This study confirmed that CDT complicated with PTS in acute lower extremity DVT is related to a variety of factors, involving the onset time of acute lower extremity DVT and the relevant indicators of CDT treatment, which is of great significance for effectively identifying high-risk patients at an early clinical stage. Machine learning has become a prominent topic in medical research in recent years. It uses computers to learn from research data and statistical information and is an important tool in data mining. Data-mining technology can support accurate clinical prediction and decision-making and guide precise, personalized diagnosis and treatment. We used the six significant indicators as predictors to construct three hybrid machine-learning models, including the nomogram, decision tree, and random forest. 
Practical application verified that the random forest model remained more efficient than the nomogram and the decision tree in identifying PTS occurring after CDT treatment for acute lower extremity DVT, and its prediction effect was consistent with its performance in the training set. The AUC, accuracy, sensitivity, recall rate, and F1 values of the random forest model were higher than those of the nomogram and decision tree models. The prediction efficiency of the decision tree model decreased in the validation set, indicating that the decision tree model suffered from overfitting; insufficient training set data may have led to the poor generalization of the model. However, there was no significant difference in the AUC values of the three models in the validation set, indicating that the substantial differences in predictive ability between the models were only reflected in the training set data of this study, whereas the non-significant differences in predictive performance may become more evident in practical applications. This suggests that clinicians can use the random forest, nomogram, and decision tree models to complement one another as needed in actual application. The advantage of this study is that it retrospectively analyzed the clinical information of acute lower extremity DVT patients treated with CDT, which utilizes fewer resources in a short time, covers a wide range of cases, and provides practical clinical information. We took the occurrence of PTS as the clinical outcome, mined the differences in various study parameters among patients with different outcomes, and finally highlighted the impact factors of PTS and established a prediction model based on them. However, there are some limitations in this study. 
We only analyzed compression with elastic bandages or elastic stockings and did not subdivide the types of auxiliary pressure therapy, which may ignore the influence of pressure treatment methods on the risk of PTS. It is necessary to include these factors in future exploration. In addition, this study is a single-centre retrospective study with a small sample size, in which bias is difficult to avoid, so further verification with a large sample size and prospective randomized controlled studies is needed in the future. In summary, BMI, onset time, DVT classification, varicose vein history, pressure treatment time, and filter type are closely related to the occurrence of PTS after CDT treatment of acute lower extremity DVT. For patients with acute lower extremity DVT treated with CDT, clinicians should be alert to patients with abnormal indicators to reduce the risk of PTS and improve the prognosis of patients. The machine learning models constructed in this study have good predictive performance. The random forest, the nomogram, and the decision tree model can complement each other and have clinical reference value. Funding This study was supported by a self-financing science and technology project of Guangxi ( Z-K20231791 ). Ethics approval This study was approved by the Medical Ethics Committee of Beiliu People's Hospital and Guigang People's Hospital. CRediT authorship contribution statement Yi Zheng: Writing – original draft, Software, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Chunli Cao: Software, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Gang Chen: Validation, Software, Methodology, Formal analysis. Siming Li: Validation, Software, Methodology, Formal analysis. Maolin Ye: Validation, Software, Methodology, Data curation. Liang Deng: Software, Methodology. Qiyi Li: Writing – review & editing, Supervision, Resources, Project administration, Funding acquisition. 
Declaration of competing interest The authors declare that no conflict of interest is associated with this work. | [
"KIM",
"BASINDWAH",
"VEDANTHAM",
"GUYATT",
"KAHN",
"ENGESETH",
"GOLDHABER",
"ROGNONI",
"VANHAUTE",
"NING",
"MOUAWAD",
"NAKAMURA",
"SIDDIQUI",
"TIWARI",
"XU",
"WARMING",
"BIKDELI",
"JOHNSON",
"LIU",
"MEAN",
"AROKIARAJ",
"GUAN",
"ZHANG",
"CHOW"
] |
02bd4d49cd18476e971e5ed6d43c3a1b_TMS augmentation strategies reveal gender differences_10.1016_j.brs.2022.07.043.xml | TMS augmentation strategies reveal gender differences | [
"Kinback, Kevin M.",
"Johnson, Sean",
"Nguyen, Anh"
] | null | Background: Other studies showed differing results on the impact of gender in Transcranial Magnetic Stimulation (TMS) outcomes using standard major depression protocols. In our search for more effective augmentation strategies to improve TMS outcomes, we examined gender effects. Naturalistic, retrospective data analysis helped determine whether gender differences corresponded with iTBS augmentation treatment outcomes. Methods: All patients received 20 treatments of deep TMS using the H1 coil (dTMS) at 120% motor threshold (MT), frequency 18 Hz, using 55 trains. Patients lacking 50% improvement from baseline Hamilton Depression Scale (HAMD) scores by treatment 20 (Non-responders) received 10 additional trains, augmented with intermittent theta burst (iTBS). Gender differences were measured using the HAMD and the Beck Depression Inventory (BDI) in patients completing at least 30 treatments (n=47). Results: Overall HAMD and BDI response rates were 66% and 49%, with remission rates of 55% and 34%, respectively. Separation by gender (30 females, 17 males) revealed significantly better HAMD female response rates (77% vs. males 47%) and female remission rates (67% vs. males 35%). Data also showed significant gender differences in BDI remission rates (females 47% vs. males 12%). Conclusions: For Nonresponders at treatment 20, significantly better response and remission rates in women may suggest that augmenting with iTBS is more effective in women than men. Larger, prospective studies would help confirm the effects of gender differences on TMS treatment outcomes. Conflicts of Interests: None Funding: None Disclosures: Dr. Kinback is the Owner and Medical Director of Advanced TMS Center, a private group psychiatric practice in Ladera Ranch, CA. He is a board member, past Treasurer, and Fellow of the Clinical TMS Society. He has received stipends in the past for speaking from the Clinical TMS Society. | [] |
3534f0dc58da430f9cc3f059485ce169_Fusarium mycotoxin-contaminated wheat containing deoxynivalenol alters the gene expression in the li_10.1017_S1751731111001601.xml |
Fusarium mycotoxin-contaminated wheat containing deoxynivalenol alters the gene expression in the liver and the jejunum of broilers | [
"Dietrich, B.",
"Neuenschwander, S.",
"Bucher, B.",
"Wenk, C."
] | The effects of mycotoxins in the production of animal feed were investigated using broiler chickens. For the feeding trial, naturally Fusarium mycotoxin-contaminated wheat was used, which mainly contained deoxynivalenol (DON). The main effects of DON are a reduction in feed intake and reduced weight gain in broilers. At the molecular level, DON binds to the 60S ribosomal subunit and subsequently inhibits protein synthesis at the translational level. However, little is known about other effects of DON, for example, at the transcriptional level. Therefore, a microarray analysis was performed, which allows the investigation of thousands of transcripts in one experiment. In the experiment, 20 broilers were separated into four groups of five broilers each at day 1 after hatching. The diets consisted of a control diet and three diets with calculated, moderate concentrations of 1.0, 2.5 and 5.0 mg DON/kg feed, which was attained by exchanging uncontaminated wheat with naturally mycotoxin-contaminated wheat up to the intended DON concentration. The broilers were held under standard conditions for 23 days. Three microarrays were used per group to determine the significant alterations of the gene expression in the liver (P < 0.05), and qPCR was performed on the liver and the jejunum to verify the results. No significant difference in BW, feed intake or feed conversion rate was observed. The nutrient uptake into the hepatic and jejunal cells seemed to be influenced by two genes: SLC2A5 (fc: −1.54, DON2.5), which facilitates glucose and fructose transport, and SLC7A10 (fc: +1.49, DON5), a transporter of d-serine and other neutral amino acids. In the jejunum, the palmitate transport might be altered by SLC27A4 (fc: −1.87, DON5) and monocarboxylate uptake by SLC16A1 (fc: −1.47, DON5). The alterations of the SLC gene expression may explain the reduced weight gain of broilers chronically exposed to DON-contaminated wheat. 
The decreased expression of EIF2AK3 (fc: −1.29, DON2.5/5) and DNAJC3 (fc: −1.44, DON2.5) appears to be related to translation inhibition. The binding of DON to the 60S ribosomal subunit and the subsequent translation inhibition might be counterbalanced by the downregulation of EIF2AK3 and DNAJC3. The genes PARP1, MPG, EME1, XPAC, RIF1 and CHAF1B are mainly related to single-strand DNA modifications and showed an increased expression in the group with 5 mg DON/kg feed. The results indicate that significantly altered gene expression was already occurring at 2.5 mg DON/kg feed. | null | [
"ABRAHAM",
"AESCHBACHER",
"AWAD",
"AWAD",
"AZCONAOLIVERA",
"BONDY",
"BOSSERHOFF",
"CHEN",
"DEWALLE",
"DIESING",
"FRANKIC",
"GALE",
"HARHAJ",
"JIN",
"KANAI",
"KAWATA",
"LAMBERT",
"LIU",
"LUN",
"MARESCA",
"MARZOCCO",
"MELLO",
"MILLER",
"MUKHERJEE",
"NAKAUCHI",
"OBRIEN... |
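The signed fold-change notation in the abstract above (e.g. "fc: −1.54" for downregulation) can be sketched as follows, assuming the common convention that treated/control expression ratios below 1 are reported as negative reciprocals; the convention and the example values are illustrative, not taken from the paper's methods:

```python
# Sketch of the signed fold-change convention (assumption: fc is derived
# from the treated/control expression ratio; ratios >= 1 are reported
# as-is, ratios < 1 as the negative reciprocal).

def signed_fold_change(treated: float, control: float) -> float:
    ratio = treated / control
    return ratio if ratio >= 1.0 else -1.0 / ratio

# A gene at 65% of control expression is reported as roughly -1.54-fold:
fc_down = signed_fold_change(0.65, 1.0)
# A gene at 1.5x control expression is reported as +1.5-fold:
fc_up = signed_fold_change(3.0, 2.0)
```

This convention keeps up- and downregulation symmetric in magnitude: a halving and a doubling both appear as 2-fold changes, differing only in sign.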
a46514b0a50e43359d65e272e804814f_Advanced machine learning for real-time tibial bone force monitoring in runners using wearable senso_10.1016_j.measen.2024.101058.xml | Advanced machine learning for real-time tibial bone force monitoring in runners using wearable sensors | [
"Ambala, Srinivas",
"Agarkar, Aarti Amod",
"Raskar, Punam Sunil",
"Gundu, Venkateswarlu",
"Mageswari, N.",
"Geetha, T.S."
] | This study explores the innovative integration of machine learning (ML) with wearable sensor technologies for the real-time monitoring of tibial bone force in runners. Utilizing three distinct approaches—a linear regression model based on the Vertical Average Loading Rate (VALR), a physics-based method, and a sophisticated ML algorithm—this research was conducted with 10 participants equipped with wearable sensors. The results revealed the ML model's superior performance over both the physics-based and VALR techniques, achieving Mean Absolute Percentage Errors (MAPEs) of 6.7 percent and 11.3 percent respectively. This was accomplished through extensive training across various datasets. Additionally, the ML approach demonstrated remarkable reliability in 25 different running session simulations, underscoring its effectiveness in complex musculoskeletal analytics. These findings not only highlight the potential of ML in enhancing wearable technology for biomechanical data analysis but also emphasize the necessity for further comprehensive studies in this field. | 1 Introduction The explosion of wearable health technologies has created new opportunities for real-time biomechanical monitoring and analysis. However, there is still a significant gap in how well these sensors can predict internal musculoskeletal stresses, especially during dynamic exercises like running. This restriction is more evident in non-lab settings when various variables might affect measurements. By combining the power of machine learning with cutting-edge wearable sensor technology, our study intends to close this gap by providing accurate, real-time monitoring of musculoskeletal activities, with a particular emphasis on the calculation of tibial bone force during running. Understanding and tracking human musculoskeletal activity have undergone a revolutionary change because of machine learning (ML) integration with wearable sensor technologies. 
Using information from wearable inertial measurement units (IMUs) carefully positioned on the limbs and trunk, this research explores creating an advanced machine-learning model to identify complicated human motions automatically. Sports performance, clinical evaluation, and general activity monitoring are just a few areas where the categorization of human motions has a significant impact. Activity recognition utilizing wearable sensors has shown encouraging results in prior studies. The possibility of employing accelerometers for activity categorization was shown by Bao and Intille [ 1 ], who created a system for activity identification using user-annotated acceleration data. Bulling, Blanke, and Schiele [ 2 ] offered a thorough tutorial on body-worn inertial sensor-based human activity detection, emphasizing the significance of sensor positioning and data preparation methods. Chen et al. [ 3 ] developed a knowledge-driven method for activity identification, focusing on integrating domain knowledge and machine learning algorithms in the context of smart homes. The use of mobile phone accelerometers for activity identification was investigated by Kwapisz, Weiss, and Moore [ 4 ], demonstrating the possibility of ubiquitous sensing for activity tracking. Machine learning techniques are often used to categorize human physical activities. Mannini and Sabatini emphasized the significance of feature extraction and selection in their 2010 assessment of machine learning methods for categorising human physical activity using on-body accelerometers. They showed how well machine learning systems could reliably identify various activities. For long-term ambulatory monitoring of human movement, wearable sensors, such as accelerometers, have grown in popularity (Mathie, Coster, Lovell, & Celler, 2004). 
A study of classification methods for body-mounted sensor activity detection by Preece, Goulermas, Kenney, Howard, and Meijer (2009) shed light on the difficulties and prospects in this area. Due to its mobility, real-time sensing capabilities, flexibility, and reduced electronic waste and environmental effect, flexible and degradable pressure sensors have drawn much interest for possible usage in transitory electronic skins, flexible displays, and intelligent robots [ 5 ]. A non-invasive and immediate way of detecting cardiovascular disease, the world's leading cause of mortality, is the continuous monitoring of radial artery pressure [ 6 ]. The human cardiovascular system may be monitored for health and illness via real-time, continuous pulse signal monitoring followed by precise and efficient analysis [ 7 ]. In situ, sweat measurements that are low-cost, real-time, and possible with wearable sensors provide enormous potential for health status assessment analysis using individualized big data [ 8 ]. Real-time sweat biomarker monitoring by wearable ion sensors has the potential to substantially influence the development of customized healthcare (Shitanda, 2023). Real-time tracking of aberrant levels for early identification and long-term continuous observations is the pinnacle of physiological health monitoring [ 9 ]. The importance of various chronic and acute illnesses is rising, and the medical sector is undergoing a significant transformation owing to the need for point-of-care (POC) diagnostics and real-time monitoring of long-term health issues, particularly as the world's old population rises (Guk et al., 2019). IoT, AI, and machine learning all have a part to play in managing musculoskeletal pain since they may help diagnose and treat musculoskeletal pain [ 10 ]. 
Real-time applications for skin-attachable active-matrix tactile sensors include feeling an item, determining the size, shape, and orientation of an external object, picking up different things with a robot hand, and health monitoring, such as blood pressure and pulse readings [ 11 ]. Paper-based wearable electrochemical sensors can quickly and thoroughly assess the wearer's overall physiological state by combining on-body measurements with multiplexed biomarker detection [ 12 ]. Wearable sensors may quantify the shoulder load in wheelchair-related everyday activities using machine learning-based approaches in [ 13 ]. Real-time monitoring of bodily activity is possible because high-performance stretchable electromechanical sensors can convert stress and strain into a quantifiable electrical output [ 14 ]. Based on questionnaire surveys and data analysis, machine learning algorithms may forecast work-related risk factors among bus drivers [ 15 ]. In sports like tennis, wearable technology with real-time motion analysis capabilities may be utilized to avoid injuries [ 16 ]. On quantitative MRI, deep learning techniques have shown encouraging findings for disease diagnosis, with diagnostic performance better than traditional machine-learning techniques for diagnosing knee osteoarthritis [ 17 ]. Based on a scoping assessment of the literature, machine learning applications in imaging musculoskeletal malignancies have been discovered [ 18 ]. By providing real-time biomarker monitoring, wearable chemical sensors have the potential to fundamentally alter how our health and well-being are assessed [ 19 ]. Piezoresistive pressure sensors that are wearable, washable, and based on a 3D sponge network can detect various human and animal actions in real time, ranging from the powerful pressure of joint activity to the delicate pulse pressure [ 20 , 21 ]. 
For the categorization and preservation of biological images, machine learning and deep learning techniques may be applied (et al., 2022). Collagen fiber-based flexible wearable pressure sensors provide great sensitivity, quick reaction times, long-term stability, and remarkable repeatability [ 22 ]. Highly oriented carbon nanotube film-based breathing sensors may enable real-time health monitoring for early illness identification and affordable treatment [ 23 ]. Wearable haptics with flexible hybrid sensor systems with feedback capabilities might be used in virtual reality (Xu et al., 2020). Microelectromechanical technology-based wearable biofeedback sensors may deliver real-time biofeedback and encourage active posture modification [ 24 ]. Hip or knee osteoarthritis patients may be distinguished from asymptomatic controls with excellent accuracy using joint kinematics alone [ 25 ]. Managing the request and communicating the findings in musculoskeletal imaging are two aspects of the radiologist's workflow where artificial intelligence may have benefits [ 26 ]. Wearable fiber sensors make highly sensitive human micromotion tracking with long-term stability possible [ 27 ]. Flexible pressure/strain sensors made of carbon nanotubes have a wide range of applications, including artificial electronic skin, plant health monitoring, and monitoring of vital signs [ 28 , 29 ]. Effective manual text categorization may be achieved using discrete machine learning models [ 30 ]. Using 18FFDG PET-CT, decision trees with only two musculoskeletal locations may be utilized to identify polymyalgia rheumatica [ 31 ]. Acoustic and capacitive sensing are possible using transparent, flexible vibration sensors based on a hybrid thin membrane as a wheel [ 32 ]. Patients may benefit from a stroke rehabilitation system powered by the Internet of Things that uses intelligent wearable armbands and machine learning [ 33 ]. 
Gait phase detection may be performed using flexible insole sensors and securely linked electrodes [ 34 ]. Using physiological recordings from wearable sensors, wrapper feature selection algorithms may be employed for emotional evaluation [ 35 ]. Clinical use of machine learning techniques may be enhanced by clinician-tailored visual presentations of outcomes from black-box algorithms [ 36 ]. Wearable optical sensors may leverage strain-ultrasensitive surface wrinkles [ 28 , 29 ]. Infant sleep position sensors that newborns wear may monitor their postures in real-time and automatically inform carers when necessary (Yun et al., 2019). By detecting factors produced by human breathing, heartbeat, and movement, skin-like and stretchy optical fiber sensors with hybrid coding of wavelength-light intensity can thoroughly analyze human health [ 20 , 21 ]. Real-time gait pattern analysis may be done using intelligent insoles made of soft materials [ 37 ]. By giving the patient and the treating physician additional real-time information, wearable technologies might decrease the number of clinic visits by actively including the patient in evaluation and therapy [ 38–43 ]. 2 Proposed method based on ML The creation of a machine learning (ML) algorithm capable of precisely calculating tibial bone force in real-time while running, utilizing information from wearable sensors, served as the foundation of this study. This section explores the technique used for this task, emphasizing the method for gathering data, feature extraction, model choice, and validation strategy. Fig. 1 depicts the recommended model for participants wearing wearable sensors, which comprises runners wearing IMUs and pressure-sensing insoles. Normalizing raw sensor data and resolving anomalies are all parts of data gathering and preprocessing. Feature extraction is analyzing raw data to extract pertinent properties for the ML model. 
Compiling information into a dataset that can be used to train and test the model is known as dataset preparation. Model selection: evaluating and selecting the most effective ML model (e.g., kNN, SVM). Utilizing the available dataset to train the selected model is known as model training. Parameter tuning is the process of modifying model parameters to get the best performance. Validation is the internal assessment of the model's performance and correction (k-fold cross-validation). External validation and a review of the model's generalizability are two types of testing (leave-one-subject-out). Benchmarking: comparing the ML model's performance against older, more established methods. Real-time estimation of tibial force: analyzing and forecasting tibial forces during practice runs using the trained model. Collecting and preparing data: Ten runners participated in the study, providing a variety of data points for various running scenarios (speeds and slopes). Embedded sensors: To collect biomechanical data, each participant wore pressure-sensing insoles and inertial measurement units (IMUs) on the foot and shank. These sensors were positioned with care to capture the movements' dynamics correctly. Data collection: The sensors continuously recorded accelerations and angular velocities during each session. Various preparation techniques, including normalization and handling of any missing or aberrant data, ensured the data's consistency and integrity. Extraction of significant features: From the raw sensor data gathered, significant characteristics required for the ML model training were extracted. These included the timing of the running motion, force application patterns, stride characteristics, and other factors. High-dimensional sensor data was transformed into a format that machine learning applications can understand using advanced signal processing techniques. 
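The preprocessing and feature-extraction steps described above can be sketched as follows; the z-score normalisation, the outlier-clipping threshold, and the feature list are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

# Minimal sketch of IMU-channel preprocessing: z-score normalisation plus
# clipping of aberrant spikes. Thresholds are illustrative assumptions.

def preprocess(raw: np.ndarray, clip_sigma: float = 4.0) -> np.ndarray:
    """Normalise a 1-D sensor channel to zero mean / unit variance,
    then clamp outliers beyond +-clip_sigma standard deviations."""
    z = (raw - raw.mean()) / raw.std()
    return np.clip(z, -clip_sigma, clip_sigma)

def basic_features(z: np.ndarray) -> dict:
    """A few illustrative stride-level features of a normalised channel."""
    return {
        "rms": float(np.sqrt(np.mean(z ** 2))),          # overall energy
        "peak_to_peak": float(z.max() - z.min()),        # impact range
        "zero_crossings": int(np.sum(np.diff(np.sign(z)) != 0)),  # cadence proxy
    }

# Synthetic accelerometer channel: periodic stride signal plus sensor noise.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.standard_normal(1000)
feats = basic_features(preprocess(signal))
```

In a full pipeline these scalar features would be computed per stride window and stacked into the training matrix described in the next step.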
Model Selection and Training: Several machine learning (ML) models, such as Support Vector Machines (SVM) with different kernel functions and k-nearest Neighbors (kNN), were evaluated for their performance in this application. The chosen model must demonstrate generalizability across various running circumstances and individuals in addition to accuracy. A large dataset with various running speeds, slopes, and personal biomechanics was employed for model training to provide a trustworthy learning process. Algorithm Validation and Testing: A rigorous validation method utilizing k-fold cross-validation within the training dataset was employed to verify the model's dependability and resistance to overfitting. The model's performance on untested data and its ability to generalize across new users were evaluated using a “leave-one-subject-out” testing technique. Performance Benchmarking: The ML algorithm's performance was compared to that of more established methods: the physics-based strategy and a typical single-variable linear regression model based on VALR. The accuracy of the force estimations produced by each method was assessed using the Mean Absolute Percentage Error (MAPE) as the main criterion. A comparison of predicted bone damage across algorithms is shown in Table 1 (“Comparison of Predicted Bone Damage During Simulated 10-Km Runs”); the table's primary goal is to convey findings on the anticipated bone damage. Fig. 2 shows the values predicted by each approach; “Across Algorithms” stresses that multiple algorithms or procedures were used to generate the results. These algorithms use VALR, physics-based, and machine learning techniques in the context of the given graph. This comparison enables an assessment of the effectiveness and precision of each algorithm in predicting bone deterioration. 
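The evaluation protocol above (MAPE as the error metric plus leave-one-subject-out splitting) can be sketched as follows; the 1-nearest-neighbour regressor stands in for the paper's kNN/SVM candidates, and all data below are synthetic:

```python
import numpy as np

# Sketch of MAPE + leave-one-subject-out (LOSO) evaluation. The 1-NN
# regressor and the synthetic data are stand-ins, not the paper's model.

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def knn1_predict(X_train, y_train, X_test):
    """Predict each test row with the label of its nearest training row."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[d.argmin(axis=1)]

def leave_one_subject_out(X, y, subjects):
    """Hold out each subject in turn; return per-subject MAPE of 1-NN."""
    errors = {}
    for s in np.unique(subjects):
        test = subjects == s
        pred = knn1_predict(X[~test], y[~test], X[test])
        errors[s] = float(mape(y[test], pred))
    return errors

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))                            # stride features
y = 2.0 * X[:, 0] + 5.0 + 0.1 * rng.normal(size=60)     # synthetic "tibial force"
subjects = np.repeat(np.arange(6), 10)                  # 6 subjects x 10 strides
loso_errors = leave_one_subject_out(X, y, subjects)
```

LOSO is stricter than plain k-fold here because no strides from the held-out runner ever appear in training, which is what probes generalization to new users.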
This gives the setting or circumstance in which these predictions were produced, such as during simulated 10-km runs. It informs the reader that the projections are a hypothetical scenario in which a 10-km run occurs. This is important because an algorithm's performance or predictions may change depending on the circumstances in which it is used. Algorithm AnalyzeTibialForce (Image 1). Table 2 provides a summarized view of the tibial force exerted at different time intervals. While there are fluctuations, there's no clear increasing or decreasing trend in tibial force over time. This might suggest that the runner maintained a relatively consistent biomechanical posture and pace throughout the run, with no prolonged periods of acceleration or deceleration. Fig. 3 shows tibial force during a 10-km run and highlights the biomechanical stresses, notably those on the tibia during a moderate-distance run. The creation of efficient training programs and the identification of injury risks depend on research of this nature. Specifications of the Graph: Consistent Fluctuations: The recurrent spikes visible throughout the graph probably represent individual steps or strides. Each peak marks the exact moment of foot contact when the force acting on the tibia is most significant. Although the force is generally consistent, variances may occur owing to changes in the runner's speed, the terrain, or even their degree of exhaustion. Regions with slight variation may represent steady-state running, in which the runner keeps up a nearly constant speed and force application. Data relevance: Injury prevention: To assess the stress on the lower leg, it is essential to comprehend tibial force. High forces, particularly in long-distance runners, may contribute to ailments like shin splints or stress fractures. 
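Reading step counts off a force trace like the one described for Fig. 3 reduces to counting supra-threshold local maxima (each spike is one foot contact). A minimal sketch, with a synthetic trace and an illustrative threshold:

```python
import numpy as np

# Sketch of foot-strike counting on a tibial-force trace: each local
# maximum above a threshold is taken as one step. The synthetic trace
# and the threshold value are illustrative assumptions.

def count_foot_strikes(force: np.ndarray, threshold: float) -> int:
    """Count strict local maxima of the force trace exceeding `threshold`."""
    interior = force[1:-1]
    is_peak = (interior > force[:-2]) & (interior > force[2:])
    return int(np.sum(is_peak & (interior > threshold)))

# Synthetic trace: 20 strides over 20 s, force peaking near 2.5 body weights,
# with zero force during the swing phase (half of each stride cycle).
t = np.linspace(0, 20, 2000)
force = 2.5 * np.maximum(np.sin(2 * np.pi * t), 0.0)
steps = count_foot_strikes(force, threshold=1.0)
```

On real data a smoothing filter and a minimum inter-peak distance would be added so that sensor noise near a peak is not double-counted.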
Performance Optimization: Using this information, athletes and coaches may modify their running mechanics, gear, and training schedules to reduce unnecessary forces and improve performance. 3 Conclusion In conclusion, machine learning can completely transform wearable technology in the field of musculoskeletal analytics. This study has shown that an ML algorithm can outperform conventional techniques in precisely calculating tibial bone stress during dynamic activities like running when properly trained and validated. The MAPE findings show a considerable increase in accuracy that is not only important statistically but also from the perspective of health and rehabilitation. The accuracy of the ML model is crucial because even small errors in force prediction may result in significant errors in evaluations of bone injury. These results provide a robust platform for future work, but the constant development of wearable technology and machine-learning models emphasizes the need for further study. Increased participant diversity, improved algorithms, and testing of real-world applicability across a broader range of activities should all be prioritized. Although the road to the ideal fusion of ML and wearables is still young, the potential is a source of hope for athletes, healthcare professionals, and everyday people. Ethical approval Not applicable. Consent to participate Not applicable. Consent for publication The authors provide consent for publication in this journal. Funding This study did not receive any funding in any form. Author contributions Dr. Srinivas Ambala, the corresponding author, played a pivotal role in conceptualizing the study, developing the machine learning framework, leading the project, and revising the manuscript. Aarti Amod Agarkar focused on wearable sensor technology implementation, data acquisition, and analysis, and assisted in drafting and revising the manuscript. Dr. 
Punam Sunil Raskar contributed to the development of machine learning algorithms, provided biomechanical expertise, and was involved in manuscript preparation. Venkateswarlu Gundu assisted in software development for data processing, experimental setup, and focused on the manuscript's technical aspects. Dr. N. Mageswari provided input in sensor technology integration with machine learning and contributed to the technology-focused sections of the manuscript. Dr. T. S. Geetha offered substantial contributions in study design, data interpretation, and manuscript revision. All authors have read and approved the final manuscript. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | [
"BAO",
"BULLING",
"CHEN",
"KWAPISZ",
"GUO",
"YANG",
"SUN",
"HENG",
"GAO",
"HASAN",
"PARK",
"DEROCO",
"AMREIN",
"DINH",
"HANUMEGOWDA",
"KRAMBERGER",
"LIU",
"HINTERWIMMER",
"MAHATO",
"LI",
"LI",
"PENG",
"NGUYEN",
"KUO",
"EMMERZAAL",
"HIRSCHMANN",
"KE",
"MA",
"MA... |
2446ada1d74243bdb0be7eecd22724fc_Synthesis of functionalized tetrahydro-dispiropyrazolone 4 2-pyran-5 4-pyrazolone scaffolds via an O_10.1016_j.rechem.2025.102604.xml | Synthesis of functionalized tetrahydro-dispiro[pyrazolone 4, 2′-pyran-5′, 4″-pyrazolone] scaffolds via an Oxa-Michael-initiated cascade [4 + 2] annulation reaction using pyrazoledione-derived MBH-alcohols | [
"Zhou, Xiaoming",
"Du, Juan",
"Yuan, JiPeng",
"Wang, Xuehan",
"Yang, Hongli",
"Wang, Xuekun",
"Gao, Zhenzhen"
] | In this study, an efficient protocol was developed for synthesizing tetrahydro-dispiro[pyrazolone 4, 2′-pyran-5′, 4″-pyrazolone] scaffolds, using pyrazoledione-derived MBH alcohols and arylidene pyrazolones. The reaction proceeds through an oxa-Michael-initiated cascade [4 + 2] annulation under metal-free conditions, affording a range of novel tetrahydrodispiro[pyrazolone-pyran-pyrazolone] derivatives with yields between 50 and 89 % and diastereoselectivities from 2.5:1 to >20:1. As novel oxa-Michael donors, pyrazoledione-derived MBH alcohols were employed for the first time as key substrates in this annulation reaction. | 1 Introduction Polycyclic rings constitute fundamental frameworks that are widely present in numerous natural and synthetic compounds, exhibiting a broad spectrum of biological and pharmacological activities. These structures have attracted considerable attention from synthetic and medicinal chemists over the past few decades [ 1–4 ]. Among them, spiropyrazolone compounds are recognized as privileged scaffolds due to their diverse and potent bioactivities, including antitumor, antibacterial, anti-inflammatory, and insecticidal effects ( Fig. 1 , I–V) [ 5–9 ]. Nonetheless, constructing more complex spiropyrazolone skeletons, such as those incorporating two pyrazolone units within a single spirocyclic framework (dispiropyrazolone-pyrans featuring fully substituted stereogenic centers on the pyran ring; Fig. 1 , VI), remains a significant challenge due to issues related to steric hindrance and stereoselectivity [ 10 ]. To the best of our knowledge, only a few effective strategies for the construction of dispiropyrazolone-pyrans have been explored thus far [ 11 ]. 
Among Morita-Baylis-Hillman (MBH) adducts, MBH carbonates derived from aldehydes have emerged as versatile synthons that react with various electrophiles to promote [3 + 2] [ 2 , 12–21 ], [3 + 3] [ 22–27 ], [3 + 4] [ 28–31 ], [3 + 6] [ 32 ] and tandem annulation [ 33–39 ] reactions to access diverse spirocyclic compounds [ 40–42 ]. In stark contrast, MBH alcohols derived from aldehydes bearing hydroxyl groups exhibit significantly lower reactivity due to their poor leaving group ability and have been far less explored in annulation reactions compared to classical MBH carbonates. In 2019, Guo et al. were the first to employ aldehyde-derived MBH alcohols in phosphine-catalyzed tandem annulations with azomethine imines, demonstrating their latent synthetic potential following direct activation ( Scheme 1 a ) [ 43 ]. To enhance reactivity and enable the synthesis of structurally valuable polycarbo- and polyheterocycles, MBH alcohols have been integrated into bioactive heterocyclic scaffolds. In 2022, Kumarswamyreddy and coworkers first reported an oxa-Michael cascade [4 + 2] annulation between isatin-derived MBH alcohols and alkylidene pyrazolones for accessing tetrahydro-dispiro[indolinone-3,2′-pyran-5′,4″-pyrazolone] derivatives in an alkaline environment within a short time by utilizing isatin-derived MBH alcohols as oxa-Michael donors ( Scheme 1 b) [ 44 ]. To expand the application of MBH alcohol in the synthesis of spirocyclic compounds, the development of novel MBH alcohol-based heterocyclic skeletons is necessary. Motivated by this approach, we synthesized MBH alcohols bearing a pyrazolinone core, aiming to investigate their utility in annulation reactions for constructing structurally diverse dispirocyclic pyrazolinone derivatives. 
Herein, we describe the successful development of an oxa-Michael cascade [4 + 2] annulation between pyrazoledione-derived MBH alcohols and unsaturated pyrazolones, yielding biologically and pharmaceutically relevant functionalized dispiropyrazolone-pyrans that incorporate two tertiary and quaternary stereocenters ( Scheme 1 c). 2 Results and discussion The study initially focused on executing the oxa-Michael cascade [4 + 2] annulation using unsaturated arylidene pyrazolone 1a and pyrazoledione-derived MBH alcohol 2a as model substrate. The reaction was performed in CH 3 CN at room temperature with 1.0 equivalent of KO t -Bu. However, this transformation proceeded sluggishly and remained incomplete after 12 h. Nevertheless, the desired product 3aa was isolated in 26 % yield with 3:1 diastereoselectivity ( Table 1 , entry 1). Subsequent trials with inorganic bases such as K 2 CO 3 , KOH, and NaHCO 3 failed to significantly improve either the yield or diastereoselectivity ( Table 1 , entries 2–4). Next, three organic bases, DABCO (1,4-diazabicyclo[2.2.2]octane), DMAP (4-dimethylaminopyridine) and DBU (1,8-diazabicyclo[5.4.0]undec-7-ene), were evaluated at ambient temperature ( Table 1 , entries 5–7). Among them, only DBU exhibited catalytic activity, furnishing product 3aa in 45 % yield with an improved diastereomeric ratio of 15:1 ( Table 1 , entry 7). Various solvents, including chlorinated (DCM, DCE), polar aprotic and protic (DMF, MeOH, DMSO), and ethereal (THF) types, were subsequently tested while maintaining 1.0 equivalent of DBU ( Table 1 , entries 8–13). However, none of these alternatives surpassed the performance observed with CH 3 CN. To further increase the yield, an investigation into the impact of different temperatures on the reaction was carried out ( Table 1 , entries 14–16). 
Interestingly, temperature was found to exert a notable influence on the reaction outcome, with 65 °C identified as the optimal condition for achieving higher yields within a shorter reaction time. Under these conditions, a 68 % yield was obtained with a diastereomeric ratio of 15:1 ( Table 1 , entry 15). Notably, elevating the temperature to the boiling point of CH 3 CN led to a reduction in diastereoselectivity ( Table 1 , entry 16). Adjusting the 1a and 2a feeding ratios played a crucial role in increasing yields while preserving diastereoselectivity. By maintaining a ratio of 1a : 2a = 2.5:1, the reaction proceeded more efficiently, resulting in an 85 % product yield with 15:1 diastereoselectivity ( Table 1 , entry 18). Therefore, the best reaction conditions were as follows: 1a : 2a = 2.5:1, CH 3 CN as the solvent, and 1.0 equiv. of DBU as the base at 65 °C. Under these optimal conditions, the generality and limitations of unsaturated alkylidene pyrazolones 1 in the annulation reaction were systematically investigated. Most alkylidene pyrazolones 1 were successfully transformed into the corresponding cycloadducts 3 in moderate to excellent yields (up to 89 %) and with diastereomeric ratios of up to 20:1 ( Table 2 , entries 1–20). Notably, ortho -substituted aryl groups such as 2-Me ( Table 2 , entry 2) and 2-OMe ( Table 2 , entry 5) on the alkylidene pyrazolones yielded products with higher diastereomeric ratios (dr > 20:1) compared to their meta - and para -substituted analogues ( Table 2 , entries 3–4, 6–7). Similarly, ortho -halo-substituted alkylidene pyrazolones, such as the 2-F ( Table 2 , entry 8), 2-Cl ( Table 2 , entry 11) and 2-Br ( Table 2 , entry 14) substituted acceptors, gave the corresponding products with higher diastereomeric ratios (dr > 20:1) than their meta - and para -substituted counterparts ( Table 2 , entries 9–10, 12–13 and 15–16). 
Other electron-withdrawing substrates, such as 4-CF 3 - and 4-CN-aryl-containing alkylidene pyrazolones, performed excellently in this reaction, affording products in good yields with good to excellent diastereoselectivities ( Table 2 , entries 17–18). The naphthyl-substituted alkene effectively afforded the product in 89 % yield with 20:1 diastereoselectivity ( Table 2 , entry 19). In particular, the alkyl-substituted pyrazolone 1 t was also compatible with the reaction, affording the corresponding product in 77 % yield with 20:1 diastereoselectivity ( Table 2 , entry 20). However, no product formation was observed when a 2-thienyl-substituted substrate was employed, indicating a limitation of the reaction scope ( Table 2 , entry 21). The scope of pyrazoledione-derived MBH alcohols was also explored under the optimal conditions. The results showed that halogen substituents on the aryl ring at the R 2 position were well-tolerated in the [4 + 2] annulation reaction, affording the desired products in moderate yields with excellent diastereoselectivities ( Table 3 , entries 1–4). In contrast, the substrate bearing a methyl group at the para -position of the benzene ring produced the corresponding product in 73 % yield with a diastereomeric ratio of 3:1 ( Table 3 , entry 5). It is also noteworthy that when R 2 was a methyl group, the yield was only 65 %, accompanied by a drop in diastereoselectivity ( Table 3 , entry 6). In addition to unsaturated arylidene pyrazolones and alkylidene pyrazolones, additional electron-deficient alkenes, such as barbiturate- and thiazolone-derived alkenes, were examined under the standard reaction conditions. However, these substrates failed to react with pyrazoledione-derived MBH alcohol 2a ( Scheme 2 a, b ). Similarly, an MBH alcohol generated from acrylonitrile and pyrazoledione afforded only trace amounts of the desired product ( Scheme 2 c). 
To elucidate the structural and stereochemical characteristics of compound 3, NMR spectroscopy, X-ray crystallographic analysis, and high-resolution mass spectrometry (HRMS) were employed (Fig. 2), with comprehensive details provided in the supporting information. The crystallographic data for 3ha have been deposited with the Cambridge Crystallographic Data Centre (CCDC 2361153; DOI: https://doi.org/10.5517/ccdc.csd.cc2k7z71 ). Considering the significance of dispiropyrazolone-pyran scaffolds in medicinal chemistry, a scale-up experiment was conducted to underscore the practical applicability of this protocol (Scheme 3a). Under the optimized conditions, the reaction between alkylidene pyrazolone 1a and pyrazoledione-derived MBH alcohol 2a proceeded smoothly to afford 3aa in 84 % yield with a 15:1 diastereomeric ratio. Additionally, further functionalization of the cycloadducts was explored to demonstrate the method's versatility. Compound 3o underwent Suzuki coupling with boronic acid 5 to give product 6 with retention of stereochemistry. Likewise, cycloadduct 3pa underwent a Pd(PPh3)4-mediated Suzuki coupling to afford product 4 in 73 % yield with a diastereomeric ratio exceeding 20:1. Based on previous reports, Scheme 3 presents the proposed reaction mechanism. In this pathway, DBU serves as a base to abstract a proton from MBH alcohol 2, yielding a deprotonated MBH alcohol that acts as a Michael donor. This donor undergoes a Michael addition to alkylidene pyrazolone 1, forming a Michael adduct. The intermediate is stabilized in a trans-configured transition state and then undergoes intramolecular cyclization, ultimately affording the stereoselective tetrahydro-dispiro[pyrazolone-4,2′-pyran-5′,4″-pyrazolone] derivative 3. 3 Conclusions Overall, a highly efficient base-mediated [4 + 2] annulation reaction between pyrazoledione-derived MBH alcohols and arylidene pyrazolones has been developed.
This robust strategy enables the synthesis of a series of potentially bioactive tetrahydro-dispiro[pyrazolone-4,2′-pyran-5′,4″-pyrazolone] scaffolds bearing two tertiary and quaternary stereocenters in moderate to good yields. The reaction exhibits broad substrate scope and excellent functional group tolerance. 4 Experimental section Unless otherwise stated, all reagents were purchased from commercial suppliers and used without further purification. All solvents were filtered and dried according to standard procedures before use. All reactions were performed in dry glass vessels under nitrogen with magnetic stirring. Reactions were monitored by thin-layer chromatography (TLC) on precoated silica gel glass plates, and the chromatograms were visualized under 254 nm UV light. Qingdao Marine flash silica gel (100–200 mesh) (Qingdao, China) was used for flash column chromatography. 1H and 13C NMR spectra were recorded in CDCl3 on a 500 MHz NMR instrument. Melting points were measured using an X-4 digital micro-melting-point apparatus (Shanghai, China). Accurate mass measurements were made using Agilent instruments with ESI-MS technology (California, USA). X-ray crystallographic data were obtained using a Bruker D8 VENTURE instrument (Billerica, MA, USA). 4.1 General procedure for the synthesis of 4-arylidene pyrazolone derivatives 1 Unlabelled Image A solution of the ethyl ester (10 mmol, 1.0 equiv.) and hydrazine (11 mmol, 1.1 equiv.) in 20 mL of acetic acid was refluxed in an oil bath for 6–8 h and then allowed to cool. The crude pyrazolone was obtained as a yellow solid by recrystallization from acetic acid and could be used directly in the next step without further purification. The crude pyrazolone (8.0 mmol, 1.0 equiv.) was dissolved in 20 mL of acetic acid, followed by the addition of the aldehyde (8.8 mmol, 1.1 equiv.) and sodium acetate (8.8 mmol, 1.1 equiv.). The resulting mixture was stirred at 30 °C for 1–3 h.
Upon completion, 50 mL of water was added to the reaction mixture. The aqueous phase was extracted with ethyl acetate (3 × 20 mL), and the combined organic layers were dried over sodium sulfate and concentrated under vacuum. The crude residue was purified by column chromatography on silica gel (hexanes/EtOAc, 100:1). 4.2 General procedure for the synthesis of pyrazoledione-derived MBH alcohol derivatives 2 Unlabelled Image Nitrosoarene (30 mmol, 1.0 equiv.) and K2CO3 (6.0 mmol, 0.2 equiv.) were added to a solution of the pyrazolone derivative (30 mmol, 1.0 equiv.) in MeOH (0.6 M) at room temperature. The reaction mixture was then refluxed for 3 h. The solvent was removed under reduced pressure and the residue was dissolved in ethyl acetate. The organic layer was washed three times with water and once with brine, and then dried over anhydrous MgSO4. After evaporation of the ethyl acetate under reduced pressure, the crude product was purified by flash column chromatography (n-pentane/diethyl ether, 3:1) to afford the ketimine products. The ketimine (20 mmol, 1.0 equiv.) was dissolved in THF (0.13 M) and a 2.0 N HCl solution (20 mL) was added to the reaction mixture at room temperature. The progress of the reaction was monitored by TLC. After completion of the reaction, the mixture was diluted with water. The aqueous layer was extracted three times with dichloromethane and the combined organic layers were dried over anhydrous MgSO4. The solvent was removed under reduced pressure and the crude product was purified directly by flash column chromatography (n-hexane/EtOAc, 1:1) to afford the desired pyrazolone-derived diketones. To a solution of the diketone (10 mmol, 1.0 equiv.) and methyl acrylate (11 mmol, 1.1 equiv.) in THF (30 mL) was added DABCO (2 mmol, 0.2 equiv.) at room temperature.
After the reaction was complete, the mixture was concentrated in vacuo and the resulting residue was purified by flash column chromatography using EtOAc and hexanes as the eluent to give the MBH alcohol. 4.3 General procedure for the oxa-Michael-initiated cascade [4 + 2] annulation reaction To a stirred solution of 4-arylidene pyrazolone derivatives 1 (0.50 mmol, 2.5 equiv.) and pyrazoledione-derived MBH alcohols 2 (0.20 mmol, 1.0 equiv.) in 4 mL of CH3CN was added DBU (0.20 mmol, 1.0 equiv.). The mixture was stirred at room temperature until MBH alcohol 2 was fully consumed (monitored by TLC). The mixture was then concentrated under vacuum and purified by silica gel column chromatography to provide products 3. CRediT authorship contribution statement Xiaoming Zhou: Methodology, Data curation. Juan Du: Validation, Investigation, Formal analysis. JiPeng Yuan: Investigation, Data curation. Xuehan Wang: Supervision, Funding acquisition. Hongli Yang: Supervision, Resources, Project administration. Xuekun Wang: Supervision, Funding acquisition. Zhenzhen Gao: Writing – review & editing, Conceptualization. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments This research was supported by the Shandong Provincial Natural Science Foundation (ZR2021MB110), the National Natural Science Foundation of China (No. 22101002), the Support Plan on Science and Technology for Youth Innovation of Universities in Shandong Province (2022KJ111), the Guangyue Young Scholar Innovation Team Foundation of Liaocheng University (LCUGYTD2022-04), and the Special Construction Project Fund for Shandong Province Taishan Scholars.
Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.rechem.2025.102604. | [
"LIN",
"GALLIFORD",
"KALLEPU",
"KANDIMALLA",
"BONDOCK",
"CHANDE",
"MANDHA",
"WU",
"YOSHIDA",
"LIANG",
"XIE",
"LIAO",
"LIU",
"WANG",
"WU",
"ZHONG",
"ZHANG",
"ZHENG",
"DUAN",
"WANG",
"ZHANG",
"JIN",
"YANG",
"ZHANG",
"ZHANG",
"ZHENG",
"ZHOU",
"CHEN",
"CHEN",
"Z... |
2eb1716a624e46dd85827c8875a97792_Endoscopic Sciatic Nerve Decompression in the Prone PositionAn Ischial-Based Approach_10.1016_j.eats.2016.02.020.xml | Endoscopic Sciatic Nerve Decompression in the Prone Position—An Ischial-Based Approach | [
"Jackson, Timothy J."
] | Deep gluteal syndrome is described as sciatic nerve entrapment in the region deep to the gluteus maximus muscle. The entrapment can occur from the piriformis muscle, fibrous bands, blood vessels, and hamstrings. Good clinical outcomes have been shown in patients treated by open and endoscopic means. Sciatic nerve decompression with or without piriformis release provides a surgical solution to a difficult diagnostic and therapeutic problem. Previous techniques have used open methods that can now be performed endoscopically. The technique of an endoscopic approach to sciatic nerve decompression in the prone position is described as well as its advantages and common findings. Through this ischial-based approach, a familiar anatomy is seen and areas of sciatic nerve entrapment can be readily identified and safely decompressed. | Deep gluteal syndrome (DGS) is an overlooked cause of chronic buttock and lower extremity pain. DGS, as opposed to piriformis syndrome, is a recommended terminology that encompasses any source of sciatic nerve entrapment as it courses under the gluteus maximus. Nerve entrapment can occur from the piriformis, gluteus, or hamstring muscles or fibrous bands surrounding the sciatic nerve. Post-traumatic scarring in the deep gluteal space can cause sciatic nerve entrapment as well. 1 2 Martin et al. reported on the results of 35 patients treated with an endoscopic approach to sciatic nerve decompression. This approach gained access to the deep gluteal space through the peritrochanteric space. The advantages of this approach are the ease of the supine position and the ability to address any intra-articular or peritrochanteric pain that may coexist. 3 It is our experience in dealing with endoscopic hamstring disorders that the sciatic nerve can readily be visualized and decompression can be accomplished, if needed, from the prone position. This allows one to address the sciatic nerve at the same time as hamstring disorders, which can coexist.
Hamstring disorders are treated with the patient in the prone position, with endoscopic access achieved centered on the ischium. 4 In the prone position, the anatomic features of the deep gluteal space, including the ischium and overlying combined hamstring tendon origin, the deep external rotators, and the sciatic nerve, can be easily visualized and orientation is straightforward. In this manner, the surgeon looks down at the posterior hip, making orientation in a complex anatomic space simpler. In this position, hamstring disorders can be addressed as well. 5 For patients who do not have intra-articular or peritrochanteric pain and may or may not have hamstring pathology, we prefer an ischial-based, prone approach to sciatic nerve decompression in the deep gluteal space. This comprises most of the patients we treat for DGS. The purpose of this technical note is to describe an endoscopic approach to sciatic nerve entrapment in the deep gluteal space in the prone position; this is an ischial-based approach. Technique Positioning The patient is placed under general endotracheal anesthesia before positioning. The patient is then positioned prone on a radiolucent table that allows fluoroscopy to image the pelvis. It is our preference to use a Jackson table. All bony prominences are well padded, and the pelvis and lower extremities are positioned with the hip flexed approximately 30° (Fig 1, Table 1). Portal Placement and Access Before prepping, the gluteal crease is marked to guide incision placement for the portals. Fluoroscopy is used to confirm the angle and location of insertion of the trocar and sheath. The 2 primary portals are a posterolateral (PL) portal and a posteromedial (PM) portal (Fig 1). The PM portal is located just distal and slightly medial to the lateral border of the ischium, and the PL portal 3 to 4 cm lateral to the PM portal.
The exact location of the PL portal can be confirmed by fluoroscopy, judging the angle at which the trocar will strike the ischium (Fig 2A). The PL portal is created first, with insertion of the trocar aimed at the ischium. The trocar is inserted through the gluteus maximus muscle, at the inferior border of the muscle, before striking the ischium. Tactile feedback and fluoroscopic imaging confirm placement (Table 1, Fig 2A). A 70° arthroscope is inserted and the fluid insufflated to 40 to 50 mm Hg (DualWave, Arthrex, Naples, FL). It is important to be patient and not begin using a shaver immediately, as fluid insufflation progresses to open the space under the gluteus maximus (Table 1). Alternatively, a spinal needle and over-the-guidewire technique can be used before insertion of the trocar. The PM portal is then made by a similar method but with direct visualization of the trocar entering the DGS. Ischial Bursectomy For visualization and elimination of any bursal component of the pain, an ischial bursectomy is performed. The arthroscope and working instruments can be placed in whichever portal is ergonomically optimal. With the camera facing the ischium, the white background of the common hamstring origin can be seen and the overlying bursal tissue removed with a shaver (Incisor plus Elite, 4.5 mm, curved, Smith and Nephew, Andover, MA) (Figs 2B and 3A, Video 1). It is important to keep pump pressures constant such that activation of suction on the shaver does not decompress the space and draw in muscle tissue. Electrocautery can be used to coagulate any bleeding that is encountered. Once the hamstring tendon is completely visualized, attention is turned to the sciatic nerve (Video 1). Sciatic Nerve Decompression Once adequate visualization of the ischium and overlying tendons is achieved, the 70° arthroscope is turned to face upward at the overlying gluteus maximus muscle.
With this view, for orientation's sake, keep the ischium to the side of the screen while looking proximal and posterior (Video 1). Another helpful view is obtained by dropping your hand and putting the tip of the arthroscope more posterior and pointing the 70° camera anteriorly. This provides a view from posterior to anterior, keeping the ischium at the bottom of the screen. It is through this view that most dissection occurs (Fig 3B, Video 1). Begin by bluntly dissecting remaining loose connective tissue with a 50° curved radiofrequency (RF) device superficially (posteriorly) and laterally, away from the ischium. We now prefer use of an RF device that functions at lower temperatures to lessen the risk of damage to neural structures (Multivac 50° XL, Smith and Nephew). Generally, with just light dissection, the posterior femoral cutaneous nerve can be visualized in this superficial tissue. Often this can be isolated with blunt dissection only. Should there be fibrous bands, short bursts of RF can further release these to completely visualize the cutaneous nerve (Video 1). This nerve should not be confused with the sciatic nerve, as it appears quite thick through the magnification of the arthroscope. This can be followed distally to see that it exits superficially and does not track deep along the hamstring. After the superficial nerve is free, sweep the tissue deep to it, in line with the course of the sciatic nerve. The sciatic nerve lies deep (anterior) and immediately adjacent to the ischium. Because the entrapment is often at the proximal extent of the hamstring origin or proximal to that, it is helpful to find the nerve more distally along the ischium (Table 2). 6 From here, crossing fibrous bands overlying the sciatic nerve are released with short, careful activation of RF (Fig 3B). With a curved device, the electrocautery can be placed under the fibrous band and elevated away from the nerve before release (Video 1, Table 1).
This proceeds from distal to proximal. Before release, ensure that the tissue that is to be released is not an arteriovenous structure. Invariably, a leash of vessels is encountered in this region. It is our preference to avoid damaging or purposely tying these off unless the nerve is entangled in the vessels (Video 1, Fig 3C). To aid in visualization and orientation, alternate the arthroscope between the 2 previously described views to achieve orthogonal views of the nerve (Table 1). After final release of all crossing fibrous bands, the space will open tremendously and allow for visualization of the sciatic nerve (Video 1). These crossing fibrous bands run from the ischium to the gluteus maximus muscle, and after releasing them, the overlying gluteus maximus muscle lifts to open the DGS even further. At this point, further blunt dissection proximally will allow you to visualize the nerve coursing proximally toward the sciatic notch. A tendinous band of the piriformis muscle can be seen crossing posterior to the nerve. If this seems to be an impinging structure, a release of the tendinous portion can be done (Fig 2C). Often, there is a large amount of separation between the piriformis muscle and the nerve. Fluoroscopy can be used to ensure that you are proximal enough with the release (Fig 2D, Video 1). Complete decompression can be visualized from the hamstring to the sciatic notch, with easy mobilization of the sciatic nerve with a blunt instrument (Fig 3C, Video 1). Discussion We have described a surgical technique to decompress the sciatic nerve through its course in the deep gluteal space. We believe this ischial-based technique to be highly reproducible and very safe with respect to surrounding structures. This technique is optimal for patients who have concomitant hamstring pathology and patients who do not have concomitant peritrochanteric or intra-articular pathology.
The advantages of this technique stem mainly from the endoscopic rather than open approach. Improved visualization and less soft tissue and muscle damage are possible with the arthroscope. This has the potential to aid recovery and reduce postoperative pain. Improved visualization can help surgeons appreciate subtle anatomic anomalies that can create sciatic nerve compression, such as vascular and fibrous structures. The prone position allows surgeons to orient to the fairly complex anatomy in this region. This helps prevent damage to important arteriovenous and neural structures. The ability to treat hamstring pathology and ischial bursitis is another important advantage in those with hamstring pathology. Hamstring tendon tearing can lead to surrounding scar tissue, causing sciatic nerve entrapment. Without treating the hamstring pathology as well, residual pain from the hamstring could compromise clinical outcomes. 4 The ischial-based approach allows the surgeon to begin in normal anatomy and move to areas of entrapment or abnormal anatomy. This allows the surgeon to find the nerve and trace it proximally, simplifying the dissection (Table 2). 7 The main limitation of this technique is the inability to address peritrochanteric pathology that may occur concomitantly. It has been our experience that these do not often occur together, so the limitation is not seen as major. Another limitation is a short period of increased sitting pain postoperatively from the location of the portals. The portal placement is in the gluteal crease and allows for less damage to the gluteus maximus with less risk of inferior gluteal nerve injury. This distal location allows for easy advancement from distal normal to proximal abnormal anatomy, but the portals lie in an area loaded during sitting. Often patients have pain with sitting before the surgery, and simply telling patients that this pain may get worse for a few weeks before it gets better is enough to minimize this issue.
Complications from prone positioning are a risk; however, this risk is small with operative times of less than 1 hour. 8 Exclusion of discogenic pain or radicular etiology is critical before performing a sciatic nerve decompression. Nondiscogenic sciatic pain is a highly studied topic but remains controversial, and there is no gold standard for diagnosis. Many tools are available to make the diagnosis, but it is often a diagnosis of exclusion. Pelvic magnetic resonance imaging, specific magnetic resonance neurography, electromyography/nerve conduction velocity, specialized physical exam tests, and diagnostic injections are tools for diagnosis. 9 The most critical component of success in this technique is ensuring that the location of the sciatic nerve entrapment is within the deep gluteal space. 10-12 The fibrous bands discussed here and seen in Video 1 are present in a large number of the sciatic nerve decompressions performed by the author (T.J.J.). The exact etiology of these is unclear. They are presumed to result from scarring, either from direct trauma or from repetitive trauma, and to create compression of the nerve in the deep gluteal space. This “gluteal tunnel syndrome” has been previously described but deserves more recognition for causing sciatic nerve entrapment. 3 The prone, ischial-based endoscopic approach to sciatic nerve decompression is a safe, reliable technique for addressing the multiple causes of sciatic nerve entrapment and allows for concomitant treatment of associated hamstring pathology. Supplementary Data Video 1 With the patient in the prone position and fluoroscopically guided access, an ischial bursectomy is performed initially. The arthroscope is in the lateral portal and the shaver in the medial portal, and bursal tissue is removed to clearly identify the hamstring tendon.
Next, the superficial connective tissue is cleared with blunt dissection and short bursts of the radiofrequency device to visualize the posterior cutaneous nerve. Working distal to proximal, transverse fibrous bands are clearly delineated and released with short bursts of radiofrequency. After all traversing fibrous bands are released, the sciatic nerve can be visualized. Continuing to work proximally, the sciatic nerve can be seen at its exit from the sciatic notch. The entirety of the nerve can be seen from the sciatic notch, over the external rotators, and distally past the hamstring origin. Video 1 | [
"MCCRORY",
"BENSON",
"MARTIN",
"PURANEN",
"JACKSON",
"MILLER",
"BOWMAN",
"SHRIVER",
"YOSHIMOTO",
"FILLER",
"MARTIN",
"PECINA"
] |
5a0a5d5f0a574e99bdc3cd457b98c319_From blood to bile recent advances in hepatobiliary transport_10.1016_S1665-2681(19)32177-5.xml | From blood to bile: recent advances in hepatobiliary transport | [
"Arrese, Marco",
"Accatino, Luigi"
] | Transport of endogenous and exogenous substances from blood to bile is an essential function of the liver. In the last decade a still growing number of specific transport proteins present at the sinusoidal and canalicular membrane domains of hepatocytes and cholangiocytes have been cloned and functionally characterized. Studies assessing the molecular expression and function of these hepatobiliary transport proteins under different experimental conditions have helped to define the adaptive responses of hepatocytes to certain physiological states and to cholestatic liver injury, and have led to a better understanding of the physiology of bile formation and of the pathophysiology of certain cholestatic diseases. Particularly relevant is the elucidation of the molecular bases of several forms of inherited cholestatic liver disease, which may aid the development of better diagnostic tools and the design of new therapeutic strategies. In the present review we summarize recent experimental and clinical data involving hepatobiliary transport mechanisms. | This work was partially supported by grants No. 1990519, No. 1020641 and No. 1000563 from the Fondo Nacional de Ciencia y Tecnología de Chile (FONDECYT). Bile secretion from the liver has a pivotal physiological role as an excretory route for endo- and xenobiotics and in the digestion and absorption of lipids from the intestinal lumen. Molecular identification and cDNA cloning of liver and intestine membrane transport proteins that determine bile formation have allowed a better understanding of the processes involved in bile formation as well as of the pathophysiology of human cholestatic diseases. In this review we summarize the current views on the nature of hepatic transport systems and their regulation under physiological and pathophysiological conditions.
1 The physiology of bile formation Hepatocytes are polarized secretory epithelial cells with two distinct domains, the basolateral (sinusoidal) domain and the bile canaliculus, defined by the presence of junctional complexes that establish a sealed apical compartment. Bile is formed from the active secretion of osmotically active compounds by hepatocytes into the canalicular space, followed by the passive movement of water through the tight junctions. Major osmotically active compounds include organic anions like bile salts (BS), glutathione, glutathione-conjugated compounds, and glucuronide-conjugated substances, and some inorganic anions like bicarbonate and chloride. These solutes are secreted against steep concentration gradients and thus require active transport. BS are the major solutes in bile and are considered to be the major osmotic driving force in the generation of bile flow. 2 3 Once bile is within the canalicular space, further modifications occur along the biliary tree due to the presence of active transport systems in the biliary epithelia. 4 In the biliary tree, bile composition is modified basically by the addition of bicarbonate. Rapid regulation of bile volume and composition can also occur according to changing physiologic needs. BS and certain other biliary solutes undergo enterohepatic cycling due to the presence of active transport mechanisms located in the apical pole of enterocytes at the terminal ileum. 3 6 This allows the retrieval of those substances from the intestinal lumen to the portal circulation and ultimately to the liver for uptake and re-secretion. 7 Molecular basis of hepatobiliary transport: function of hepatic transport proteins Several specific transport proteins for biliary constituents have been identified in both membrane domains of the hepatocytes, biliary epithelia and enterocytes. The number of proteins is continuously increasing and the picture is far from complete.
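The osmotic role of BS described above is classically quantified by plotting bile flow against bile salt output: the slope estimates the apparent choleretic activity of the secreted BS and the intercept the bile-salt-independent fraction of flow. A minimal least-squares sketch of that analysis (the data points below are purely illustrative, not measured values from this review):

```python
def bile_flow_fit(bs_output, bile_flow):
    """Ordinary least-squares fit of bile flow (uL/min/kg) against bile salt
    output (umol/min/kg). Returns (slope, intercept): the slope approximates
    apparent choleretic activity (uL/umol) and the intercept the
    bile-salt-independent fraction of flow."""
    n = len(bs_output)
    mx = sum(bs_output) / n
    my = sum(bile_flow) / n
    sxx = sum((x - mx) ** 2 for x in bs_output)
    sxy = sum((x - mx) * (y - my) for x, y in zip(bs_output, bile_flow))
    slope = sxy / sxx
    return slope, my - slope * mx

# Illustrative data only (hypothetical taurocholate infusion experiment):
output = [0.5, 1.0, 2.0, 3.0]     # bile salt output, umol/min/kg
flow = [40.0, 45.0, 55.0, 65.0]   # bile flow, uL/min/kg
print(bile_flow_fit(output, flow))  # -> (10.0, 35.0)
```

With these made-up numbers, each µmol of secreted BS would drag along 10 µL of water, on top of 35 µL/min/kg of bile-salt-independent flow.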
The study of genetically engineered rodents and the search for molecular defects in human cholestatic diseases have provided important clues to the physiological role of several hepatic transporters. An updated version of the transport systems located in hepatocytes and biliary cells (cholangiocytes) is shown in figure 1. Available information on the properties of these transport systems is summarized below. Sinusoidal uptake of biliary solutes: The sinusoidal membrane of hepatocytes contains a number of carrier proteins that facilitate the entry of BS and other organic anions into the liver. The major cholephilic compounds in sinusoidal blood are BS. Their uptake is mediated by Na+-dependent and Na+-independent mechanisms, with the sodium-dependent pathway responsible for more than 80% of taurocholate uptake. Several polypeptides have so far been cloned from rat and human liver that are able to confer bile salt transport capacity when expressed in mammalian cells. A high-affinity bile salt transporter named sodium taurocholate cotransporting polypeptide (Ntcp) and a growing family of multispecific organic anion transporters are the major proteins involved in this step of bile formation. 8 Ntcp is the major bile salt uptake system in the basolateral membrane of rat hepatocytes. 4 Ntcp is exclusively expressed in liver and strictly localized on the basolateral membrane of hepatocytes. Transfection of the cDNA into mammalian cells confers the capacity to carry out saturable Na+-dependent uptake of conjugated and unconjugated BS with kinetic parameters similar to those previously defined in liver basolateral plasma membrane vesicles. Ntcp transports mostly BS, with estrone 3-sulfate being the only non-bile-salt substrate transported to a significant degree. 8 9 Several lines of evidence suggest that Ntcp is the predominant and probably the exclusive Na+-dependent BS transporter on the basolateral membrane of the hepatocyte.
Ntcp cDNAs have been identified and cloned from several species other than rat, including mouse, rabbit, hamster and human (NTCP). In the mouse, two isoforms resulting from alternative splicing have been identified and named Ntcp1 and Ntcp2. Ntcp2 lacks the last 45 amino acid residues compared with the normal or “wild-type” Ntcp, and its physiological function is unknown. Recent studies on the sorting mechanisms of Ntcp suggest that truncated forms of the transporter, like Ntcp2, may lose the fidelity of basolateral membrane sorting and lead to intracellular accumulation. 8 Human NTCP is a 349-amino-acid protein with substrate specificity similar to that observed for rat Ntcp but higher affinity for BS. 10 All Ntcp's use the inwardly directed transmembrane sodium gradient maintained by the Na+/K+-ATPase located at the sinusoidal membrane of the liver cell. 8 Na+-independent transport systems located at the sinusoidal membrane of hepatocytes have a broader substrate specificity than Ntcp and are able to transport a great variety of organic anions other than BS. 8 Thus, in addition to fulfilling the physiological need of taking up unconjugated BS and non-bile-salt endogenous organic anions from sinusoidal blood, these transport systems play an important role in the uptake of xenobiotics by the liver. Sinusoidal Na+-independent transport of BS and organic anions is mainly mediated by the so-called organic anion transporting polypeptides (Oatp's). Oatp's are a family of polyspecific transporters with overlapping substrate affinities that mediate the sodium-independent uptake of BS, particularly unconjugated species. In addition, Oatp's mediate the uptake of a large number of other compounds differing in charge and structure. These compounds include bromosulphthalein, thyroid hormones, cardiac glycosides, neutral steroids and numerous drugs. Several Oatp's have been identified in both rat and humans (OATP's).
Carriers with predominant expression in the rat liver include Oatp1, 2 and 4, which are responsible for the majority of sodium-independent sinusoidal bile salt transport in the rat liver. 11 Oatp4 is the most recently cloned family member. 8 Data on human liver OATP's are also growing. 12 Several proteins with similar substrate characteristics have been cloned, although some of them are not true orthologues of the rat gene products. For that reason a different nomenclature has been adopted, designating the identified human OATP's with capital letters from A to E. Three members of the family are predominantly or exclusively expressed in the liver and have functions similar to those of rat Oatp's (OATP-A, -B, and -C). 8 Information on bilirubin uptake from blood into hepatocytes is limited. Sinusoidal membrane transporters belonging to the Oatp family transport bilirubin monoglucuronide. However, a transporter of unconjugated bilirubin in the sinusoidal membrane has not as yet been identified. It has been suggested from in vitro experiments using artificial membranes 13 that unconjugated bilirubin crosses the hepatocyte sinusoidal membrane by a diffusion process. The uptake of cationic compounds at the sinusoidal membrane of rat hepatocytes is thought to be mediated by the polyspecific organic cation transporter Oct1. Oct1 belongs to a superfamily of transporters that includes multidrug-resistance proteins, facilitative diffusion systems, and proton antiporters. Oct's mediate electrogenic transport of small organic cations with different molecular structures, independently of sodium and proton gradients. 14 15 Finally, the mechanisms underlying the uptake of other biliary solutes are less clear. However, a major advance has recently been made in the elucidation of the uptake mechanism for high-density lipoproteins (HDL). Since biliary cholesterol originates predominantly from cholesterol present in this lipoprotein, this process is relevant for bile formation.
Experiments on cloning and characterization of a hepatocyte HDL receptor, the scavenger receptor class B type I (SR-BI), have suggested that this protein may have a critical role in controlling both serum and biliary cholesterol levels. 16 Intracellular transport. The mechanisms by which biliary solutes are transported across cells remain poorly understood. 34 Available information pertains mainly to the intracellular movement of BS. 33 These solutes undergo rapid monomeric movement to the canalicular pole of the hepatocyte, probably bound to intracellular binding proteins. Several proteins have been identified as intracellular bile salt binders. In rat liver, 3-hydroxysteroid dehydrogenase represents the major cytosolic bile salt binding protein, whereas in humans the predominant protein is a dihydrodiol dehydrogenase. The current body of evidence does not support a role of vesicular transport in the intracellular movement of BS. Sterol carrier protein 2 and phospholipid transfer protein may be involved in the intracellular transport of cholesterol and phospholipids, respectively, but the exact details of their action are not well defined. 17 18 Canalicular transport of biliary solutes: Transport of biliary solutes across the canalicular membrane of the hepatocyte provides the primary driving force for generation of bile flow and is critical for the excretory function of the liver (i.e. body disposal of endo- and xenobiotics, including drugs). The elucidation of the important physiological role of the so-called ABC (ATP Binding Cassette) transporters, which function as ATP-dependent export pumps and share a common ATP-binding motif in their protein sequences, has led to the identification of at least four ABC transporters in the canalicular membrane that act as unidirectional, ATP-dependent export pumps for BS, amphiphilic anionic conjugates, lipophilic cations and phospholipids. Information on these transport systems is briefly summarized below.
19 Bile salt transport. Secretion of BS takes place against a high osmotic and chemical gradient (~5 μM inside the cell versus 1000 μM in the canalicular space). Gerloff et al. provided convincing evidence that a novel canalicular ABC transporter, named sister of P-glycoprotein (sPgp), effectively mediates ATP-dependent BS transport when overexpressed in the insect Sf9 cell line. 40 Consequent to this work, sPgp was renamed Bsep (Bile Salt Export Pump). 20 The rat Bsep is a 160 kDa protein closely related to the multidrug resistance (mdr1 and mdr2) gene products and exclusively located at the canalicular domain of hepatocytes. cDNAs from rat, mouse and humans have recently been cloned. 20 The identification of mutations in the BSEP gene as playing a role in human cholestatic liver disease (see section on “Clinical implications”) 8 provided further support to the concept that BSEP may be the predominant canalicular BS transporter. However, additional bile salt transporters may exist at the canalicular pole of liver cells, as suggested by the phenotype seen in mice with targeted inactivation of the Bsep gene. 21 Bsep-null mice have a dramatic impairment in biliary secretion of taurocholate when challenged intravenously with this bile salt but do not have a major decrease in basal bile secretion. Thus, these data suggest that Bsep is the main export pump for hydrophobic BS and that alternative transport mechanisms exist at the canalicular membrane of hepatocytes. 22 Transport of non-bile acid organic anions: the canalicular transport of non-bile acid organic anions (including conjugated bilirubin), as well as sulfated and glucuronidated BS, is carried out by a 190 kDa protein member of the multidrug resistance protein family (MRP), MRP2. MRPs are hepatocellular ABC export pumps that transport amphiphilic substrates to the extracellular space.
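As a back-of-the-envelope check on the energetics implied by the ~5 μM versus 1000 μM gradient quoted above, the minimum free energy for uphill transport can be computed as ΔG = RT ln(c_out/c_in). The sketch below assumes 37 °C and ideal-solution behavior (a simplification):

```python
import math

R = 8.314            # gas constant, J/(mol*K)
T = 310.15           # ~37 degrees C, in kelvin
c_in = 5e-6          # intracellular bile salt concentration, mol/l (~5 uM)
c_out = 1000e-6      # canalicular bile salt concentration, mol/l (~1000 uM)

# Minimum free energy per mole to move bile salt uphill against
# the concentration gradient: dG = R * T * ln(c_out / c_in)
dG = R * T * math.log(c_out / c_in)
print(f"{c_out / c_in:.0f}-fold gradient -> dG = {dG / 1000:.1f} kJ/mol")
```

The result, roughly 14 kJ/mol, is comfortably below the free energy released by ATP hydrolysis under cellular conditions, consistent with a single ATP-driven pump such as Bsep powering this step.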
MRP2 (previously named canalicular Multispecific Organic Anion Transporter, cMOAT) is the canalicular isoform, which was first functionally identified in naturally occurring mutant strains of rats that lack the capacity to excrete several organic anions and conjugated bilirubin. 44 Physiologically relevant substrates of MRP2 include glutathione-S-conjugates such as leukotrienes, monoglucuronosyl-bilirubin, bis-glucuronosyl-bilirubin, 17β-glucuronosyl-oestradiol and glutathione disulfide. 45 Evidence supporting an MRP2-mediated low-affinity transport of reduced glutathione, a major driving force for the so-called bile salt-independent bile flow, has been published. 23 25 MRP2 is the best studied representative of the so-far identified members of the MRP transporter family located in hepatocytes. MRP1 is expressed at a very low level in normal liver cells, while MRP3 and MRP6 are located, in contrast with MRP2, mainly at the basolateral/lateral membrane of hepatocytes. 26 In addition to MRP orthologues in mammals (human, rat, rabbit, and mouse), MRP family members have been identified in invertebrates such as the nematode Caenorhabditis elegans. 27 The MRP family of proteins may have a role in resistance against nucleoside analogues used in cancer chemotherapy. 27 In the liver, MRPs seem to play a widespread role in detoxification and in the regulation of paracellular and/or transcellular solute movement from blood into bile. In addition, given their capacity to transport glutathione conjugates and reduced glutathione, they might play a role in the hepatocyte’s defense against oxidative stress. 28 Transport of biliary lipids. Biliary lipid secretion serves as an excretory pathway for body cholesterol disposal and plays a major role in the intestinal absorption of dietary lipids through the formation of micelles from biliary phospholipids (mainly phosphatidylcholine).
Moreover, a cytoprotective role of biliary lipids against bile acid-induced injury to hepatocytes and biliary cells has been suggested. Secretion of biliary cholesterol and phosphatidylcholine is a very complex process that involves lipid supply to the canalicular pole of hepatocytes from either preformed or newly synthesized hepatic sources, and probably the detergent action of BS on the outer leaflet of the hepatocanalicular membrane. The transport processes involved in this excretory route are only partially known (for a recent review see refs. 29 30). However, the development of mutant mice with a targeted inactivation (knockout) of another liver ABC transporter, the multidrug resistance 2 gene product mdr2, led to the identification of this protein as a phospholipid translocator. mdr2 knockout mice do not have detectable phosphatidylcholine in bile and have significant liver pathology characterized by a nonsuppurative destructive cholangitis similar to that seen in some human cholestatic diseases. Moreover, a clinical variant of progressive familial intrahepatic cholestasis is probably due to mutations in the human orthologue of mdr2 (see section on “Clinical implications”). Studies using mdr2-null animals have led to a working model. This model considers secretion of lipids into bile to result from a coordinated interplay between secretion of BS, phosphatidylcholine translocation from the inner to the outer hemileaflet of the hepatocyte canalicular membrane, and detergent lipid extraction by luminal BS. In this theoretical model, cholesterol would diffuse passively from the canalicular membrane into biliary vesicles. This view has recently been challenged by the observation that Bsep knockout mice display a significant increase in the secretion of cholesterol and phospholipid into bile in spite of a significant reduction in the biliary secretion of hydrophobic BS.
31 This suggests that intracellular rather than intracanalicular mechanisms are involved in cholesterol efflux from the hepatocyte. Whether specific transport proteins located at the canalicular membrane of the hepatocyte participate in biliary cholesterol secretion remains unknown. Recent evidence suggests that members of a subfamily of ABC transporters (named ABCGs) may cooperate to promote biliary excretion of certain sterols. 22 32 Transport of cationic compounds. Information on canalicular transport of cationic compounds is less complete than for organic anions. It is possible that multiple organic cation transport systems with separate substrate specificities are involved in the biliary excretion of amphiphilic substances. Current evidence suggests that P-glycoprotein (P-gp), the gene product of the multidrug resistance 1 gene (mdr1), acts as a transporter of bulky cationic compounds and steroids. Mammalian P-gps are plasma membrane proteins belonging to the superfamily of ATP-binding cassette transporters, 33 which are specifically located at the apical pole of polarized epithelial cells like the enterocyte and hepatocyte. P-glycoprotein seems to be important in hepatobiliary excretion of xenobiotics and possibly in limiting uptake of hydrophobic drugs from the gut. No endogenous substrates have yet been identified for these proteins. The rat liver expresses two mdr genes, mdr1a and mdr1b. 19 Genetic ablation of the mdr1a gene does not result in significant pathology or changes in biliary composition but renders the animals hypersensitive to many drugs. 19 Double knockout mice (mdr1a/mdr1b-/-) also maintain normal bile flow, but have a marked reduction in biliary cation excretion. Thus, it is likely that these transporters are important for excretion of xenobiotics and endogenous metabolites. 34 Transport systems in cholangiocytes.
The organic and inorganic components of bile may be significantly modified by an array of absorptive mechanisms on the apical membrane of cholangiocytes. Biliary epithelial cells account for only 3-5% of the overall population of liver cells. However, they may contribute as much as 40% of the daily production of bile, depending on the species, by adding fluid and electrolytes to canalicular bile. Recently, a wealth of information has been accumulating on the biology of cholangiocytes from normal adult rodent and human livers. 5 Particularly relevant is the identification of several specific transport proteins in different membrane domains of biliary epithelia that seem to be relevant for ductal bile formation. 35 The cystic fibrosis transmembrane conductance regulator (CFTR) mediates secretion of chloride into the biliary tree. This process is tightly coupled to bicarbonate secretion. As chloride exits the cell, the cholangiocyte depolarizes, facilitating bicarbonate entry through the action of an electrogenic sodium-bicarbonate cotransporter. The rise in bicarbonate concentration stimulates the activity of an apical chloride/bicarbonate exchanger, in which luminal chloride is exchanged for intracellular bicarbonate, resulting in the secretion of bicarbonate into bile. In addition to CFTR-coupled bicarbonate secretion, the apical sodium-dependent bile salt transporter (ASBT), recently shown to be present in cholangiocytes, 5 may contribute to ductular secretion through the so-called “cholehepatic shunt”. This pathway involves reabsorption of unconjugated BS that first become protonated and, once inside the biliary cell, act as proton donors promoting the formation and secretion of bicarbonate from carbonic acid. An anion exchanger (AE isoform 2, AE2) can also contribute to bicarbonate secretion. 8 Finally, it has been demonstrated that cholangiocytes also express isoforms of the membrane water channels, aquaporins, at the apical and/or basolateral domains.
This supports the concept that transcellular water transport in biliary epithelia takes place through pore-forming intrinsic membrane proteins. 36 Regulation of hepatobiliary transporters: insights for the pathophysiology of cholestasis. Adaptive responses of hepatocytes to certain physiological states and to cholestatic liver injury include changes in the molecular expression and function of the hepatobiliary transport proteins. Experimental models of cholestasis such as bile duct ligation (obstructive cholestasis) and ethinylestradiol or endotoxin administration (hepatocellular cholestasis) are the models most commonly used to assess the expression of transport proteins. Down-regulation of the expression of Ntcp and Oatp1 has been reported in all three forms of cholestasis. 1 Decreased transcription rates of the Ntcp gene have been observed in obstructive cholestasis, and decreased binding activity of a critical nuclear transcription factor required for basal Ntcp gene expression occurs upon the injection of endotoxin. 37 The expression and function of canalicular transporters such as MRP2 and Bsep are also down-regulated in both extrahepatic and hepatocellular cholestasis. 38 However, the decrease in MRP2 expression is more pronounced than that of Bsep. The relative preservation of Bsep expression may help limit the extent of liver injury produced by bile salt retention. In contrast, studies assessing the expression of other canalicular transport proteins such as mdr1b P-glycoprotein have shown that this protein is up-regulated in obstructive cholestasis. In addition, an up-regulation of a basolateral ABC transporter, MRP3, is seen after bile duct ligation. 39 40 Interestingly, MRP3 is also up-regulated in a non-cholestatic experimental model, the Eisai hyperbilirubinemic rat, which may also be regarded as a compensatory mechanism occurring when the function of MRP2 is impaired. 37 Liver regeneration is also often associated with cholestasis.
The underlying molecular mechanisms seem to be related to down-regulation of hepatic transporters. Thus, several studies have assessed the expression of some transport proteins after partial hepatectomy in the rat. 91-93 Basolateral transporters including Ntcp, Oatp1 and Oatp2 are markedly down-regulated during early stages of regeneration. In contrast, protein and mRNA expression of two ABC transporters, Bsep and MRP2, remains unchanged. These modifications are transient, and values return to control levels 7-14 days after partial hepatectomy. The differential regulation of basolateral and canalicular organic anion transporters after partial hepatectomy provides a potential molecular mechanism by which the regenerating liver protects replicating liver cells, reducing the uptake of BS while maintaining biliary secretion of biliary constituents. 91 Injury that occurs after ischemia/reperfusion (I/R) of the liver is also associated with cholestasis and is a clinical problem in liver transplantation, hepatic surgery with inflow occlusion for trauma and cancer, and various types of shock. Bile production is frequently diminished when livers are reperfused following cold ischemia in patients undergoing orthotopic liver transplantation and, in fact, bile flow appears to be one of the most reliable parameters of hepatic ischemic damage. 40 ATP depletion, alterations in intracellular calcium regulation and the activation of phospholipases and proteases participate as mechanisms of ischemic injury. The reperfusion of the ischemic organ may aggravate ischemic injury 41 through the action of reactive oxygen species and other proinflammatory mediators produced by Kupffer cells and neutrophils. 42 In addition to hepatic injury by ischemia and reperfusion, other factors such as graft rejection, immunosuppressive therapy, biliary obstruction and sepsis can contribute to posttransplantation cholestasis.
43 Whether the expression or function of hepatobiliary transport proteins is altered during I/R of the liver is currently under investigation in our laboratory. 44 In summary, the molecular expression and function of several hepatobiliary transport proteins are altered under different pathophysiological conditions. Collectively, information on changes of hepatobiliary transporters in experimental models suggests that a general pattern of response takes place under conditions where bile secretion is impaired. This response involves down-regulation of the uptake of BS and other potentially toxic organic anions, with a relative preservation of bile salt excretion and up-regulation of export proteins such as P-glycoprotein and MRP3. Thus, decreased expression of Ntcp and other sinusoidal uptake proteins could represent a protective mechanism to prevent further uptake of BS. On the other hand, preservation of Bsep and increased expression of P-glycoproteins could represent a secondary response in an attempt to eliminate potentially toxic substances into bile. Up-regulation of MRP3 may be regarded as a compensatory mechanism occurring when the canalicular secretion of anionic conjugates by MRP2 is impaired. The underlying molecular mechanisms of the above-mentioned changes are unclear. However, recent evidence on the role of nuclear receptors as bile acid “sensors” inside the cell has important implications for understanding the regulation of both bile acid synthesis and transport. 37 Nuclear receptors are able to act in concert to turn bile acid synthesis on and off. They also seem to be key regulators of bile salt transport, controlling the expression of the membrane transport proteins that regulate the uptake and export of BA in the hepatocyte. The overall implication is that the hepatocyte can protect itself from an excess of BS by reducing both bile acid synthesis and import.
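The response pattern summarized in this paragraph can be captured as a small lookup table (directions paraphrased from the text; a sketch, not experimental data):

```python
# Direction of expression change under impaired bile secretion, as
# summarized in the text ("down" = down-regulated, "up" = up-regulated).
REGULATION_IN_CHOLESTASIS = {
    "Ntcp":  "down",  # sinusoidal bile salt uptake
    "Oatp1": "down",  # sinusoidal organic anion uptake
    "MRP2":  "down",  # canalicular organic anion export
    "Bsep":  "down, relatively preserved",  # canalicular bile salt export
    "P-glycoprotein (mdr1b)": "up",  # canalicular xenobiotic export
    "MRP3":  "up",    # basolateral overflow route
}

for transporter, direction in REGULATION_IN_CHOLESTASIS.items():
    print(f"{transporter:>24}: {direction}")
```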
45 Clinical implications. Molecular studies using experimental models of cholestasis are rapidly bridging basic science with clinical medicine. The concept that defects in gene expression of membrane transporters may result in cholestasis led to the search for genetic defects that cause or predispose to cholestatic disease. Thus, the identification of specific mutations in the so-called progressive familial intrahepatic cholestasis (PFIC) has established the molecular basis of a clinically important group of cholestatic disorders of infancy. On the basis of clinical findings, clinical-laboratory observations, morphologic studies and genetic analysis, three types of PFIC are now recognized. PFIC types 1 and 2 are characterized by cholestasis and a low to normal serum gamma-glutamyltransferase activity, whereas PFIC type 3 shows an elevated serum activity of the latter enzyme. 46 PFIC type 1 is associated with mutations in a single gene named FIC1. The FIC1 gene product is the first human member of a recently defined subfamily of P-type ATPases that are involved in ATP-dependent aminophospholipid transport. FIC1 is expressed in many epithelial cells including the liver, the biliary tree and the intestine. Its function is not yet known; it could participate in the transport of aminophospholipids from the outer to the inner leaflet of liver or biliary cell membranes. This could be relevant to the regulation of BS transport or the maintenance of the lipid composition of the canalicular membrane, but this assumption remains speculative. It has also been reported that some familial forms of recurrent intrahepatic cholestasis are linked to specific mutations in the FIC1 gene. 47 PFIC type 2 is related to mutations in the canalicular bile salt transporter (BSEP) gene. 46 Most mutations result in undetectable BSEP protein on the canalicular membrane, which is in line with the very low biliary BS levels seen in these patients.
Thus, defective canalicular transport results in ongoing liver injury through accumulation of BS inside the hepatocytes. 48 Patients with PFIC type 3 have clinical and histological characteristics different from those seen in the other groups of PFIC. A markedly elevated serum gamma-glutamyltransferase and extensive bile duct proliferation and portal fibrosis are hallmarks that resemble the phenotype seen in the mdr2 knockout mice. In fact, genomic DNA analysis of the MDR3 gene (which encodes the canalicular phospholipid transporter) in two PFIC3 patients showed gene mutations resulting in stop codons leading to complete absence of the gene product in the liver. 46 Since phospholipids have a cytoprotective role against bile acid-induced injury to hepatocytes and biliary cells, MDR3 deficiency leads to toxic damage of these cells because of reduced formation of mixed micelles and high concentrations of monomeric BS in the bile. 49 Information on the occurrence of mutations of specific transporters in PFIC patients has prompted the search for such molecular defects in other forms of cholestatic disease, such as adult and neonatal cholangiopathies. Interestingly, MDR3 gene expression is normal in patients with primary biliary cirrhosis, suggesting that defective expression of this gene is not involved in the pathogenesis of this disease. It has also been reported that mutations of MDR3 can be associated with recurrent familial intrahepatic cholestasis of pregnancy. 37 50 Dubin-Johnson syndrome is another example of how mutations in a hepatic transporter affect hepatic excretory function. This syndrome is a rare autosomal recessive disorder characterized by chronic conjugated hyperbilirubinemia and impaired hepatobiliary transport of non-bile salt organic anions, and is associated with mutations in the MRP2 gene.
As previously mentioned (see section on “canalicular transport”), MRP2 mediates ATP-dependent transport of a broad range of endogenous and xenobiotic compounds, including conjugated bilirubin, across the canalicular membrane of the hepatocyte. 46 In addition to hereditary alterations of hepatobiliary transport systems, data from experimental models of cholestasis suggest that decreased expression of membrane transporters may explain the impaired hepatic uptake and excretion of BS and organic anions seen in several cholestatic conditions. Clinical scenarios such as sepsis- and drug-induced cholestasis, intrahepatic cholestasis of pregnancy and obstructive cholestasis are examples where hepatic transporters can be defective. 37 Defects in cholangiocyte transport proteins may also play a role in some human diseases. Thus, it has been shown that the expression of AE2 is reduced in patients with PBC but not in other forms of cholestasis or liver cirrhosis, suggesting that the AE2 alteration is not a secondary effect of inflammation or cholestasis. In addition, mutations and/or functional defects of CFTR might have a role in acquired cholangiopathies, as occurs in cystic fibrosis. 51 37 An interesting and still under-explored area of research is the potential existence of genetic polymorphisms in xenobiotic transporters such as P-gp (MDR1) and MRP2. These polymorphisms may have great impact as cholestasis susceptibility factors, as well as factors modulating body drug disposal. Genetic epidemiological evaluation of candidate genes that may predispose to cholestasis is needed. Concluding remarks. The identification and functional characterization of a growing number of specific transport proteins present at the sinusoidal and canalicular membrane domains of hepatocytes and cholangiocytes represent a great advance in the understanding of the pathophysiology of certain cholestatic diseases.
Continuous progress in the field is expected in the next several years, which may help in the development of better diagnostic tools or the design of new therapeutic strategies for human cholestatic liver diseases. | [
"JANSEN",
"ERLINGER",
"SUCHY",
"ARRESE",
"STRAZZABOSCO",
"CRADDOCK",
"WALTERS",
"MEIER",
"SUZUKI",
"SUN",
"KULLAKUBLICK",
"CATTORI",
"KAMISAKO",
"GRUNDENMANN",
"MEIJER",
"TRIGATTI",
"AGELLON",
"COHEN",
"HOOIVELD",
"GERLOFF",
"STRAUTNIEKS",
"WANG",
"KONIG",
"PAULUSMA",
... |
0eb6e45f92024833844bc7ffe1fad1c1_Assessment of fluorescent protein candidates for multi-color flow cytometry analysis of Saccharomyce_10.1016_j.btre.2022.e00735.xml | Assessment of fluorescent protein candidates for multi-color flow cytometry analysis of Saccharomyces cerevisiae
| [
"Perruca-Foncillas, Raquel",
"Davidsson, Johan",
"Carlquist, Magnus",
"Gorwa-Grauslund, Marie F."
] | Transcription factor-based biosensors represent promising tools in the construction and evaluation of efficient cell factories for the sustainable production of fuels, chemicals and pharmaceuticals. They can notably be designed to follow the production of a target compound or to monitor key cellular properties, such as stress or starvation. In most cases, the biosensors are built with fluorescent protein (FP) genes as reporter genes because of the direct correlation between promoter activity and fluorescence level that can be measured using, for instance, flow cytometry or fluorometry. The expansion of available FPs offers the possibility of using several FPs - and biosensors - in parallel in one host, with simultaneous detection using multicolor flow cytometry. However, the technique is currently limited by the unavailability of combinations of FPs whose genes can be successfully expressed in the host and whose fluorescence signals can be efficiently distinguished from one another.
In the present study, the broad collection of available FPs was explored and four different FPs were successfully expressed in the yeast Saccharomyces cerevisiae: yEGFP, mEGFP, CyOFP1opt and mBeRFPopt. After studying their fluorescence signals, population heterogeneity and possible interactions, we recommend two original combinations of FPs for bi-color flow cytometry: mEGFP together with either CyOFP1opt or mBeRFPopt, as well as the combination of all three FPs mEGFP, CyOFP1opt and mBeRFPopt for tri-color flow cytometry. These combinations will make it possible to perform different types of bi-color or possibly tri-color flow cytometry and FACS experiments with yeast, such as phenotype evaluation, screening or sorting, by single-laser excitation with a standard 488 nm blue laser. | 1 Introduction Biobased production of fuels, chemicals and pharmaceuticals has become a key strategy in the development of more sustainable industrial processes. In this context, the yeast Saccharomyces cerevisiae has commonly been used as a platform organism due to its robustness to process conditions and the engineering opportunities offered through the numerous genetic tools available for this species. It is now possible to quickly generate thousands of enzyme and pathway combination variants to test for the production of a given compound in S. cerevisiae. However, when the best-performing variants cannot be selected on the basis of their growth pattern, screening for the best-performing strains remains a cumbersome task that slows down the overall process of obtaining optimal strains for microbial biobased production. One way to facilitate such screening is to use transcription factor-based biosensors [1] whose response is designed to be proportional to the production of the target compound.
For example, Skjoedt et al. showed the possibility of real-time monitoring of cis,cis-muconic acid (CCM) production in yeast by introducing a transcription factor-based biosensor in which the transcriptional activator BenM controlled GFP expression [2]. Similarly, the production of acetic acid could be monitored with a biosensor based on the transcription factor Haa1 [3]. Transcription factor-based biosensors have also been implemented in S. cerevisiae for measuring different cellular properties. One of the first examples was the DNA damage biosensor developed by Walmsley et al., in which GFP expression was controlled by the promoter of RAD54, a gene induced by exposure to UV, radiation or chemical agents such as methyl methanesulfonate [4]. Since then, a variety of biosensors have been developed to study cellular properties such as growth [5], redox state [6,7], sugar sensing [8] or the unfolded protein response during protein production [9]. A recent compilation of biosensors developed in yeast can be found in [1]. In transcription factor-based biosensors, the expression of a reporter molecule is controlled by a promoter whose induction or repression depends on the binding of one or several transcription factors to the promoter region. Fluorescent protein (FP) genes are often used as reporter genes because the induced or repressed state of the promoter of interest can be directly correlated to fluorescence that is detectable using, for instance, flow cytometry or microscopy. Among them, the yeast enhanced green fluorescent protein (yEGFP) is the most commonly used reporter molecule in S. cerevisiae, due to its high fluorescence levels [10]. Following the discovery of the green fluorescent protein (GFP) [11], many other fluorescent proteins offering a wide range of colors have been identified and/or developed.
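The transcription factor-based biosensors described above couple inducer concentration to reporter output; a generic way to model such a dose-response is a Hill function. The sketch below is illustrative only; its parameter values are not taken from the cited studies:

```python
def biosensor_response(ligand, f_min=1.0, f_max=100.0, k=0.5, n=2.0):
    """Generic Hill-type dose-response: fluorescence (arbitrary units)
    as a function of inducer concentration. f_min/f_max are basal and
    saturated outputs, k the half-maximal concentration, n the Hill
    coefficient - all illustrative assumptions."""
    return f_min + (f_max - f_min) * ligand**n / (k**n + ligand**n)

# Output rises from f_min toward f_max; at ligand == k it is halfway.
for c in (0.0, 0.5, 5.0):
    print(f"{c:4.1f} mM -> {biosensor_response(c):6.1f} a.u.")
```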
These fluorescent proteins are often classified based on their emission wavelength range; thus, they are defined across the visible spectrum as (i) blue fluorescent proteins (BFP), (ii) cyan fluorescent proteins (CFP), (iii) green fluorescent proteins (GFP), (iv) yellow fluorescent proteins (YFP), (v) orange fluorescent proteins (OFP) and (vi) red fluorescent proteins (RFP) [12]. FPs have been used in a variety of applications, from studies on host-parasite interactions - e.g., by obtaining GFP-expressing pathogenic bacteria and following their interactions with their hosts during infection [13] - to labeling of subcellular structures in mammalian cells [14]. Overall, many FP studies still focus on biomedical applications with the use of in vivo imaging by fluorescence microscopy. The expansion of available FPs offers the possibility of using several FPs - and biosensors - in one host, with simultaneous detection using multicolor flow cytometry. If successful, the implementation of multicolor flow cytometry opens the possibility of investigating several cellular properties of interest at once, as well as possible interactions between such properties. However, a major challenge in multicolor flow cytometry is the selection of an appropriate combination of FPs. Due to the limited space in the visible spectrum, the overlap between the emission spectra of different FPs can make it difficult to distinguish between the emission signals [15]. Operational adjustments, such as the selection of appropriate emission filters, are also necessary to optimize the simultaneous detection of multiple fluorescence signals [16]. In the present study, the broad collection of available FPs was explored to identify and assess FPs of potential use as reporters in biosensor systems for the yeast S. cerevisiae. Several constructs carrying different FPs as reporter molecules under the expression of the constitutive promoter TEF1p were generated and one copy of each of them was integrated into the S.
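The six color classes listed above can be sketched as a small lookup keyed on emission maximum. The wavelength boundaries below are rough assumptions for illustration, since the class limits vary between sources:

```python
# Approximate emission-peak ranges (nm) for the FP color classes named
# in the text; the exact boundaries are assumptions, not a standard.
FP_CLASSES = [
    (440, 470, "BFP"),   # blue
    (470, 500, "CFP"),   # cyan
    (500, 520, "GFP"),   # green
    (520, 550, "YFP"),   # yellow
    (550, 580, "OFP"),   # orange
    (580, 650, "RFP"),   # red
]

def classify_fp(emission_peak_nm: float) -> str:
    """Return the color class for an emission maximum, or 'other'."""
    for lo, hi, name in FP_CLASSES:
        if lo <= emission_peak_nm < hi:
            return name
    return "other"

print(classify_fp(507))  # a green-range emission peak
print(classify_fp(560))  # an orange-range emission peak
```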
cerevisiae genome. The fluorescence activity of the obtained strains, each containing one of the selected FPs, was analyzed using flow cytometry. Finally, possible novel combinations of FPs for multicolor flow cytometry were suggested and further evaluated. 2 Materials and methods 2.1 Strains and cultivation media The S. cerevisiae strains and shuttle plasmids used and developed in the study are listed in Tables 1 and 2, respectively. For sub-cloning experiments, Escherichia coli NEB5α competent cells from New England Biolabs (Ipswich, MA, USA) were also used. Liquid cultures of E. coli were performed in Lysogeny Broth (LB) medium containing 10 g/l tryptone, 5 g/l yeast extract, 5 g/l NaCl, pH 7.0. Selection of successful transformants was done on LB agar plates (LB + 15 g/l agar) supplemented with ampicillin (50 mg/l) and incubated overnight at 37 °C. Yeast strains were grown in Yeast Peptone Dextrose (YPD) medium containing 20 g/l peptone, 10 g/l yeast extract and 20 g/l glucose. Cultivations were performed at 30 °C and 180 rpm. Selection of transformants was done at 30 °C on YPD agar plates (YPD + 15 g/l agar) supplemented with geneticin (200 mg/l) and nourseothricin (100 mg/l) to select for the Cas9-kanMX and the gRNA-natMX plasmids, respectively. 2.2 Plasmid construction For the plasmids carrying the tetrameric yeast-adapted GFP gene (yEGFP; [17]), the TEF1p-yEGFP3 fragment was obtained by overlap extension PCR with primers TEF1_f and TEF1p-yEGFP_r_OE (using pNM001 as template) and primers TEF1p-yEGFP_f_OE and yEGFP_r_SfaAI (using YIplac211+yEGFP3 as template) (see Supplementary Table 1 for the primer list). Plasmids pRP001 and pRP005 were then obtained by introducing the amplified TEF1p-yEGFP3 fragment into pCfB2903 and pCfB2904, respectively, using PstI/SfaAI restriction sites. These plasmids were then used as a backbone in which the yEGFP-encoding gene was exchanged for other candidates using XhoI/SfaAI restriction sites.
The gene encoding for the monomeric yeast-adapted GFP (mEGFP; [18] ) was amplified from mEGFP-FKBP(M)x4 plasmid that was a gift from Benjamin Glick (Addgene plasmid # 85,004) using the TEF1p-mEGFP_f_OE and mEGFP_r_SfaAI primers (Suppl. Table 1). The obtained fragment was digested with XhoI and SfaAI and ligated into the linearized pRP001 using T4 DNA ligase (Thermo Fisher Scientific, Waltham, MA, USA). The same approach was used for the genes encoding mHoneydew [19] and yoTagRFP657 [20] . In the case of mHoneydew, the gene was amplified from the plasmid pNCS mHoneydew, which was a gift from Erik Rodriguez & Roger Tsien (Addgene plasmid # 91,760) using the YFP_f_XhoI and YFP_r_SfaAI primers (Suppl. Table 1). yoTagRFP657 was amplified from the plasmid pFA6a-link-yoTagRFP657-Kan, which was a gift from Wendell Lim & Kurt Thorn (Addgene plasmid # 44,955) using the RFP_f_XhoI and RFP_r_SfaAI primers (Suppl. Table 1). The genes encoding CyOFP1 [21] , mBeRFP [22] and smURFP [23] were codon-optimized and purchased from GenScript ( Piscataway, NJ, United States ). Codon-optimization was designed using the GeneArt tool from ThermoFisher and XhoI and SfaAI restriction sites were added at 5′ and 3′ ends respectively. The genes were extracted from the GenScript plasmid by cleaving with XhoI and SfaAI and further ligated into linearized pRP005. 2.3 Yeast strain engineering S. cerevisiae strains were generated using the CRISPR/Cas9 system developed by Jessop-Fabre et al., [24] . The cells were prepared and transformed using the high-efficiency LiAc protocol [25] . The background strain, a CEN.PK113–7D containing pCfB2312, was transformed with the plasmids pRP001, pRP002, pRP003, pRP004, pRP008, pRP009 and pRP013 linearized with NotI. This led to the generation of seven strains, each carrying one chromosomally integrated copy of the gene encoding for the respective FPs. 
For the strains containing two different FPs, the gene encoding mEGFP was introduced in the already existing TMBRP004 (CyOFP1opt) and TMBRP005 (mBeRFPopt) by linearization of pRP002 with NotI. The verification of transformants was done by colony PCR. Verification of proper integration of the FP in the XI-2 locus was performed using a primer annealing downstream of the integration site and an internal primer annealing on the FP. Primers yEGFP_f_ver and XI-2_ver_r were used to verify CEN.PK+pRFP1 transformants, primers mEGFP_f_ver and XI-2_ver_r were used to verify mEGFP integration in TMBRP014, TMBRP008 and TMBRP009, primers YFP_f_ver and XI-2_ver_r were used to verify mHoneydew integration in TMBRP015, primers RFP_f_ver and XI-2_ver_r for verification of yoTagRFP657 integration in TMBRP016 transformants and primers smURFPopt_f_ver and XI-3_ver_r for verification of smURFPopt integration in TMBRP012 transformants. In the case of integration of CyOFP1opt and mBeRFPopt, the verification was performed with two combined PCRs. First, the presence of the FP was verified using primers CyOFP1_opt_f and CyOFP1_opt_r for TMBRP004 and TMBRP008 whereas primers mBeRFP_opt_f and mBeRFP_opt_r were used for TMBRP005 and TMBRP009. Then, primers XI-3_ver_f and XI-3_ver_r, which anneal upstream and downstream of the integration site XI-3, were used to verify the location of the integration. Two positive transformants per generated strain were saved in glycerol stock and used for further experiments. 2.4 Flow cytometry experiments All strains were grown in 250 ml baffled shake flasks containing 25 ml YPD and with a starting OD 620 of 0.5. Samples were taken after 2, 3, 4, 5, 6, 7 and 24 h of cultivation. For in vivo stability studies, all strains were cultivated as mentioned above and the protein synthesis inhibitor, cycloheximide (10 µg/ml) or nourseothricin (200 µg/ml), was added after two hours of cultivation. 
Flow cytometry measurements were performed using a BD Accuri C6 flow cytometer equipped with a BD CSampler (BD Biosciences, Franklin Lakes, NJ, USA). The detection filters 510/15 nm, 585/40 nm, 610/20 nm and 675/25 nm were used to collect fluorescence emissions. All detection filters were purchased from BD Biosciences (Franklin Lakes, NJ, USA). Phosphate-buffered saline (PBS) was used to dilute the samples to OD 620 < 1.0 when necessary. For each sample, 10,000 events were recorded at medium speed (35 μL/min). A threshold of 80,000 in forward scatter height (FSC H) was applied to avoid background noise. A washing step was performed between each sample to avoid cross-contamination. The data analysis was performed using FlowJo™ v10.8.1 software (BD Life Sciences). Compensation was performed for the FL1-H, FL2-H and FL3-H parameters using FlowJo's compensation tool. Samples used as references for the compensation were TMBRP014 (mEGFP) for FL1-H, TMBRP004 (CyOFP1opt) for FL2-H and TMBRP005 (mBeRFPopt) for FL3-H. The background strain was used as negative control for all three parameters. The compensation matrix obtained was applied to the samples when applicable. 3 Results 3.1 Selection of fluorescent protein candidates A literature search was carried out to find fluorescent protein candidates that could be combined in S. cerevisiae with the well-known yEGFP to perform multiple fluorescence combinations. The criteria for the selection of the most suitable candidates were (i) the ability to be excited by the 488 nm or 640 nm lasers commonly found in flow cytometers, (ii) the compatibility of the emission spectrum with GFP and preferably with other candidates on the list and (iii) the suitability for recombinant expression in S. cerevisiae ( Table 3 ). Six candidates were identified from FPbase [26] (Table 3). In addition to the well-known yeast-enhanced GFP, yEGFP, a monomeric variant of GFP, mEGFP, was selected.
Among the YFPs, the protein mHoneydew was selected based on its emission wavelength (562 nm), which is potentially differentiable from that of EGFP (507 nm). CyOFP1 was selected among the OFPs because of its high brightness and its broad excitation spectrum (see Fig. 1 B), which allows it to be excited with a 488 nm laser at 96% of peak efficiency. Finally, three different RFPs were selected. The first one, mBeRFP, has its excitation peak at 446 nm, making its excitation with a 488 nm blue laser non-optimal, but still achievable with a 47% excitation efficiency. The other two RFPs, TagRFP657 and smURFP, can be excited using a standard 640 nm red laser instead. Both of them have their emission peak in the far-red region of the spectrum. No BFP was selected, due to the absence of a suitable laser and filter configuration in the in-house flow cytometer. 3.2 Assessment of in vivo fluorescence in S. cerevisiae In order to assess the fluorescence of the selected FPs in vivo in yeast, seven laboratory strains, each carrying one FP, were constructed: TMBRP013 (yEGFP), TMBRP014 (mEGFP), TMBRP015 (mHoneydew), TMBRP016 (yoTagRFP657), TMBRP004 (CyOFP1opt), TMBRP005 (mBeRFPopt) and TMBRP012 (smURFPopt). Each strain contained one copy of the respective FP-encoding gene integrated into the genome under the control of the constitutive TEF1 promoter, to ensure a comparison of fluorescence between FPs that are produced from similar expression levels. Two different loci were used for integration of the FPs: yEGFP, mEGFP, mHoneydew and yoTagRFP657 were integrated into XI-2, whereas CyOFP1opt, mBeRFPopt and smURFPopt were integrated into XI-3. To ensure that the integration site had limited impact on the observed fluorescence signal strength, yEGFP was integrated in parallel in both loci as a control. The results showed no significant difference in fluorescence, since a 140.4-fold and a 142.2-fold increase was detected for the integration in XI-2 and XI-3, respectively.
Fluorescence was measured after 3 h in yeast cells grown on YPD medium using the following non-standard detection filter configuration on the flow cytometer instrument: 510/15 nm on the FL1 position, 585/40 nm on the FL2 position, 610/20 nm on the FL3 position and 675/25 nm on the FL4 position. This configuration was designed to minimize spillover between recorded signals, and it enabled the visualization of potential overlapping signals from the same FP on different detection filters. The strains TMBRP015, TMBRP016 and TMBRP012, carrying mHoneydew, yoTagRFP657 and smURFP respectively, showed very low or no fluorescence (data not shown) and were not considered further. The results for the other four strains, TMBRP013 (yEGFP), TMBRP014 (mEGFP), TMBRP004 (CyOFP1opt) and TMBRP005 (mBeRFPopt), are presented in Fig. 1 A. The strains TMBRP013 carrying yEGFP and TMBRP014 carrying mEGFP showed high fluorescence emission at 510/15 nm (green channel), with a 134.3-fold and a 5.3-fold increase in fluorescence intensity, respectively, as compared to the background strain without any FP. Due to the very high fluorescence level for yEGFP, non-negligible fluorescence also spilled over to the neighbouring 585/40 nm (yellow channel) and 610/20 nm (orange channel) detection filters. Fluorescence from the yeast codon-optimized version of CyOFP1 expressed in TMBRP004 could be detected both at 585/40 nm (yellow channel) and 610/20 nm (orange channel) with a wide dynamic range. An 18.1-fold and a 15.5-fold increase in fluorescence were observed at 585/40 nm and 610/20 nm, respectively. In the case of the yeast codon-optimized version of mBeRFP (strain TMBRP005), a clear signal was observed at 610/20 nm (orange channel), with a 5.2-fold increase in fluorescence intensity and limited spillover into the neighbouring detection filters. The selected equipment configuration allowed the measurement of four FPs: yEGFP, mEGFP, CyOFP1opt and mBeRFPopt.
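The fold-change values reported here are ratios of fluorescence between an FP-carrying strain and the background strain, computed on events that pass the forward-scatter threshold. A minimal numpy sketch of that calculation follows; the event arrays, the median statistic and all numbers in the toy data are illustrative assumptions, not the paper's actual analysis.

```python
import numpy as np

def median_fold_change(sample_fl, control_fl, sample_fsc, control_fsc,
                       fsc_threshold=80_000):
    """Fold change in median fluorescence over a non-fluorescent control,
    after discarding events below the forward-scatter (FSC H) threshold."""
    s = sample_fl[sample_fsc >= fsc_threshold]
    c = control_fl[control_fsc >= fsc_threshold]
    return np.median(s) / np.median(c)

# Toy data: a fluorescent sample exactly 100x brighter than the control.
rng = np.random.default_rng(0)
fsc = rng.uniform(50_000, 2_000_000, 10_000)          # scatter heights
ctrl = rng.lognormal(mean=3.0, sigma=0.4, size=10_000)  # autofluorescence
samp = ctrl * 100
print(round(median_fold_change(samp, ctrl, fsc, fsc), 1))  # prints 100.0
```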
Although not optimal for all of them, the four FPs could all be excited by the 488 nm blue laser ( Fig. 1 B), making a single excitation laser sufficient. The excitation efficiency for mBeRFP at 488 nm was non-optimal, at 47%, and further improvements could be achieved if desired, e.g., by adding a second blue excitation laser at 458 nm. Nevertheless, the excitation obtained with the 488 nm laser was enough to obtain a distinct signal. 3.3 Combination of several fluorescent proteins in the same strain For combinations of FPs to be suitable, the fluorescence intensity observed in all channels had to be considered to avoid significant overlapping of signals from different FPs in the same channel. The yEGFP signal was so strong that fluorescence was recorded at 585/40 nm (FL2, yellow channel) and at 610/20 nm (FL3, orange channel), thereby potentially masking the signals obtained from CyOFP1opt or mBeRFPopt, which deliver a much weaker signal than yEGFP. In contrast, the weaker mEGFP only gave a significant signal at 510/15 nm, leaving the other channels free for measuring the other FPs. Therefore, mEGFP was considered a better option for combinations than yEGFP, even though the signal at 510/15 nm (FL1, green channel) was much stronger for yEGFP. CyOFP1opt could be detected both at 585/40 nm and at 610/20 nm, but there was no signal at 510/15 nm ( Fig. 1 A). Thus, a combination of mEGFP and CyOFP1opt was of interest. Similarly, mBeRFPopt showed a high fluorescence increase at 610/20 nm but no signal at 510/15 nm, enabling the combination of mBeRFPopt with mEGFP. Consequently, two new strains were constructed in which these two combinations of FPs were attempted: TMBRP008 (mEGFP+CyOFP1opt) and TMBRP009 (mEGFP+mBeRFPopt).
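The channel-occupancy reasoning used to pick these combinations can be expressed as a small compatibility check. The per-channel fold changes below are the ones quoted in the text, except the yEGFP spillover values, which are illustrative assumptions; the 2-fold significance threshold is also an assumption.

```python
# Per-FP fold change over background in each detection filter. yEGFP's
# spillover magnitudes into 585/40 and 610/20 are assumed, not from the text.
signals = {
    "yEGFP":     {"510/15": 134.3, "585/40": 20.0, "610/20": 15.0},
    "mEGFP":     {"510/15": 5.3},
    "CyOFP1opt": {"585/40": 18.1, "610/20": 15.5},
    "mBeRFPopt": {"610/20": 5.2},
}

def compatible(fp_a, fp_b, threshold=2.0):
    """Two FPs are combinable without compensation if they give no
    significant signal (fold change above threshold) in a shared channel."""
    a = {ch for ch, fc in signals[fp_a].items() if fc > threshold}
    b = {ch for ch, fc in signals[fp_b].items() if fc > threshold}
    return not (a & b)

print(compatible("mEGFP", "CyOFP1opt"))      # True
print(compatible("mEGFP", "mBeRFPopt"))      # True
print(compatible("yEGFP", "CyOFP1opt"))      # False: yEGFP spills into 585/40
print(compatible("CyOFP1opt", "mBeRFPopt"))  # False: both occupy 610/20
```

The last line also mirrors why the tri-color combination (which pairs CyOFP1opt with mBeRFPopt) needs compensation, as described below.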
The signals, obtained under the same conditions as above, for TMBRP008 (mEGFP+CyOFP1opt) were consistent with those of TMBRP014 (mEGFP) and TMBRP004 (CyOFP1opt) separately (see Fig. 1 A). For mEGFP, a 5.3-fold increase was detected in the single-FP strain TMBRP014 (mEGFP) at 510/15 nm ( Fig. 1 A), compared to a 5.8-fold increase in the case of the double-FP strain TMBRP008 (mEGFP+CyOFP1opt) ( Fig. 1 A). For the orange FP (CyOFP1opt), similar 18.1- and 18.9-fold increases in fluorescence were detected at 585/40 nm for TMBRP004 (CyOFP1opt) and TMBRP008 (mEGFP+CyOFP1opt), respectively. With the other detection filter (610/20 nm), close values of 15.5-fold versus 16.8-fold, respectively, were also recorded for the same two strains, whereas no significant fluorescence signal was detected at 675/25 nm (red channel). Due to the broad excitation spectrum of CyOFP1opt, the emission from mEGFP could be a source of excitation for CyOFP1opt when expressed together (see Fig. 1 B); however, this was not observed, confirming that no interference occurred between the two fluorescent proteins. A similar pattern was observed when assessing the combination of mEGFP with the red FP mBeRFPopt. In this case, the values observed were slightly higher, with a 7.9-fold increase at 510/15 nm (green channel) for TMBRP009 (mEGFP+mBeRFPopt), compared to the 5.3-fold increase shown by TMBRP014 (mEGFP). At 610/20 nm (orange channel), TMBRP005 (mBeRFPopt) showed a 5.2-fold increase, whereas TMBRP009 (mEGFP+mBeRFPopt) showed a 7.4-fold increase in fluorescence. 3.4 Differentiation of FP signals To further investigate the interaction between the FPs mEGFP, CyOFP1opt and mBeRFPopt, a study of the distribution of the populations was performed. A sample containing a mixture of three strains, TMBRP014 (mEGFP), TMBRP004 (CyOFP1opt) and TMBRP005 (mBeRFPopt), was analyzed ( Fig. 2 ). First, the strains were analyzed separately and their concentration was quantified.
This allowed the generation of a mixture containing the same number of cells of each of the three strains. An initial selection of cells in the sample was made based on size by plotting forward scatter height (FSC H) vs side scatter height (SSC H). The fluorescence of the cells was analyzed by plotting FI at 510/15 nm vs FI at 585/40 nm, which allowed the clear identification of the CyOFP1opt-carrying population due to its high FI at 585/40 nm. Then the rest of the cells, designated as mEGFP+mBeRFP, were plotted as FI at 510/15 nm vs FI at 610/20 nm to make the distinction between the two populations clearer. The mBeRFPopt-carrying population was identified by its high FI at 610/20 nm, whereas a high signal at 510/15 nm was detected for the mEGFP-carrying population. The population distribution obtained by the gating strategy was 31.8% for TMBRP014 (mEGFP), 33.2% for TMBRP004 (CyOFP1opt) and 30.4% for TMBRP005 (mBeRFPopt), which was in agreement with the expected 33% for each of the strains. Although the strains' fluorescence emissions were distinguishable from each other and a gating strategy for the identification of each FP's signal was designed ( Fig. 2 ), further improvements were attempted. Diagonally shaped distributions were observed, especially for CyOFP1opt in the 585/40 nm vs 610/20 nm plot ( Fig. 3 A); this happens when spillover occurs, i.e., the CyOFP1opt signal being recorded in the orange channel (610/20 nm filter). To avoid this problem, compensation of the FL1-H, FL2-H and FL3-H parameters was performed using FlowJo's compensation tool. Samples containing individual FPs were used as positive controls: mEGFP for FL1-H (510/15 nm), CyOFP1opt for FL2-H (585/40 nm) and mBeRFP for FL3-H (610/20 nm). In contrast, the background strain was used as a negative control for all parameters. As a result, the fluorescence signals were corrected and the diagonal appearance of the FI distributions was no longer visible ( Fig. 3 B).
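Spillover compensation of this kind amounts to inverting a spillover matrix estimated from the single-FP control samples. A sketch of the underlying linear algebra, with made-up spillover fractions (this is not the matrix computed by FlowJo):

```python
import numpy as np

# Spillover matrix: rows = FPs (mEGFP, CyOFP1opt, mBeRFPopt), columns =
# channels (510/15, 585/40, 610/20). Off-diagonal entries are illustrative,
# e.g. CyOFP1opt spilling strongly into the orange channel as in Fig. 3A.
M = np.array([
    [1.00, 0.05, 0.02],   # mEGFP
    [0.00, 1.00, 0.85],   # CyOFP1opt
    [0.00, 0.10, 1.00],   # mBeRFPopt
])

true_signal = np.array([200.0, 500.0, 300.0])   # per-FP contributions
observed = true_signal @ M                       # what the detectors record
compensated = observed @ np.linalg.inv(M)        # undo the spillover
print(np.allclose(compensated, true_signal))     # True
```

After this correction the CyOFP1opt signal no longer leaks into the orange channel, which is why the diagonal shapes in the bivariate plots disappear.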
The outcome of the compensation was further tested by recalculating the fold-change in FI observed in the different strains with the compensated values for FI after applying the compensation matrix (Suppl. Table 2). This led to only one signal that was clearly visible in the strains expressing one FP TMBRP014 (mEGFP), TMBRP004 (CyOFP1opt) and TMBRP005 (mBeRFPopt) in their corresponding detection filters, 510/15 nm, 585/40 nm and 610/20 nm, respectively ( Fig. 3 C). Likewise, TMBRP008 (mEGFP+CyOFP1opt) showed two clear signals at 510/15 nm and 585/40 nm whereas TMBRP009 (mEGFP+mBeRFPopt) showed them at 510/15 nm and 610/20 nm. As a consequence of compensation, a reduction in fold-change FI in the red channel was observed for the strains carrying CyOFP1opt, TMBRP004 and TMBRP008 as opposed to the values before compensation ( Fig. 1 A). These results confirmed the compatibility between the fluorescent proteins making the combination of mEGFP, CyOFP1opt and mBeRFPopt a good candidate for simultaneous measurement of three fluorescence emissions in flow cytometry. 3.5 Fluorescent protein impact and properties Possible interference caused by the expression of the introduced FPs was assessed by growth experiments on YPD medium. The addition of the fluorescent reporters showed no effect on the growth pattern for all the constructed strains, TMBRP013 (yEGFP), TMBRP014 (mEGFP), TMBRP004 (CyOFP1opt) and TMBRP005 (mBeRFPopt), as compared to the background strain ( Fig. 4 A). This highlighted the non-invasiveness of the introduced reporter(s). Morphological changes in the cell population were studied by looking at the forward scatter (FSC H), which correlates with the size of the cells. A pattern corresponding to the budding processes was observed where the FSC H initially increased in exponential phase and decreased towards the stationary phase (Suppl. Fig. 1 ). This pattern was observed for all strains, i.e. 
independently of the FP used, indicating that the expression of FPs did not induce any morphological changes. Next, the response of the four tested FPs, yEGFP, mEGFP, CyOFP1opt and mBeRFPopt, expressed under the constitutive TEF1 promoter, was followed over time to map the dynamics of each fluorescence signal. From the study of Peng et al., in which TEF1 promoter activity was followed over time by GFP expression under similar conditions (20 g/l of glucose) [9] , an initial increase in fluorescence was expected during the fermentative phase, followed by a decrease initiated during the diauxic shift which continued during the ethanol consumption phase, reaching low levels after 24 h. In the present experiment, a similar pattern was observed for all four FPs during the first seven hours, with an increase in fluorescence that reached its highest point at 3 h of cultivation ( Fig. 4 B); from this point, the fluorescence decreased progressively. However, a major difference was observed after 24 h. In the strains carrying GFPs, mEGFP and yEGFP, an abrupt decrease in fluorescence was observed, whereas the fluorescence in the strains carrying CyOFP1opt and mBeRFPopt stabilized and remained high after 24 h of cultivation. To further assess the in vivo stability and half-life of the different FPs, a protein synthesis inhibitor was added to the medium and the evolution of the fluorescence signal was recorded from that point. Since new proteins could not be synthesized and no growth was possible either, it was assumed that the decrease in fluorescence would correspond to the degradation of the available protein. Cycloheximide was first used as a protein synthesis inhibitor, as it is commonly used for half-life determination [27] . Both GFPs, mEGFP and yEGFP, showed a decrease in fluorescence after the addition of cycloheximide ( Fig. 5 A). Unexpectedly, CyOFP1opt and mBeRFPopt showed an increase in fluorescence intensity after cycloheximide addition ( Fig. 5 A).
To elucidate whether the response was substance-dependent or not, nourseothricin was tested as an alternative protein synthesis inhibitor. The response was consistent with that of cycloheximide: both GFPs decreased in fluorescence, and the CyOFP1opt and mBeRFPopt fluorescence signals increased over time ( Fig. 5 B). Both GFPs, yEGFP and mEGFP, showed similar degradation patterns, which were further confirmed by estimating a half-life of ca. 22 h in both cases from the slope of a linear regression fitted to the linearized fluorescence curve. No calculation of half-life was possible for CyOFP1opt and mBeRFPopt, due to the unexpected behavior of these proteins, which remains to be explained. 3.6 Fluorescent proteins for population heterogeneity studies One of the strongest advantages of using flow cytometry, as compared to fluorometry, is the possibility of obtaining single-cell measurements that can give information on the heterogeneity of the studied population. In the present study, the distribution of fluorescence was Gaussian-like with a slight skew toward higher fluorescence, and no subpopulations with low fluorescence arising from dead cells or poor expression were observed. The distribution of FP expression was assessed by the coefficient of variance (CV), which is the ratio between the standard deviation of the distribution and its mean fluorescence. Similarly, the robust coefficient of variance (rCV), which is less affected by possible outliers in the population, was used. All strains expressing the different FPs showed similar rCV values, ranging from 31% to 46%, which suggests that the expression of the fluorescent proteins yEGFP, mEGFP, CyOFP1opt and mBeRFPopt was similarly distributed within the population. The fluorescent protein mBeRFPopt showed a slightly higher rCV (p-value < 0.05) than the rest of the FPs ( Fig. 4 C).
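The CV and rCV statistics can be computed directly from the event-level fluorescence intensities. A short sketch with synthetic data follows; the percentile-based rCV formula is one common robust definition and may not match FlowJo's implementation exactly.

```python
import numpy as np

def cv(x):
    """Coefficient of variance: standard deviation over mean, in percent."""
    return 100 * np.std(x) / np.mean(x)

def rcv(x):
    """Robust CV: half the 15.87th-84.13th percentile spread over the
    median, in percent (these percentiles match +/- one standard deviation
    for a normal distribution)."""
    p16, p84 = np.percentile(x, [15.87, 84.13])
    return 100 * (p84 - p16) / (2 * np.median(x))

# Synthetic FI distribution with ~35% spread, inside the paper's 31-46% range.
rng = np.random.default_rng(1)
fi = rng.normal(1000, 350, 100_000)
print(f"{cv(fi):.0f} {rcv(fi):.0f}")  # both ≈ 35
```

For a symmetric, outlier-free distribution the two statistics agree; rCV diverges less than CV when a few extreme events are present.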
Furthermore, an increase in the width of the distributions was observed at the end of the cultivation (24 h), especially in the case of yEGFP and mEGFP ( Fig. 4 C). 4 Discussion The present study identifies original combinations of fluorescent proteins that can be used in the yeast S. cerevisiae for the simultaneous assessment of different cellular properties or to simultaneously screen for several phenotypes, for instance the production of different compounds that are recognized by corresponding biosensors. Many FPs are available, and databases such as FPbase offer a vast compilation of data on their FP collection, including properties such as emission spectrum, brightness or molecular weight [26] , which facilitates the selection of an FP of interest. However, most of the available information comes from in vitro studies of native proteins or of proteins produced in a limited range of hosts. This makes in vivo characterization of the FP in the intended microorganism a necessary step to establish the suitability of said FP [28] . In our study, we demonstrate the need for this step, as seven types of FPs were selected for multicolour detection based on their emission spectra, but only four of them resulted in detectable fluorescence intensity when one copy of the corresponding gene was integrated and expressed in S. cerevisiae . The absence of a distinct signal in the strains carrying mHoneydew (YFP) and the RFPs yoTagRFP657 and smURFP could be due to different factors. In the case of mHoneydew and yoTagRFP657, their previous expression in S. cerevisiae from a multicopy plasmid had been successful, yielding detectable fluorescence intensities (Hagman, personal communication). However, the integration of one copy was not sufficient here to give a distinct fluorescence signal. This could arise from their low brightness, 2.04 and 6.4 for mHoneydew and yoTagRFP657, respectively, compared to 33.6 for GFP.
In the case of the RFP smURFP, on the other hand, its brightness (32.4), comparable to that of GFP, makes low brightness an unlikely cause of the absence of fluorescence. smURFP was evolved from a cyanobacterial phycobiliprotein that uses biliverdin as a cofactor. Since smURFP has been successfully expressed in E. coli and mammalian cells [29] , the challenges observed in S. cerevisiae may come from the absence of this cofactor. However, our first trials to add a multicopy plasmid carrying the corresponding HO-1 gene [29] to increase the biliverdin supply did not help in increasing the fluorescence signal in S. cerevisiae (data not shown). Using flow cytometry and several FPs, we demonstrate that different populations can be detected within the same sample based on their fluorescence type. This is of interest in fluorescence microscopy, where the tagging of multiple cellular components or fusion proteins using FPs can be followed; in that case, properties such as small size or high stability are desired for the chosen FPs, to obtain translational fusions with a minimal imprint and to prolong their study over time thanks to their long half-lives. Instead, when developing biosensors for population screening or to follow the dynamics of phenotypic properties, the FPs used should have sufficient fluorescence intensity to offer a wide range of detection but half-lives short enough to report dynamic changes. Ideally, a dynamic biosensor should be able to report both induction, where an increase in fluorescence would be observed, and repression, where a decrease in fluorescence would be expected. From our results, we can observe that the four FPs tested are too stable for dynamic reporting of down-regulation events. In the case of both GFPs, an estimated half-life of ca. 22 h was obtained. This result, which differs from the previously reported half-life of ca. 7 h for GFP [ 30 , 31 ] , could result from the calculation method used or from the experimental conditions.
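The half-life calculation discussed here is a log-linear fit of the decay curve: with first-order degradation, ln(F) falls linearly with time and the half-life is ln(2) divided by the slope. A minimal sketch on synthetic data (the sampling times and values are invented; only the ~22 h half-life matches the text):

```python
import numpy as np

def half_life_hours(t_hours, fluorescence):
    """Fit ln(F) = ln(F0) - k*t by least squares; return t_1/2 = ln(2)/k."""
    slope = np.polyfit(t_hours, np.log(fluorescence), 1)[0]
    return np.log(2) / -slope

# Synthetic first-order decay with a 22 h half-life, as estimated for both GFPs.
t = np.arange(0, 25, 1.0)
f = 1000.0 * 0.5 ** (t / 22.0)
print(round(half_life_hours(t, f), 1))  # prints 22.0
```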
In the case of CyOFP1opt and mBeRFPopt, an estimation of their half-lives was unfortunately not possible to obtain, due to the unexpected and unexplained increase in fluorescence when exposed to protein synthesis inhibitors ( Fig. 5 ). Initial hypotheses for the observed increase in FI were (i) the monomeric nature of these proteins, (ii) possible interactions with the inhibitor or (iii) the origin of the fluorescence in those detection filters (585/40 nm and 610/20 nm) being cellular components and not the fluorescent protein. However, these were later discarded, as (i) it was not the case for mEGFP, also a monomeric variant, (ii) two different inhibitors were tested and showed a similar pattern and (iii) no increase in those detection filters was observed for the strains carrying yEGFP or mEGFP. Further work to elucidate this behavior is, however, beyond the scope of the present study. Nevertheless, the fluorescence intensity remained constant after 24 h of cultivation ( Fig. 4 B), thus suggesting a much longer half-life than that of GFP. The high stability of FPs is a known issue that has been addressed by generating destabilized variants such as yEGFP3-Cln2 PEST [31] . Although the half-life of yEGFP3-Cln2 PEST has been greatly decreased, to ca. 30 min, it relies on the ubiquitin-dependent degradation system in yeast, which could potentially interfere with cell metabolism, rendering it unsuitable for non-invasive biosensor monitoring applications. In our study, four different FPs were successfully expressed in S. cerevisiae and their fluorescence signals were detected, with the objective of finding suitable combinations for simultaneous detection. The first one was yEGFP, which is widely used in yeast research, and specifically in transcription factor-based biosensors [10] , since it offers a broad dynamic range due to its high levels of fluorescence.
However, we show that yEGFP is not optimal for multicolour flow cytometry purposes due to its high spillover signal into the rest of the detection filters. Instead, its monomeric variant, mEGFP, still shows a strong green fluorescence signal without spilling over into other detection filters, making it a better candidate for multicolour flow cytometry. The other two FPs, CyOFP1opt and mBeRFPopt, also gave reasonably high signals in their respective emission channels. Since a key aspect of multicolour flow cytometry is to minimize the overlap between the different fluorochromes [32] , we recommend two original combinations of FPs for bi-color flow cytometry: mEGFP together with either CyOFP1opt or mBeRFPopt. It is also possible to use the combination of all three FPs, mEGFP, CyOFP1opt and mBeRFPopt, for tri-color flow cytometry; in the latter case, the additional use of compensation methods is recommended to obtain much cleaner signals and avoid spillover. Combinations of up to four FPs have already been achieved in S. cerevisiae , including the use of FPs such as mTagBFP, mCherry, TagRFP-T, CFP or YFP; however, these were all optimized for live-cell imaging studies [ 20 , 33 , 34 ]. While the present study was being compiled, a parallel study was published with a combination of up to three FPs, mTurquoise2, mCherry and YmPET, whose genes were co-expressed in S. cerevisiae and the fluorescence recorded using a BioLector [15] . All these approaches have in common the need for several excitation lasers, including a violet (405 nm) or yellow (561 nm) laser, which are not commonly available in commercial flow cytometers. In contrast, our approach uses the fluorescent proteins mEGFP, CyOFP1opt and mBeRFPopt, which are all excited with a 488 nm blue laser. This opens up the possibility of performing multicolour flow cytometry by single-laser excitation with a 488 nm excitation laser, which is provided in all simple flow cytometers as the primary excitation source [35] .
To develop combinations of multiple FPs in flow cytometry to their full potential, the secondary excitation source should also be used. Since a red laser is commonly used as a secondary excitation source [35] , the far-red region of the visible spectrum should be further exploited. Declarations of Competing Interest None. Acknowledgements We would like to thank Arne Hagman and Nina Muratovska for providing the plasmids mEGFP-FKBP(M)x4, pNCS mHoneydew and pFA6a-link-yoTagRFP657-Kan and pNM001, respectively. This work was financially supported by the Swedish National Energy Agency (Energimyndigheten, Project No. P46581–1 ). Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.btre.2022.e00735 . | [
"DAMBROSIO",
"SKJOEDT",
"MORMINO",
"WALMSLEY",
"CARLQUIST",
"KNUDSEN",
"ZHANG",
"BRINK",
"PENG",
"ADENIRAN",
"PRASHER",
"SHANER",
"KREDEL",
"TORELLOPIANALE",
"PIATKEVICH",
"CORMACK",
"ZACHARIAS",
"SHANER",
"LEE",
"CHU",
"YANG",
"RODRIGUEZ",
"JESSOPFABRE",
"GIETZ",
"LA... |
c250cf24cd6144f59ac4cc31ca5e05fd_Extended-spectrum antibiotics for community-acquired pneumonia with a low risk for drug-resistant pa_10.1016_j.ijid.2022.09.015.xml | Extended-spectrum antibiotics for community-acquired pneumonia with a low risk for drug-resistant pathogens | [
"Kobayashi, Hironori",
"Shindo, Yuichiro",
"Kobayashi, Daisuke",
"Sakakibara, Toshihiro",
"Murakami, Yasushi",
"Yagi, Mitsuaki",
"Matsuura, Akinobu",
"Sato, Kenta",
"Matsui, Kota",
"Emoto, Ryo",
"Yagi, Tetsuya",
"Saka, Hideo",
"Matsui, Shigeyuki",
"Hasegawa, Yoshinori"
] | Objectives
The potential hazards of extended-spectrum antibiotic therapy for patients with community-acquired pneumonia (CAP) with low risk for drug-resistant pathogens (DRPs) remain unclear; however, risk assessment for DRPs is essential to determine the initial antibiotics to be administered. The study objective was to assess the effect of unnecessary extended-spectrum therapy on the mortality of such patients.
Methods
A post hoc analysis was conducted after a prospective multicenter observational study for CAP. Multivariable logistic regression analysis was performed to assess the effect of extended-spectrum therapy on 30-day mortality. Three sensitivity analyses, including propensity score analysis to confirm the robustness of findings, were also performed.
Results
Among 750 patients with CAP, 416 with CAP with a low risk for DRPs were analyzed; of these, 257 underwent standard therapy and 159 underwent extended-spectrum therapy. The 30-day mortality was 3.9% and 13.8% in the standard and extended-spectrum therapy groups, respectively. Primary analysis revealed that extended-spectrum therapy was associated with increased 30-day mortality compared with standard therapy (adjusted odds ratio 2.82; 95% confidence interval 1.20-6.66). The results of the sensitivity analyses were consistent with those of the primary analysis.
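The reported rates correspond to roughly 22 of 159 deaths under extended-spectrum therapy and 10 of 257 under standard therapy. A small sketch (counts inferred from the percentages, not taken from the paper's tables) shows the unadjusted odds ratio these figures imply; the adjusted odds ratio of 2.82 then additionally controls for confounders:

```python
def odds_ratio(deaths_exposed, n_exposed, deaths_control, n_control):
    """Unadjusted odds ratio of death: exposed group vs control group."""
    odds_exposed = deaths_exposed / (n_exposed - deaths_exposed)
    odds_control = deaths_control / (n_control - deaths_control)
    return odds_exposed / odds_control

# 13.8% of 159 (extended-spectrum) ≈ 22 deaths; 3.9% of 257 (standard) ≈ 10.
print(round(odds_ratio(22, 159, 10, 257), 2))  # prints 3.97
```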
Conclusion
Physicians should assess the risk for DRPs when determining the empirical antibiotic therapy and should refrain from administering unnecessary extended-spectrum antibiotics for patients with CAP with a low risk for DRPs. | Introduction Pneumonia is one of the most common and life-threatening diseases worldwide ( World Health Organization, 2020 ). Previous studies have demonstrated that the administration of inappropriate initial antibiotics can lead to adverse outcomes, including death ( Kumar et al., 2009; Shindo et al., 2009; Tumbarello et al., 2013 ). Nevertheless, physicians often suggest the administration of unnecessary extended-spectrum antibiotics, such as antipseudomonal and antimethicillin-resistant Staphylococcus aureus (MRSA) drugs, for patients with pneumonia to avoid any potential delays in appropriate antibiotics administration ( Klompas, 2020 ). Some studies have revealed that the use of extended-spectrum antibiotics for patients with community-acquired pneumonia (CAP), including healthcare-associated pneumonia (HCAP), was associated with increased mortality ( Attridge et al., 2011; Jones et al., 2020; Webb et al., 2019 ). To accomplish the appropriate initial antibiotic treatment for patients with pneumonia, risk assessment for drug-resistant pathogens (DRPs) is essential ( Aliberti et al., 2012; Aliberti et al., 2021; Kobayashi et al., 2018; Shindo et al., 2013; Shorr et al., 2012; Webb et al., 2016 ). Recent international guidelines on CAP have highlighted the importance of this evaluation for the selection of initial antibiotics ( Metlay et al., 2019 ). The 2019 American Thoracic Society (ATS)/Infectious Diseases Society of America (IDSA) CAP guidelines recommend the following strategy to determine the initial antibiotics to administer: an initial assessment of disease severity and evaluation of previous respiratory isolation of DRPs, including Pseudomonas aeruginosa and MRSA, followed by the assessment of the risk factors for DRPs in patients with severe CAP ( Metlay et al., 2019 ). In terms of DRP risk assessment, differences in regional prevalence should be taken into consideration ( Metlay et al., 2019; Shindo and Hasegawa, 2017 ). In the last decade, several research groups have proposed different prediction models for DRPs ( Aliberti et al., 2012; Prina et al., 2015; Shindo et al., 2013; Shorr et al., 2012; Webb et al., 2016 ). However, their prediction of patients at high risk for DRPs may not be sufficient, whereas their predictive performance to identify patients at low risk was high ( Kobayashi et al., 2018; Webb et al., 2016 ). Although the guidelines suggest the previously mentioned treatment approach, the association between adherence to this strategy and patient outcomes is relatively unclear. Furthermore, to the best of our knowledge, evidence of the detrimental effects of the unnecessary use of extended-spectrum antibiotics in patients with CAP with a low risk for DRPs is scarce and requires further evaluation. We hypothesized that the unnecessary use of extended-spectrum antibiotics for patients with CAP with a low risk for DRPs is associated with increased mortality. This study aimed to clarify the effect of extended-spectrum antibiotics use in patients with CAP with a low risk for DRPs according to the treatment strategies of the 2019 ATS/IDSA CAP guidelines. Patients and methods Study design and setting This study was a post hoc analysis based on a prospective observational study that was performed at four medical institutions in Japan (one 1000-bed university hospital and three major community hospitals with more than 500 beds) from April 1, 2013 to March 31, 2014. This study was approved by the ethics review committee of Nagoya University School of Medicine (number, 2019-0312) and the respective institutional review boards of the participating institutions.
The study was registered with the University Hospital Medical Information Network (registration number UMIN000009837). The protocol of this study was in accordance with the Declaration of Helsinki and the Japanese Ethics Guidelines for Epidemiological Studies. Although the need for the participants’ written informed consent was waived, the opt-out method was adapted according to the ethics guidelines. Eligible patients were provided with information about the study through the internet, brochures, and bulletin boards at the participating institutions and were given the opportunity to withdraw from the study if they wished to. Participants The study methods used in this study were as previously described ( Kobayashi ; et al. , 2018 Shindo ). Briefly, all adult patients who were hospitalized (aged ≥20 years) with newly developed CAP (including HCAP) were enrolled and followed up after 1 month. Patients with a low risk for DRPs according to the 2019 ATS/IDSA CAP guidelines were considered eligible for this study ( et al. , 2013 Metlay ). The following patients were excluded: those with previous isolation for DRPs and those with severe CAP with a high risk for DRPs using locally validated prediction rules in Japan ( et al. , 2019 Kobayashi ; et al. , 2018 Shindo ). et al. , 2013 Definitions of severity, the prediction rules for DRPs, and classification of antibiotics The 2007 IDSA/ATS criteria were followed to assess disease severity ( Mandell ). The severe CAP was considered if a patient satisfied one of the two major criteria (requiring mechanical ventilation or experiencing septic shock with the need of vasopressors) or three or more of the minor criteria (respiratory rate >30 breaths/min, arterial oxygen partial pressure to fractional inspired oxygen ≤250, multilobar infiltrates, confusion, blood urea nitrogen level >20 mg/dl, leukopenia resulting from infection, thrombocytopenia, hypothermia, or hypotension requiring aggressive fluid resuscitation) ( et al. 
, 2007 Mandell ). et al. , 2007 Prediction rules for DRPs that were derived and validated in our previous studies were followed ( Kobayashi ; et al. , 2018 Shindo ). In these studies, CAP-DRPs were defined as identified pathogens that are not susceptible to the antibiotics commonly administered for CAP, including nonantipseudomonal β-lactam (ceftriaxone or ampicillin-sulbactam), macrolides (azithromycin or clarithromycin), and fluoroquinolones (moxifloxacin, levofloxacin, or garenoxacin). The risk factors of CAP-DRPs included the use of antibiotics within the previous 90 days, hospitalization for ≥2 days during the preceding 90 days, immunosuppression, use of gastric acid–suppressive agents, tube feeding, and nonambulatory status. In addition, MRSA-specific risk factors included chronic dialysis during the preceding 30 days, congestive heart failure, and positive MRSA history within the previous 90 days. Patients were defined to be at low risk for DRPs when they presented with no or one risk factor of CAP-DRPs or when they presented with two risk factors of CAP-DRPs and no MRSA-specific risk factors ( et al. , 2013 Kobayashi ; et al. , 2018 Webb ). et al. , 2016 Antibiotics were classified into two categories: standard and extended-spectrum therapy. Standard therapy for patients with nonsevere CAP involved a nonantipseudomonal β-lactam plus a macrolide (or minocycline) or a respiratory fluoroquinolone, whereas that for those with severe CAP involved a nonantipseudomonal β-lactam plus a macrolide or a nonantipseudomonal β-lactam plus a respiratory fluoroquinolone. Extended-spectrum therapy was defined as any antibiotics with antipseudomonal activity, such as piperacillin-tazobactam, ceftazidime, cefepime, cefozopran, cefoperazone-sulbactam, meropenem, imipenem-cilastatin, doripenem, or aztreonam, or with anti-MRSA activity, including vancomycin, teicoplanin, or linezolid ( Shindo ; et al. , 2013 Shindo ). 
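The low-risk classification described above is a simple counting rule over the CAP-DRP and MRSA-specific risk factors. A minimal sketch of that rule, with illustrative variable names of our own choosing (not taken from the study's code):

```python
# Sketch of the CAP-DRP low-risk counting rule described in the text.
# Factor names are illustrative, not from the study.

CAP_DRP_FACTORS = [
    "antibiotics_within_90d",
    "hospitalized_2d_within_90d",
    "immunosuppression",
    "gastric_acid_suppression",
    "tube_feeding",
    "nonambulatory",
]
MRSA_FACTORS = [
    "chronic_dialysis_30d",
    "congestive_heart_failure",
    "mrsa_history_90d",
]

def is_low_risk(patient: dict) -> bool:
    """Low risk for DRPs: 0-1 CAP-DRP risk factors, or exactly 2
    CAP-DRP risk factors with no MRSA-specific risk factors."""
    n_cap = sum(bool(patient.get(f, False)) for f in CAP_DRP_FACTORS)
    n_mrsa = sum(bool(patient.get(f, False)) for f in MRSA_FACTORS)
    return n_cap <= 1 or (n_cap == 2 and n_mrsa == 0)

print(is_low_risk({"tube_feeding": True}))                  # True
print(is_low_risk({"tube_feeding": True, "nonambulatory": True,
                   "mrsa_history_90d": True}))              # False
```

A patient with two CAP-DRP factors but a positive MRSA history is classified as not low risk, matching the rule's MRSA-specific carve-out.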
Fluoroquinolones were excluded from extended-spectrum antibiotics because they were recommended as a therapeutic regimen for patients with CAP at low risk for DRPs. et al. , 2015 End points The primary study end point was the 30-day all-cause mortality, defined as death within 30 days of admission. Patients who were discharged or transferred to other hospitals within 30 days of admission with improvement in pneumonia were considered alive for this analysis ( Fine ; et al. , 1997 Kobayashi ; et al. , 2018 Shindo ; et al. , 2013 Shindo ). The effect of the extended-spectrum therapy on 30-day mortality was also evaluated according to the severity of illness as a subgroup analysis. et al. , 2015 Statistical analyses Demographic and clinical characteristics were described. All categorical data were summarized as frequencies and presented as percentages. All tests were two-tailed and considered statistically significant if P -values were <0.05. To assess the effect of the extended-spectrum therapy on 30-day mortality, a multivariable logistic regression analysis was performed. Five factors (nonambulatory status, respiratory rate ≥30/min, albumin <3.0 g/dl, pH <7.35, and blood urea nitrogen ≥20 mg/dl) were selected as covariables associated with 30-day mortality in patients with CAP undergoing appropriate initial antibiotic treatment ( Shindo ). The odds ratio (OR) and corresponding 95% confidence interval (CI) were calculated by setting standard therapy as the reference. et al. , 2015 To confirm the robustness of the results, three sensitivity analyses using different analytical approaches to covariable adjustment were performed. 
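One of the sensitivity analyses described in this study is a propensity-score analysis with inverse probability of treatment weighting (IPTW). A toy sketch of the weighting mechanics on synthetic data (this is illustrative only, not the study's analysis code; the true propensity is used in place of an estimated one for brevity):

```python
# Toy inverse-probability-of-treatment-weighting (IPTW) sketch on synthetic
# data; illustrative only, not the study's analysis code.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)               # one confounder (e.g. disease severity)
p_treat = 1 / (1 + np.exp(-x))       # treatment assignment depends on it
t = rng.binomial(1, p_treat)         # 1 = "extended-spectrum therapy" (toy)

# Propensity score: the true model is used here for brevity; in practice it
# is estimated, e.g. by logistic regression on the covariables.
ps = p_treat
w = np.where(t == 1, 1 / ps, 1 / (1 - ps))   # IPTW weights

# After weighting, the confounder should be balanced across the two groups.
m1 = np.average(x[t == 1], weights=w[t == 1])
m0 = np.average(x[t == 0], weights=w[t == 0])
print("weighted difference:", round(abs(m1 - m0), 3),
      "unweighted difference:", round(abs(x[t == 1].mean() - x[t == 0].mean()), 3))
```

The weighted between-group difference in the confounder shrinks toward zero relative to the unweighted one, which is the balance property the study's IPTW analysis relies on before comparing 30-day mortality.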
Initially, a multivariable logistic regression analysis including other covariables as potential confounders (age ≥80 years, nonambulatory status, body temperature <36.0°C, respiratory rate ≥30/min, white blood cell count ≤4,000 cells/μl, hematocrit <30.0%, albumin <3.0 g/dl, and arterial carbon dioxide partial pressure ≥50 Torr) was conducted, which were associated with the 30-day mortality in patients with CAP with a low risk for DRPs ( Okumura ). A propensity score was then developed for the extended-spectrum therapy using a logistic regression analysis with the following covariables associated with mortality, disease severity, or isolation of DRPs: age, sex, comorbidities, nonambulatory status, residence at a nursing home or long-term care facility, infusion therapy, wound care, tube feeding, gastric acid suppression, physical findings, laboratory findings, arterial blood gas data, radiological findings, and requiring vasopressors or requiring mechanical ventilation (including noninvasive positive-pressure ventilation) at the time of CAP diagnosis ( et al. , 2018 España ; et al. , 2006 Fine ; et al. , 1997 Mandell ; et al. , 2007 Okumura ; et al. , 2018 Shindo and Hasegawa, 2017 ; Shindo ). The inverse probability of treatment weighting (IPTW) analysis was then performed to assess the effect of extended-spectrum therapy on the 30-day mortality ( et al. , 2015 Cole and Hernán, 2008 ). The details of the IPTW analysis have been described in the Supplementary Material. Finally, a stratified analysis was conducted using the composite score on disease severity, and the pneumonia severity index (PSI) classes (class I-III, IV, and V) ( Fine ). Patients with missing values were excluded. All statistical analyses were performed using the SPSS statistics software (version 28; IBM, Armonk, NY, USA) and R (ver.4.1.1; R Foundation for Statistical Computing, Vienna, Austria). et al. 
, 1997 Results Participants and baseline characteristics A total of 750 patients with CAP were evaluated, and 721 patients were eligible to be included in the current study. Of the 627 patients with a low risk for DRPs, 257 underwent standard therapy, and 159 underwent extended-spectrum therapy ( Figure 1 ). The baseline characteristics of the patients undergoing both therapies are presented in Table 1 . Frequencies of patients with chronic lung diseases, central nervous system disorders, immunosuppression, nonambulatory status, abnormal vital signs, platelet count <100,000/cells/μl, albumin <3.0 g/dl, blood urea nitrogen ≥20 mg/dl, pH <7.35, the ratio of arterial oxygen partial pressure to fractional inspired oxygen ≤250, arterial carbon dioxide partial pressure ≥50 Torr, bilateral lung involvement, and severe pneumonia were higher in the extended-spectrum therapy group than in the standard therapy group. Identified pathogens The proportion of the identified pathogens was 51.4% (132/257) and 52.2% (83/159) in the standard therapy and extended-spectrum therapy groups, respectively. The distribution of identified pathogens is presented in Supplementary Table 1. Non-CAP-DRPs, such as Streptococcus pneumoniae, MRSA, Haemophilus influenzae , and antibiotic-sensitive enteric gram-negative bacilli, were identified in 46.7% (120/257) of the patients in the standard therapy group and 42.8% (68/159) of the extended-spectrum therapy group. CAP-DRPs were identified in 12 patients (4.7%) undergoing standard therapy and in 15 patients (9.4%) undergoing extended-spectrum therapy. Administered initial antibiotics The administered initial antibiotics are listed in Table 2 . In the standard therapy group, most patients received a nonantipseudomonal β-lactam plus a macrolide, regardless of disease severity, and all patients in this group received azithromycin. 
The most commonly used antibiotic therapy for the patients in the extended-spectrum therapy group was piperacillin-tazobactam monotherapy, followed by piperacillin-tazobactam plus macrolides and carbapenems plus azithromycin therapy in patients with nonsevere CAP, whereas the most frequently administered antibiotic therapy was carbapenems plus azithromycin therapy, followed by carbapenems and piperacillin-tazobactam monotherapy in patients with severe CAP. In the extended-spectrum therapy group, combination therapy was administered to more than 60% of patients with nonsevere as well as severe CAP. The administered initial antibiotics in patients who were not classified into the standard therapy or the extended-spectrum therapy groups are also described in Supplementary Table 2. Primary study end point Figure 2 presents the 30-day mortality proportions of the standard therapy and extended-spectrum therapy groups. The 30-day mortality of the standard therapy group was 3.9% (10/257), whereas that of the extended-spectrum therapy group was 13.8% (22/159) ( Figure 2 a). On categorizing disease severity into two groups according to the 2007 IDSA/ATS criteria ( Mandell et al., 2007 ), the 30-day mortality of the standard therapy and extended-spectrum therapy groups in patients with nonsevere CAP was 2.8% (6/215) and 9.7% (9/93), respectively ( Figure 2 b), whereas in patients with severe CAP, it was 9.5% (4/42) and 19.7% (13/66), respectively ( Figure 2 c). Table 3 presents the effect of extended-spectrum therapy on 30-day mortality. In the crude analysis, extended-spectrum therapy appeared to increase the 30-day mortality compared with standard therapy (OR 3.97; 95% CI 1.83-8.62; P <0.001). In the primary multivariable logistic regression analysis, the extended-spectrum therapy was significantly associated with higher 30-day mortality than standard therapy (adjusted OR [aOR] 2.82; 95% CI 1.20-6.65). Sensitivity analysis results are presented in Table 3 .
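The reported crude odds ratio can be cross-checked directly from the two mortality counts (22/159 deaths under extended-spectrum therapy, 10/257 under standard therapy):

```python
# Cross-check of the crude odds ratio from the reported group counts.
deaths_ext, n_ext = 22, 159   # extended-spectrum therapy group
deaths_std, n_std = 10, 257   # standard therapy group

odds_ext = deaths_ext / (n_ext - deaths_ext)   # 22 / 137
odds_std = deaths_std / (n_std - deaths_std)   # 10 / 247
crude_or = odds_ext / odds_std
print(round(crude_or, 2))  # 3.97, matching the reported crude OR
```

The adjusted OR (2.82) differs from this crude figure because the multivariable model conditions on the five prognostic covariables listed in the methods.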
These trends are in accordance with the findings of the primary analysis. Multivariable logistic regression analysis including other covariables as potential confounders revealed that extended-spectrum therapy significantly increased the 30-day mortality compared with standard therapy (aOR 2.88; 95% CI 1.22-6.83). In the IPTW analysis, according to the propensity score, the extended-spectrum therapy also increased the 30-day mortality (aOR 2.82; 95% CI 1.11-7.16). Results associated with the validity of the IPTW propensity score analysis are provided in the Supplementary Material. The results of the stratified analysis of PSI classes also demonstrated the same trend (aOR 3.25; 95% CI 1.41-7.50) ( Table 3 ). The 30-day mortality of the standard therapy and extended-spectrum therapy groups in each severity class (PSI class I-III [mild], IV [moderate], and V [severe]) is presented in Supplementary Table 3. Subgroup analysis Primary multivariable logistic regression analysis was also performed to assess the effect of extended-spectrum therapy on the 30-day mortality in each severity class as per the 2007 IDSA/ATS criteria (Supplementary Table 4). Although extended-spectrum therapy did not increase the 30-day mortality in patients with severe CAP (aOR 1.71; 95% CI 0.48-6.11), it was significantly associated with increased 30-day mortality in patients with nonsevere CAP compared with standard therapy (aOR 4.47; 95% CI 1.30-15.36). Discussion To the best of our knowledge, this is the first post hoc analysis based on a multicenter prospective observational study to assess the effect of extended-spectrum antibiotic therapy on the 30-day mortality in patients with CAP with a low risk for DRPs undergoing treatment as per the 2019 ATS/IDSA CAP guidelines. The results of the primary analysis as well as those of three sensitivity analyses demonstrated that extended-spectrum therapy is constantly associated with increased 30-day mortality compared with standard therapy. 
In addition, subgroup analyses indicated that the increase in the 30-day mortality because of extended-spectrum therapy was distinct in patients with nonsevere CAP rather than in those with severe CAP. These results suggest that the administration of extended-spectrum antibiotics is harmful in patients with CAP with a low risk for DRPs. The increased prevalence of DRPs is an escalating health problem worldwide ( World Health Organization, 2021 ). The selection of appropriate target patients for the use of extended-spectrum antibiotics has been previously investigated in some studies ( Gleason ; et al. , 1999 Menéndez ). Regarding patients with pneumonia, the HCAP concept was considered to resolve this issue and effectively identify patients at risk of DRPs ( et al. , 2005 American Thoracic Society, 2005 ; Kollef ). However, several researchers have raised doubts on the HCAP concept, suggesting that it could increase the unnecessary use of extended-spectrum antibiotics ( et al. , 2008 Aliberti ; et al. , 2021 Ewig ; et al. , 2019 Webb ). Indeed, several studies have revealed an association between the use of extended-spectrum antibiotics and increased mortality in patients with CAP, including HCAP ( et al. , 2016 Attridge ; et al. , 2011 Jones ; et al. , 2020 Webb ). In the last decade, several research groups in different regions have reassessed the essential risk factors of DRPs to be considered when determining the initial antibiotics to be administered after pneumonia diagnosis ( et al. , 2019 Aliberti ; et al. , 2012 Prina ; et al. , 2015 Shindo ; et al. , 2013 Shorr ; et al. , 2012 Webb ). The current international trend in determining the initial antibiotics to be administered for patients with CAP is to assess the history of DRP isolation and risk factors of DRPs ( et al. , 2016 Metlay ; et al. , 2019 Pletz ). 
The use of extended-spectrum antibiotics, including those exhibiting antipseudomonal activity and anti-MRSA activity, is acceptable for patients with a history of DRP isolation or those at a high risk for DRPs. Therefore, the benefits and drawbacks of extended-spectrum antibiotics use should be carefully considered for patients without a history of DRP isolation and at a low risk for DRPs. The current study evaluated the effect of the extended-spectrum therapy in such patients. et al. , 2020 The results of this study revealed that the extended-spectrum therapy was significantly associated with increased 30-day mortality in patients with a low risk for DRPs. This finding is consistent with those of previous studies.( Attridge ; et al. , 2011 Jones ; et al. , 2020 Webb ) The strategy for identifying patients with a low risk for DRPs in the current study complies with the 2019 ATS/IDSA CAP guidelines ( et al. , 2019 Metlay ). The results suggest that the algorithm used for the selection of nonextended-spectrum antibiotics in CAP is appropriate and can improve patient outcomes. Furthermore, the current study revealed that adverse effects of the extended-spectrum therapy were more prominent in patients with nonsevere CAP than in those with severe CAP, indicating that physicians should refrain from administering extended-spectrum antibiotics to patients who are at a low risk for DRPs with nonsevere CAP. Moreover, no statistically different adverse effects were observed in patients with severe CAP undergoing extended-spectrum therapy compared with those with nonsevere CAP in the current study, implying that multidimensional management strategies, including appropriate respiratory care and adjunctive therapy, and the appropriate use of antibiotics are crucial for improving outcomes of patients with severe CAP ( et al. , 2019 Aliberti ; et al. , 2021 Torres ; et al. , 2021 Wunderink and Waterer, 2017 ). 
Further prospective studies are warranted to validate the guidelines' recommendations. The possible explanation for the association of extended-spectrum antibiotics with increased mortality includes multiple mechanisms triggered by the antibiotics. The changes in the composition of microbiota after the administration of extended-spectrum antibiotics may be one of the key mechanisms ( Thibeault et al., 2021 ). A review of the microbiota associated with pneumonia revealed that extended-spectrum antibiotics compromise microbiota-dependent colonization resistance mechanisms. As a result, the use of extended-spectrum antibiotics may contribute to the increased risk for hospital-acquired and ventilator-associated pneumonia associated with increased mortality ( Thibeault et al., 2021 ). There may be other possible explanations, such as that several extended-spectrum antibiotics can cause acute kidney injury; in particular, the combination of vancomycin with piperacillin-tazobactam, which is often prescribed for pneumonia, is associated with increased acute kidney injury ( Bellos et al., 2020; Lee et al., 2021; Luther et al., 2018 ). Moreover, Clostridioides difficile is the causative pathogen of antibiotic-associated colitis ( McDonald et al., 2018 ). A systematic review and meta-analysis revealed that carbapenems and cephalosporins induced C. difficile infection to a larger extent than penicillins or fluoroquinolones ( Vardakas et al., 2016 ). In addition, previous studies have demonstrated that extended-spectrum antibiotics predisposed patients to nosocomial lung infections ( Metersky et al., 2016; Shindo et al., 2013; Thibeault et al., 2021; Venier et al., 2011 ). Furthermore, acute kidney injury and nosocomial infections, including C. difficile infection, are associated with increased mortality in pneumonia ( Becerra et al., 2015; Chawla et al., 2017; Shindo et al., 2013 ). Thus, multiple events induced by extended-spectrum antibiotics use may result in adverse outcomes in patients. Although acute kidney injury and C. difficile infection were not assessed in this study, we plan to evaluate them in an ongoing multicenter observational study. This study has several limitations. There may be a potential bias because of the post hoc analysis based on a prospective observational study. Moreover, there might be unidentified confounding factors for the end point, and the number of events was relatively small. Accordingly, a primary multivariable logistic regression analysis and three sensitivity analyses were conducted. Moreover, the data used in this study were obtained before the COVID-19 pandemic. The results of this study may be applied to patients with CAP but not to those with COVID-19 pneumonia; we aim to investigate the effect of extended-spectrum therapy on mortality in patients with CAP, including COVID-19 pneumonia, in another multicenter observational study. Despite these limitations, the results of this study provided valuable information regarding the selection of appropriate initial antibiotics for patients with CAP with a low risk for DRPs. Conclusion The current study was focused on patients with CAP with a low risk for DRPs, and the results revealed that the use of extended-spectrum antibiotics is associated with increased mortality. Physicians should therefore acknowledge the significance of DRPs risk assessment when determining the empirical antibiotic therapy and should refrain from administering extended-spectrum antibiotics to patients with a low risk for DRPs. Author contributions All authors meet the International Committee of Medical Journal Editors authorship criteria. HK and YS designed this study. DK, YS, HK, TS, YM, TY, and HS participated in data acquisition. HK, YS, and SM created the statistical analysis plan, which was reviewed by all authors.
HK, YS, TS, YM, MY, AM, KS, KM, RE, and SM contributed to data interpretation. HS and YH contributed to study supervision. HK and YS wrote the initial draft of the manuscript. TS, YM, MY, AM, TY, and SM contributed to the critical revision of the manuscript for important intellectual content. All authors approved the final draft. Funding This work was partially supported by the Japan Society for the Promotion of Science KAKENHI (grant number 20K08517 ). This study was also supported by the Central Japan Lung Study Group, a nonprofit organization supported by unrestricted donations from the following pharmaceutical companies: Chugai Pharmaceutical Co., Ltd.; Shionogi & Co., Ltd.; Daiichi Sankyo Co., Ltd.; Dainippon Sumitomo Pharma Co., Ltd.; Janssen Pharmaceutical K.K.; Eli Lilly Japan K.K.; Taisho Toyama Pharmaceutical Co., Ltd.; Meiji Seika Pharma Co., Ltd.; MSD K.K.; Bayer Holding Ltd.; Astellas Pharma Inc.; and Nippon Boehringer Ingelheim Co., Ltd. The founders of the Central Japan Lung Study Group had no role in the design and conduct of the study; gathering, management, analysis, and interpretation of data; and preparation of the manuscript. Declaration of Competing Interest All of the following information provides relevant financial activities outside of the submitted work. YS reports personal fees (payment for lectures, including service on speaker bureaus) from KYORIN Pharmaceutical Co., Ltd.; AstraZeneca K.K.; Daiichi Sankyo Company, Limited; Nippon Boehringer Ingelheim Co., Ltd.; GlaxoSmithKline plc; and Gilead Sciences Inc. and participates as a member of the case adjudication committee of GlaxoSmithKline Biologicals SA. TY reports grants and personal fees (payment for lectures, including service on speakers bureaus) from Shionogi & Co., Ltd.; Dainippon Sumitomo Pharma Co., Ltd.; and MSD K.K. SM reports personal fees (payment for consultations in other studies) from Takeda Pharmaceutical Co., Ltd. 
YH reports grants and personal fees (payment for lectures, including service on speakers bureaus) from Chugai Pharmaceutical Co., Ltd.; MSD K.K.; GlaxoSmithKline plc; KYORIN Pharmaceutical Co., Ltd.; Pfizer Japan Inc.; Meiji Seika Pharma Co, Ltd.; Sanofi K.K.; and Daiichi Sankyo Inc. All other authors have no competing interests to declare. Acknowledgments The authors would like to thank Drs. Ryota Ito, Mai Iwaki, Yuka Tomita, Mitsutaka Iguchi, Tomohiko Ogasawara, Yasuteru Sugino, and Hiroyuki Taniguchi for the acquisition of data; Drs. Yosuke Goto, Kunihiko Takahashi, and Nancy Thabet for their comments on the manuscript. The authors are grateful to the clinical research coordinators (Kyoko Kazeto and Sumiyo Tanaka), laboratory staff (Ikuo Yamaguchi, Mariko Mochizuki, Miho Saito, Yoshiko Sugaki, Yusuke Nishida, and Sachie Asai), and all healthcare professionals who participated in the data collection. We would like to thank Enago ( www.enago.jp ) for the English language review. Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.ijid.2022.09.015 . Appendix Supplementary materials Image, application 1 | [
"ALIBERTI",
"ALIBERTI",
"AMERICANTHORACICSOCIETY",
"ATTRIDGE",
"BECERRA",
"BELLOS",
"CHAWLA",
"COLE",
"ESPANA",
"EWIG",
"FINE",
"GLEASON",
"JONES",
"KLOMPAS",
"KOBAYASHI",
"KOLLEF",
"KUMAR",
"LEE",
"LUTHER",
"MANDELL",
"MCDONALD",
"MENENDEZ",
"METERSKY",
"METLAY",
"OK... |
0c8021349af94a1391f4e6f5490c3425_PD-L1 expression was associated with aggressive clinicopathological features in patients with VHL-re_10.1016_S2666-1683(20)32935-9.xml | 328 PD-L1 expression was associated with aggressive clinicopathological features in patients with VHL-related renal cell carcinoma | [
"Ma, K.F.",
"Li, L.",
"Kenan, Z.",
"Gong, K.",
"Cai, L."
] | null | null | [] |
f5c092a2c23d4238a57e1fc8ba0b7cb9_Machine learning for additive manufacturing Predicting materials characteristics and their uncertain_10.1016_j.matdes.2023.111699.xml | Machine learning for additive manufacturing: Predicting materials characteristics and their uncertainty | [
"Chernyavsky, Dmitry",
"Kononenko, Denys Y.",
"Han, Jun Hee",
"Kim, Hwi Jun",
"van den Brink, Jeroen",
"Kosiba, Konrad"
] | Additive manufacturing (AM) is known for versatile fabrication of complex parts, while also allowing the synthesis of materials with desired microstructures and resulting properties. These benefits come at a cost: process control to manufacture parts within given specifications is very challenging due to the relevance of a large number of processing parameters. Efficient predictive machine learning (ML) models trained on small datasets, can minimize this cost. They also allow to assess the quality of the dataset inclusive of uncertainty. This is important in order for additively manufactured parts to meet property specifications not only on average, but also within a given variance or uncertainty. Here, we demonstrate this strategy by developing a heteroscedastic Gaussian process (HGP) model, from a dataset based on laser powder bed fusion of a glass-forming alloy at varying processing parameters. Using amorphicity as the microstructural descriptor, we train the model on our Zr52.5Cu17.9Ni14.6Al10Ti5 (at.%) alloy dataset. The HGP model not only accurately predicts the mean value of amorphicity, but also provides the respective uncertainty. The quantification of the aleatoric and epistemic uncertainty contributions allows to assess intrinsic inaccuracies of the dataset, as well as identify underlying physical phenomena. This HGP model approach enables to systematically improve ML-driven AM processes. | 1 Introduction Additive manufacturing (AM) - also known as three-dimensional (3D) printing - enables the fabrication of near-net shape components with high geometrical freedom, due to the layer-by-layer build-up based on a digital model [1–3] . Laser powder bed fusion (LPBF) is a widely used powder-bed based AM technology for the fabrication of metallic components [2] . A computer-controlled laser beam locally melts predestined volumes of a deposited thin powder layer ( Fig. 1 a). 
The molten volume fuses with the underlying material, so that one of the component's layers is completed ( Fig. 1 b). Afterwards, the next powder layer is deposited and melted ( Fig. 1 c). This cycle is repeated until the build-up is completed [4] (see Table 1 ). Metal AM has found its way into industry and continues to be a transformative manufacturing process across multiple industrial sectors, such as aerospace, healthcare, energy and automotive [2,5] . Key challenges still remain, with control of the metal AM process rating high amongst them. Next to expensive printing equipment and feedstock material, a plethora of processing parameters must be correctly selected for the successful fabrication of components with the desired microstructure, which ultimately determines their physical, functional and/or mechanical properties [3,4] . Current practice to identify this so-called optimum processing parameter set is still based on trial and error [3] , although the expertise of the AM-device operator as well as design of experiments [6] and statistical analyses [7,8] help to reduce the number of attempts required. All of these approaches are time-consuming and expensive, which is especially true for metal AM [2,9,10] . Since AM operates at the crossroads of materials, machines, computing and data [7] , this innovative manufacturing technology lends itself to being streamlined by automation [9,11,12] . The application of data-driven ML methods therefore represents an alternative approach for identifying the optimum parameter set. ML has already found its way into metal AM, and especially into LPBF [9–22] . One of the main focuses of ML applications to AM processing is the optimization of processing parameters based on property predictions of printed parts. The fundamental limitation of such an ML-based approach is the variability of the AM processes, which has so far not been adequately treated and captured by the ML models constructed [23,24] .
The physical processes which govern the LPBF fabrication of parts, such as the interaction of the laser beam with the metallic powder leading to its melting and the subsequent solidification of the material, are stochastic. Thus, at a given processing parameter set, the properties of the resulting specimen follow a probability distribution. To be more precise, variations can encompass powder characteristics (size and morphology of particles) and fluctuations of the laser power or scanning speed as well as of further processing parameters. Besides the fabrication, the characterization of specimens also contributes to the resulting dispersion in the measured target properties [23,25]. Finally, ML models developed for predicting properties of additively manufactured materials also have a level of uncertainty depending on, for instance, the ML algorithms used to construct the models. All these factors contribute to the resulting uncertainty, which has to be properly treated. The goal of the present work is twofold. Firstly, we intend to introduce ML methods for uncertainty quantification not previously used in the context of AM. Access to position-resolved uncertainty in the feature space is essential for designing the smallest possible, yet most efficient, datasets for developing ML models with high predictive accuracy. As an AM operator, one strives to produce and, especially, characterize as few specimens as possible when generating the dataset used to develop the model. Therefore, it is of high interest to identify the regions of the feature space with the highest uncertainty, so that precisely the corresponding experimental datapoints are provided to the ML model for training. Furthermore, uncertainty quantification can be used to analyze the dataset to reveal underlying physical phenomena or measurement inaccuracies of LPBF-fabricated specimens.
With the present work, we aim to demonstrate both the identification of locations in the feature space with high uncertainty and the analysis of the experimental data used to develop the present ML model. To this end, we generated a dataset based on laser powder bed fusion of a glass-forming Zr52.5Cu17.9Ni14.6Al10Ti5 (at.%) alloy at varying processing parameters. This specific alloy was selected because (partially) amorphous specimens can be fabricated by LPBF, allowing the use of the so-called amorphicity, a microstructural descriptor, as the output label for the present ML model. The amorphicity is defined as the volume fraction of the amorphous phase of each (partially) glassy LPBF-fabricated specimen. In comparison to other microstructural descriptors, such as grain size and shape for crystalline materials, the amorphicity can be accessed relatively easily via differential scanning calorimetry. By contrast, the determination of grain size and shape requires extensive and very time-consuming investigations involving metallographic preparation and microscopic analysis. Considering the large number of specimens that needed to be analyzed to generate a sufficiently large dataset, selecting the amorphicity as the output label is the logical choice. A salient feature of the experimental dataset is that its noise (uncertainty) level demonstrates heteroscedastic behavior. More precisely, the noise level inherent to the experimental data varies across the LPBF process parameter space used for the specimens' fabrication. In order to construct a model with high predictive accuracy, one must adequately consider the heteroscedastic nature of the uncertainty inherent to the data. We thus use the HGP algorithm introduced in [26] to model the interrelations between amorphicity and the LPBF processing parameters and to determine the corresponding uncertainty level.
Although Zr-based alloys have been extensively studied in recent years in the context of additively manufactured bulk metallic glasses [27–31], a robust model for accurate prediction of amorphicity as a function of the LPBF processing parameters has not yet been constructed. The development of such a model and its detailed analysis is hence our second main goal. The deliberate modulation of microstructural descriptors permits designing the microstructure and hence tailoring the mechanical properties of LPBF-fabricated parts [32]. The present efforts are a first step toward exploiting the process-structure–property-performance linkage for the ML-driven fabrication of materials by additive manufacturing [11].

2 Experimental dataset

We aim to model the amorphicity and the corresponding uncertainty as a function of the LPBF processing parameters. From this perspective, it is important to have self-consistent data, which implies employing the same machine configuration for producing all of the data, while the target characteristics should be measured using the same methods. Thus, we do not gather data for the Zr-based alloys from different sources but use a self-consistent experimental dataset from [33]. Laser power, scanning velocity and hatching distance were used as input parameters, because these processing parameters have been reported to most strongly affect the amorphicity of specimens fabricated by LPBF from Zr-based glass-forming powder [28,34–36]. Of course, further processing parameters, such as the thickness of the deposited powder layer or the scanning strategy, might also have an influence on the amorphicity. In order to demonstrate that an ML model can be successfully developed, these three most influential parameters were selected to predict the average amorphicity and the respective uncertainty.
Cuboid-like specimens (5 × 5 × 10 mm³) were fabricated by processing powder with nominal composition of Zr52.5Cu17.9Ni14.6Al10Ti5 (at.%) via LPBF using an SLM50 device (Realizer GmbH, laser spot size of 50 μm). A scanning strategy of unidirectional vectors rotated by 90° in neighboring layers was utilized and the powder layer thickness (40 μm) was held constant. For generating the dataset, the levels of the three processing parameters were varied in such a manner that specimens with cuboid-like shape were fabricated. Thereby, the fraction of defects such as minor cracks and pores varied and depended on the exact processing parameters. The goal was to provide specimens with varying amorphicity, since the ML model also requires data from samples with low amorphicity to properly learn the interrelation between amorphicity and the three processing parameters. The selection of the exact levels is based on a previous study [28] and is as follows for the three processing parameters: laser power (90, 100, 110, and 120 W), scanning velocity (0.7, 0.8, 0.9, 1.0, and 1.1 m/s), and hatching distance (overlap between adjacent melt tracks: 0.18, 0.2, and 0.22 mm). Three specimens were fabricated for each parameter set, at the top, center and bottom locations of the build plate, to ensure reproducibility of the results, leading to a total number of 180 specimens fabricated in six build jobs ( Fig. 2 ). The measured amorphicity of the samples did not depend on their print location; the replicate prints were intended only to permit the ML model to determine the uncertainty. Differential scanning calorimetry was conducted for all 180 samples (sample weight of about 25 mg) at a heating rate of 40 K/min using Al crucibles in a Perkin-Elmer Diamond. Each sample was heated twice to 873 K to obtain a baseline required for determining the crystallization enthalpy.
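The full-factorial design described above is easily reproduced in code. The following is a minimal Python sketch (variable names are illustrative, not taken from the authors' code):

```python
import itertools

# The three varied LPBF processing parameters and their levels,
# as stated in the text.
laser_powers = [90, 100, 110, 120]            # W
scan_velocities = [0.7, 0.8, 0.9, 1.0, 1.1]   # m/s
hatch_distances = [0.18, 0.20, 0.22]          # mm

# Every combination of the three parameters: 4 x 5 x 3 = 60 unique sets.
parameter_sets = list(itertools.product(laser_powers,
                                        scan_velocities,
                                        hatch_distances))

# Three replicate specimens per parameter set give the 180 samples.
n_samples = 3 * len(parameter_sets)
print(len(parameter_sets), n_samples)  # → 60 180
```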
The amorphous volume fraction, which is defined as amorphicity, was determined from the measured crystallization enthalpy of each LPBF-fabricated specimen and subsequently normalized to the crystallization enthalpy of a fully amorphous specimen of the same nominal composition, which was prepared by copper-mold casting (Edmund Bühler GmbH). The dataset thus consists of 180 samples fabricated at 60 unique processing parameter sets.

3 Predictive model construction

Let us now turn to the ML model construction. In the ML community, one usually refers to the model's input data as features, while the output is called the target. In our case, the LPBF processing parameters are the features, and the amorphous volume fraction is the target. The task here is to predict the amorphicity based on a set of LPBF parameters. This is a classical regression problem to be tackled by applying a supervised ML algorithm. There are plenty of supervised ML algorithms in the literature, and the most prominent ones among them, such as artificial neural networks, usually require a large training dataset. When dealing with medium-sized structured data, decision tree-based algorithms (e.g., XGBoost [37] or Random Forests [38]) generally demonstrate the best predictive performance. The nature of tree-based algorithms, however, leads to a non-smooth behavior of the produced response surfaces. The regression problem at hand suggests that the response surface for the amorphous volume fraction should be described by a smooth function. Given that the output variation is fully determined by the three features at hand, while all other physical parameters that may influence the amorphicity of the specimens are constant, it is reasonable to expect that all points within a close vicinity in the feature space are correlated. The most suitable algorithm for such a task is the Gaussian process (GP) regression method [39].
Moreover, one main advantage of GP-based algorithms is that they also provide a natural framework for uncertainty quantification. Based on given processing parameter values, proper application of GP-based algorithms allows modeling the probability distribution of a certain property (e.g. amorphicity) of a specimen. However, the classical GP regression algorithm is only suited to model a homoscedastic noise distribution, i.e. noise which is constant across the feature space. Since three specimens were produced for each set of LPBF parameters and their amorphicity was measured, we can estimate the noise level inherent to these data. The scatter plot in Fig. 3 demonstrates that the lower the amorphicity of a specimen, the higher the dispersion in its measured values. One may observe that the same trait is inherent to the feature space. The measured dispersion hence demonstrates heteroscedastic behavior, since it depends on the input data, with varying levels of uncertainty in different regions of the feature space. As a consequence, classical GP regression algorithms assuming a homoscedastic noise distribution are not suited for modeling the interrelations between amorphicity and the processing parameters of the present dataset. Among GP regression methods, there is a family of algorithms known as heteroscedastic GP (HGP) [26,40,41], which are better suited to the present dataset. These ML algorithms are designed to properly account for the heteroscedasticity of the noise level; we therefore exploited the HGP algorithm proposed in [26] to solve the present regression problem. It should be noted that the classical (homoscedastic) GP algorithm has been previously applied by several authors to metal AM [10,20,42–49], including optimization of the LPBF processing parameters [10,20,49]. The HGP algorithm is, however, introduced to metal AM for the first time by the present study.
3.1 HGP regression algorithm

There are three features for the problem at hand: laser scanning velocity, hatching distance, and laser power. The input vectors x_i from the training dataset are aggregated in the matrix X = (x_1, …, x_n)^T, while the corresponding target values are collected in the vector y with components y_i. In the present case, each vector x_i consists of three components, corresponding to the three LPBF processing parameters. We assume a Gaussian noise contribution for the measured training data which depends on the input data. To be more precise, it is assumed that the measured amorphicity data y_i is approximated by y_i = f(x_i) + ε_i, where f is a Gaussian process prior and ε_i ∼ N(0, σ_i) is a Gaussian noise term. The dispersion parameters σ_i for ε_i are defined by a function σ_i² = r(x_i), which is to be identified during the fitting procedure. Denoting a point in the feature space at which a prediction shall be made by x*, the HGP regression algorithm gives the mean m* and standard deviation σ* of the corresponding target according to the following relations:

(1) m* = K(x*, X) [K(X, X) + R(X)]^{-1} y,  σ*² = σ_a² + σ_e²,

and

(2) σ_a² = R(x*),  σ_e² = K(x*, x*) − K(x*, X) [K(X, X) + R(X)]^{-1} K(X, x*),

where R(X) = diag(r) is the diagonal matrix of the noise variances at the training inputs, while R(x*) = r(x*). The central object of any GP algorithm is the kernel k(·,·), defining the expressions above together with

(3) K(X, X), the n × n matrix with entries [K(X, X)]_{ij} = k(x_i, x_j),

and

(4) K(x*, X) = (k(x*, x_1), …, k(x*, x_n)),  K(X, x*) = K^T(x*, X),  K(x*, x*) = k(x*, x*).

The ability to predict the standard deviation is the basis for the uncertainty quantification analysis demonstrated in this work. The meaning of the variances σ_a² and σ_e² will be explained in the following sections.
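Eqs. (1)–(2) translate directly into code. The following is a minimal numpy sketch, not the authors' released implementation [51]; the callables `kernel` and `noise_fn` are stand-ins for an already fitted kernel k(·,·) and noise function r(·):

```python
import numpy as np

def hgp_predict(x_star, X, y, kernel, noise_fn):
    """Mean and total variance of a heteroscedastic GP at x_star, Eqs. (1)-(2).

    kernel(A, B) returns the covariance matrix between row-vector sets A and B;
    noise_fn(A) returns the per-point noise variances r(x).  Both are assumed
    to be already fitted (hypothetical stand-ins for the trained model).
    """
    K = kernel(X, X)                        # K(X, X)
    R = np.diag(noise_fn(X))                # R(X) = diag(r(x_1), ..., r(x_n))
    k_star = kernel(x_star[None, :], X)     # K(x*, X), shape (1, n)
    A_inv = np.linalg.inv(K + R)

    mean = (k_star @ A_inv @ y).item()                       # Eq. (1)
    var_aleatoric = noise_fn(x_star[None, :]).item()         # sigma_a^2 = R(x*)
    var_epistemic = (kernel(x_star[None, :], x_star[None, :])
                     - k_star @ A_inv @ k_star.T).item()     # sigma_e^2, Eq. (2)
    return mean, var_aleatoric + var_epistemic
```

With a near-zero noise function, the prediction at a training input reproduces the training target, as expected for GP interpolation.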
The HGP and classical (homoscedastic) GP regression algorithms mainly differ in the term R(·), which in the case of classical GP is replaced by the identity matrix I multiplied by a constant parameter σ_n² [39]. The kernel function k(·,·) determines the generalization properties of the HGP model. For example, if one deals with data demonstrating periodic behavior, the best choice for the model construction would be a periodic kernel. However, the proper selection of the structural form of the kernel for most ML problems is somewhat of a black art [50]. In general, a common strategy is to use a kernel function from the Matérn class of kernels. In order to construct the present HGP model, we chose the Matérn-3/2 function, which is defined by

(5) k(x_i, x_j) = A² (1 + √3 d(x_i, x_j)) exp(−√3 d(x_i, x_j)).

One may think of d(x_i, x_j) as a metric in the feature space, and it (its squared value) can be written as

(6) d²(x_i, x_j) = (x_i − x_j)^T M (x_i − x_j),

where M = diag(l)^{-2} is a diagonal matrix. In our case, there are three non-vanishing components l = (l_P, l_v, l_h) of the matrix, which correspond to laser power, scanning velocity and hatching distance, respectively. These parameters are the characteristic length scales which, in a sense, define how far one needs to move along one particular axis in the feature space to observe a significant change in the target values. The parameter A² is usually called the signal variance and defines the kernel function amplitude. The constant parameters described above are usually called hyperparameters, and in what follows, we will refer to them by the symbol θ. Thus, in our case θ = (l, A). The hyperparameters must be identified during the fitting procedure of the present HGP regression model.
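A minimal numpy sketch of the Matérn-3/2 kernel with per-feature length scales, Eqs. (5)–(6) (illustrative only, not the authors' released code):

```python
import numpy as np

def matern32(Xa, Xb, length_scales, amplitude):
    """Matérn-3/2 kernel of Eqs. (5)-(6) between row-vector sets Xa and Xb.

    length_scales corresponds to l = (l_P, l_v, l_h) for laser power,
    scanning velocity and hatching distance; amplitude**2 is the signal
    variance A^2.  Scaling each feature difference by its length scale is
    equivalent to the metric with M = diag(l)^-2.
    """
    diff = (Xa[:, None, :] - Xb[None, :, :]) / length_scales
    d = np.sqrt((diff ** 2).sum(axis=-1))   # d(x_i, x_j), Eq. (6)
    return amplitude ** 2 * (1 + np.sqrt(3) * d) * np.exp(-np.sqrt(3) * d)
```

At zero distance the kernel equals the signal variance A², and it decays monotonically with increasing d, which yields the smooth response surfaces argued for above.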
3.1.1 Fitting procedure

The main idea behind the algorithm proposed in [26] is to estimate the uncertainty level from the training data and to build an additional model to accurately predict it. To do so, one can perform the following steps. Firstly, a classical (homoscedastic) GP model, G1, is fitted using the training data. By definition, this model predicts a probability distribution for the target. In the next step, for each training input x_i, one draws a number s of target values y_i^(p) from the distribution generated by the model G1 and estimates the noise level according to the following formula:

(7) r(x_i) = (1 / 2s) Σ_{p=1}^{s} ( ȳ_i − y_i^(p) )²,

where ȳ_i is the predicted mean value of the target at point x_i. In our model, we use s = 100 samples. This step is followed by training another GP model, G2, whose target is the noise level r(X). The mean values predicted by G2 for the training inputs define the term R(X), which is used to fit the model G1 in the following step. Thus, G1 is fitted with the noise term R(X) predicted by G2 in all subsequent iterations. The cycle is repeated until it converges; in our experiments, about ten iterations were needed.

All hyperparameters are optimized following the marginal likelihood maximization procedure [39]. The log marginal likelihood is defined by

(8) L(θ) = −(1/2) y^T [K(X, X) + R(X)]^{-1} y − (1/2) log det[K(X, X) + R(X)] − (n/2) log 2π,

which is a function of the hyperparameters θ that are optimized to maximize L. In the first iteration of the fitting algorithm, for the G1 model, the matrix R is replaced by a diagonal matrix σ_n² I with a constant noise term which also has to be fitted. In all following iterations, one fits the model by maximizing the likelihood function with the noise level R predicted by the model G2.
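The noise-estimation step of Eq. (7) can be sketched as follows; `draw_sample` is a hypothetical callable standing in for sampling from the posterior of the fitted model G1:

```python
import numpy as np

def estimate_noise_levels(y_mean, draw_sample, s=100):
    """Empirical noise level r(x_i) at each training input, Eq. (7).

    y_mean[i] is the mean predicted by the homoscedastic GP G1 at training
    input x_i, and draw_sample(i) draws one target sample y_i^(p) from the
    posterior of G1 at that input.  Both are placeholders for a fitted model.
    """
    r = np.empty(len(y_mean))
    for i in range(len(y_mean)):
        samples = np.array([draw_sample(i) for _ in range(s)])
        r[i] = ((y_mean[i] - samples) ** 2).sum() / (2.0 * s)
    return r
```

The resulting vector r(X) then serves as the training target for the second GP model G2 in each iteration of the loop described above.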
4 ML modeling of the amorphous volume fraction

Following the algorithm described above, an HGP model was developed (the source code for the HGP algorithm is available in a GitHub repository [51]). The predicted mean and uncertainty (standard deviation) values for amorphicity are depicted in Figs. 4 and 5, respectively. In the following, we discuss the model performance and describe the influence of the LPBF parameters on amorphicity.

4.1 Model's performance

The creation of a dataset by metal AM technologies is generally laborious and cost-intensive, due to the processing of the feedstock and especially the characterization of the resulting specimens [19]. As a consequence, mostly relatively small datasets of high quality are available for developing ML models such as the present HGP model. For the development of an ML model, the dataset must be split for training and testing, so that "overfitting", a statistical challenge for ML in general [52], is avoided. Testing reveals whether the model really "learned" the underlying interrelations between the output (e.g. amorphicity) and the features (e.g. processing parameters) and hence "generalizes", or whether it was overfitted during training. In order to fully exploit the dataset for developing an accurate model, so-called k-fold cross-validation (CV) [39] can be used. Within the k-fold CV approach, the dataset is randomly split into k groups of equal size known as folds. One of the folds is used for testing, whereas the remaining folds are used for training. For the present small dataset, however, the experimental matrix design has a subtlety: removing even a small number of points would leave a large region of the feature space uncovered, leaving the model without knowledge of the target behavior in this particular region. Yet, it is reasonable to expect the model to have good interpolation abilities. Recall that the training points form a cubic lattice.
There are 54 points on the boundary of this cube and six points lie in the bulk. Using the boundary points as the training dataset and the bulk ones as test data, one finds a root-mean-squared error (RMSE) of 1.12%. Compared to the range of the observed amorphicity values, this RMSE is reasonably low. k-fold CV can be carried to the extreme by splitting the dataset into as many folds as data vectors, so that each data vector is used once for testing and all remaining ones for training. This split method is known as leave-one-out cross-validation (LOO CV), which we exploit in order to gain insight into the extrapolation properties of our model. Recalling that three samples were fabricated for each point of the feature space, we define a performance metric

(9) RMSE_LOO = sqrt( (1/n) Σ_{i=1}^{n} ( ⟨y_i^exp⟩ − y_i^pred )² ),

where n is equal to the number of parameter sets used to generate the experimental dataset (n = 60). Here ⟨y_i^exp⟩ stands for the amorphicity averaged over the three samples of point i in the feature space, while y_i^pred is the corresponding mean value predicted by the HGP model according to Eq. (1). The present model yields an RMSE_LOO of 2.58%. In other words, it predicts the amorphicity at a given processing parameter set with an accuracy of 2.58%. We note that this error rates the overall predictive performance of our HGP model, not the position-resolved uncertainty of an output-inputs vector of the feature space. Certain points of the dataset used for developing this model are crucial. When performing the LOO CV, the largest contribution to the resulting error stems from the points with lowest amorphicity and their neighboring data points (these are the points in the bottom left corner in Fig. 4 ). The model is not able to adequately capture the drastic change in amorphicity in the vicinity of this region when the corresponding data points are removed from the training dataset.
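Once the leave-one-out predictions are available, the metric of Eq. (9) is a one-liner (numpy sketch with illustrative names):

```python
import numpy as np

def rmse_loo(y_exp_mean, y_pred):
    """RMSE over leave-one-out predictions, Eq. (9).

    y_exp_mean[i] is the amorphicity averaged over the three replicate
    specimens of parameter set i; y_pred[i] is the mean predicted by the
    model refitted without that parameter set.
    """
    y_exp_mean = np.asarray(y_exp_mean, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_exp_mean - y_pred) ** 2)))
```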
This fact is also reflected in the high experimentally measured dispersion, which will be discussed below. If only the one point with the highest experimentally measured dispersion is removed prior to performing LOO CV, the RMSE_LOO drops significantly, down to 1.69%.

4.2 Amorphicity dependence on the LPBF processing parameters

The aim of the HGP model is to accurately predict the amorphicity for the three given processing parameters. Following the fitting procedure described in the previous section, this model was developed using the complete experimental dataset. Fig. 4a displays the mean values of the predicted amorphicity as a function of the three features (laser power, scanning velocity, hatching distance), thus covering the three-dimensional feature space. Fig. 5a allows a more quantitative perception of the amorphicity by illustrating the corresponding mean values in two-dimensional contour maps of this space at varying scanning velocity. In general, the amorphicity of the specimens is higher when lower laser powers and larger values of hatching distance and scanning velocity were employed during LPBF fabrication, in line with the literature [35,53,28,33]. In particular, the contour maps shown in Fig. 5a disclose two characteristic regions. At low laser powers up to about 100 W, the LPBF-fabricated specimens showed high mean amorphicity values, down to a minimum of about 98%. Interestingly, amorphicity is hardly affected by the scanning velocity and hatching distance in this region. By contrast, both processing parameters have a strong impact on amorphicity when a laser power higher than 100 W was employed for LPBF fabrication. Then the amorphicity strongly varies with scanning velocity and also hatching distance. A drastic drop in amorphicity from 95 to 80% was observed with decreasing scanning velocity from 0.9 to 0.7 m/s ( Fig. 5a).
Since highly amorphous specimens are fabricated by LPBF in the first processing regime, higher cooling rates must be effective there. The volume fraction of the vitrified phase – its amorphicity – is determined by the exact cooling rate, which the utilized processing parameters dictate. The heat from the consecutively molten pools is extracted through the underlying solid material [4]. The speed of heat extraction determines the cooling rate and depends on the thermophysical properties of the underlying solid and especially on the processing parameters. The material's properties cannot be altered, but variation of the processing parameters allows adjusting the cooling rates. These must be sufficiently high to circumvent intervening crystallization and in turn favor vitrification. Low laser powers, fast scanning velocities and large hatching distances thus favor amorphicity, i.e. they are more efficient at preventing crystallization during LPBF processing.

5 Uncertainty modeling

Gaussian process-based models, such as the present HGP algorithm, provide a natural framework to quantify the uncertainty of the predicted mean values for an output label of choice (e.g. amorphicity). This means that the model provides a position-resolved uncertainty for any mean value in the feature space, unlike the RMSE value rating the overall performance accuracy of the GP model (see Section 4.1 ). Access to position-resolved uncertainty is invaluable for generating the smallest possible datasets that still allow developing ML models with high predictive accuracy. This advantage is decisive especially for optimizing metal AM processing, since the generation of a dataset, which is tantamount to the fabrication and characterization of specimens, is time- and cost-intensive. Therefore, one aims to provide an absolute minimum number of experimental points to the ML algorithm to construct the respective model.
In order to efficiently enhance its predictive accuracy, data points in the feature space with highest uncertainty must be identified, so that the corresponding specimens can be fabricated and characterized. The current HGP model exactly enables the identification of data points with high uncertainty. It additionally allows to understand the higher uncertainty in the feature space by providing the uncertainty contributions. This approach is demonstrated in the following by the uncertainty distribution of the amorphicity as a function of the processing parameters. Fig. 4 b displays a three-dimensional plot of the total uncertainty as a function of the three features: scanning velocity, hatching distance and laser power. Highest values of the standard deviation calculated according to Eq. (1) , which is a measure for the total uncertainty predicted by the HGP model, are observed in a contiguous region above a laser power of 100 W, below a scanning velocity of 0.9 m/s, and at a decreasing hatching distance ( Fig. 4 b, bottom left corner). Lowest mean values for amorphicity exactly characterize this particular region of the feature space, as Fig. 4 a proves. Possible reasons could stem from the intervening crystallization or the measurement accuracy of the amorphicity by DSC, and we are going to elaborate on these reasons later on. Higher values of the uncertainty can be, furthermore, found in regions located between the experimental data points. In other words, the total uncertainty shows lowest values in the vicinity of the training points, while distant points are characterized by higher uncertainty [39] . For the present HGP model, higher uncertainty can be found in regions periodically arranged in the feature space forming a layered structure of larger uncertainty. The respective two-dimensional contour plots ( Fig. 5 b) point out this layered structure. 
Larger uncertainty is clearly visible between the experimental data points in the contour plot at a constant scanning velocity of 0.9 m/s. The dispersion length differs along the three features. Otherwise, the uncertainty would have been concentrated in cuboid-like regions of the feature space located between the experimental points defined by the three features. This is not the case, since contiguous layered regions of larger uncertainty are present at constant laser powers ( Figs. 4b and 5b). A possible explanation for this uncertainty distribution pattern is the following. After having fitted the model, one may check that the length scale l_P in Eq. (5) for laser power has the lowest value compared to the other features. The correlation length defines how far one should move along the corresponding axis to see a significant change in the target values. The lower the length-scale value for a particular feature, the weaker the correlation between points along the axis defined by that feature [39]. When varying the laser power and moving away from the experimental points along the laser-power axis, a higher uncertainty thus results than when varying the other feature values.

5.1 Aleatoric and epistemic uncertainty

The total uncertainty is distributed in the feature space within two characteristic structures ( Figs. 4b and 5b): (i) a contiguous region demarcated by low scanning velocities (below 0.9 m/s) and higher laser powers (above 100 W), and (ii) the layered structure. This finding could indicate two different sources of uncertainty. In machine learning, a distinction between types of uncertainty may often appear unnecessary, because the ML model is asked to make a prediction for an output and the exact source of its uncertainty may then be irrelevant [54]. If, however, the aim is directed towards understanding the dataset for ultimately reducing the uncertainty of the ML model, a distinction of the corresponding sources is vital.
Any source of uncertainty can generally be categorized as either aleatoric or epistemic [55,54]. Aleatoric uncertainty is inherent in the observations. To be more precise, the outcome of a specific experiment, such as e.g. "coin flipping", is dictated by inherently random effects [54]. Consequently, this type of uncertainty cannot be reduced by gathering more knowledge or improving the predicting model. By contrast, epistemic uncertainty originates from a lack of knowledge and can in principle be reduced by providing additional information for training the model [54]. The predicted standard deviation values σ* were calculated according to Eq. (1) and are comprised of an aleatoric and an epistemic uncertainty contribution (see, e.g., [56]). The aleatoric noise level is governed by the term σ_a from Eq. (2), while σ_e corresponds to the model's epistemic uncertainty. Figs. 6a and 6b display the aleatoric and epistemic distribution values predicted by the HGP model, respectively. The modeled aleatoric part properly captures the noise-level behavior arising from the training dataset and comprises the contiguous region demarcated by low scanning velocities and higher laser powers, as also displayed in Fig. 4b. Thus, the aleatoric contribution poses one source of the total uncertainty. Fig. 6b visualizes the predicted epistemic uncertainty, also obtained from the training set. The highest values of the standard deviation are located in regions between the experimental data points, readily characterized by a layered structure. Since the superposition of Figs. 6a and 6b yields the total uncertainty visualized in Fig. 4b, the epistemic part is hence the second source of the total uncertainty. As previously mentioned, uncertainty quantification can be a powerful tool to design small datasets specifically for developing ML models with high predictive accuracy. The aim is to produce the minimum number of experimental datapoints required for model construction.
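The split σ*² = σ_a² + σ_e² of Eq. (2) can be sketched in a few lines (illustrative numpy code; the aleatoric fraction is a convenience quantity not defined in the text):

```python
import numpy as np

def decompose_uncertainty(var_aleatoric, var_epistemic):
    """Combine the two variance contributions of Eq. (2) into the total
    standard deviation sigma*, and report which fraction of the total
    variance is aleatoric (i.e. irreducible by adding data)."""
    var_aleatoric = np.asarray(var_aleatoric, dtype=float)
    var_epistemic = np.asarray(var_epistemic, dtype=float)
    var_total = var_aleatoric + var_epistemic
    return np.sqrt(var_total), var_aleatoric / var_total
```

A high aleatoric fraction in a region signals that further measurements there would not significantly improve the model, as discussed below.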
The ability to separate the total uncertainty into the aleatoric and epistemic contributions is therefore the basis. The epistemic uncertainty is highest in the uncovered feature space ( Fig. 6 b) reaching a maximum in between the experimental data points. Therefore, this uncertainty contribution can be reduced by providing additional data points of uncovered feature space. By contrast, the aleatoric uncertainty contribution cannot be reduced by training the HGP model with additional experimental data points and instead strongly depends on the measured amorphicity, as depicted by Figs. 4a and 4 b. In order to efficiently improve the model, further experimental data points shall be generated for regions with highest uncertainty. A strategy is to determine the maximum value of the epistemic uncertainty which yields a threshold value, as displayed in Fig. 7 (red dotted line) highlighting the dependence of the total uncertainty on the laser power at constant hatching distance and velocity. One must generate additional datapoints for locations with higher uncertainty than this threshold value. Then the epistemic part is eliminated, so that only the aleatoric contribution remains. The separation into epistemic and aleatoric uncertainty contributions hints at whether performing new measurements would improve the model’s performance. Eliminating epistemic uncertainty, by definition, improves the predicting power of the model. If the aleatoric uncertainty gives the prevailing contribution to the total one in some region, adding new points in that region would not (significantly) improve the model performance. A way to circumvent this issue is to identify sources of the aleatoric uncertainty followed by revising the experimental procedure, especially the characterization method. This is the focus of the following section. 
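The thresholding strategy just described can be sketched as follows (illustrative function and names; per the text, the threshold is the maximum of the epistemic uncertainty, the red dotted line in Fig. 7):

```python
import numpy as np

def points_to_measure(grid, sigma_total, sigma_epistemic):
    """Select candidate locations for additional specimens: grid points
    whose total uncertainty exceeds the maximum epistemic uncertainty,
    used here as the threshold value."""
    threshold = np.max(sigma_epistemic)
    return grid[np.asarray(sigma_total) > threshold]
```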
Nevertheless, validating the described strategy at full scale requires the fabrication of additional samples for the identified processing parameters. This is the subject of future research activities and hence beyond the scope of the present work. 5.2 Sources of uncertainty During LPBF, crystallization can either occur during the initial (partial) vitrification of the melt or be induced into the bulk metallic glass during subsequent cyclic re-heating, due to the layer-by-layer fabrication [35] . During the vitrification of the overlying layers, the heat is extracted through the underlying material, which then experiences a heat treatment. Previously formed glass is re-heated above the glass-transition temperature, so that it devitrifies into a supercooled liquid [57,58] . Upon further heating, supercritical nuclei form and continue to grow, leading to (partial) crystallization of the affected volume. The molten pools are heated to higher temperatures when higher laser powers, lower scanning velocities, and lower hatching distances are employed during LPBF. More heat must then be extracted, resulting in lower cooling rates during solidification of the molten pools [59] and in stronger heating of the underlying material. In this process regime, crystallization of the supercooled liquid is more likely to occur, in turn leading to lower amorphicity. The LPBF process yields reproducible results, since identical powder is processed under the same conditions to fabricate multiple samples with the same microstructure. The size, shape, and maximum temperature of the molten pools are equal, since they are determined by the same processing parameters. As a consequence, all samples should show a similar amorphous volume fraction, possibly with crystals uniformly distributed throughout the whole sample. Thus, LPBF fabrication of the samples most likely does not contribute significantly to the characteristic uncertainty observed in Fig.
6a (bottom-left corner). The next step in generating the experimental dataset used for training the HGP model was the measurement of the amorphous volume fraction using differential scanning calorimetry. The corresponding measurement can only be carried out with a certain resolution. Furthermore, a baseline was generated to extract the crystallization enthalpy used for calculating the amorphicity. The measurement and analysis thus involve an error characterized by a certain absolute value [60] . This absolute value has a stronger impact on samples with lower absolute values of the crystallization enthalpy, ultimately leading to a less accurate determination of the amorphicity, as Fig. 3 shows. The aleatoric uncertainty increases with smaller values of amorphicity. This behavior therefore originates from the input-dependent uncertainty, so that the amorphous volume fraction shows higher dispersion above a laser power of 100 W and below a scanning velocity of 0.9 m/s ( Fig. 6 a). 6 Conclusions Here, we have introduced an HGP model for accurate modeling of materials properties and uncertainty quantification. The realization of such an ML-driven approach for the fabrication of materials with a predestined microstructure and resulting properties was successfully demonstrated by modeling the amorphous volume fraction – the amorphicity – of Zr52.5Cu17.9Ni14.6Al10Ti5 specimens fabricated by LPBF. The amorphicity served as an exemplary property. Altogether, the dataset used consists of 180 Zr-based samples fabricated at 60 unique combinations of varied laser power, scanning velocity, and hatching distance. The amorphicity was determined via differential scanning calorimetry for all fabricated specimens. The developed HGP model shows accurate predictive performance, characterized by a root mean square error (RMSE) of 1.12%.
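For reference, the quoted RMSE is the usual root mean square error between measured and predicted values. A minimal sketch with made-up amorphicity values (in %), not the paper's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, the accuracy metric quoted for the HGP model."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Illustrative amorphicity values in %, not the paper's dataset.
measured  = [98.0, 95.5, 82.0]
predicted = [97.0, 96.5, 83.0]
print(rmse(measured, predicted))  # 1.0
```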
Based on the predicted amorphicity, two characteristic regions of the feature space can be distinguished. A highly amorphous region was observed when laser powers below 100 W were used for the LPBF fabrication of the Zr-based specimens; the amorphicity of this region is hardly affected by varying scanning velocities and hatching distances. By contrast, the second region, characterized by laser powers above 100 W, shows a strong influence of the remaining processing parameters. The amorphicity drops below 80% when, additionally, low hatching distances and slow scanning velocities are used for LPBF fabrication. The HGP model was designed not only to predict the mean values of the amorphicity position-resolved in the feature space, but also to quantify the aleatoric and epistemic uncertainty contributions. The total uncertainty is distributed in the feature space within two characteristic structures, each corresponding to one uncertainty contribution. The epistemic uncertainty, characterized by high values of the standard deviation, is located in regions between the experimental data points, forming a layer-like structure. This uncertainty source can be reduced by providing additional data points in the uncovered feature space. By contrast, more training data points would not reduce the aleatoric uncertainty contribution, which in the present case strongly depends on the measured amorphicity. The aleatoric uncertainty is inherent to the training dataset and showed the highest values of the standard deviation within a contiguous region demarcated by low scanning velocities and higher laser powers. This uncertainty contribution originated from the limited measuring accuracy of the differential scanning calorimetry device used for determining the amorphicity: lower values of amorphicity were measured less accurately.
Thus, the present HGP model not only predicts the amorphicity of Zr-based specimens fabricated by LPBF at a given processing parameter set with high accuracy, but it additionally provides the uncertainty of the respective mean value. Furthermore, this model serves as an analysis tool for investigating the dataset used for training. Limited measurement accuracy or the random influence of underlying physical phenomena can be revealed by the aleatoric uncertainty contribution. The conceptualization of the dataset with respect to the selected levels and number of processing parameter combinations determines the epistemic uncertainty. Insights into the uncertainty of the predicted mean values for a certain microstructural descriptor (e.g. amorphicity) or property are vital for an AM device operator who uses an HGP model as a guiding map for the selection of optimum LPBF processing parameters. This additional information allows the operator to evaluate the quality of the dataset and possibly to reveal processing parameter combinations with high uncertainty of the property. By generating additional experimental data points, the corresponding epistemic contribution is eliminated, so that the total uncertainty in this region is reduced. This resource-efficient strategy yields an HGP model with even higher predictive accuracy. The constructed HGP model has, however, limitations. Its extrapolation properties are not reliable for uncovered regions of the feature space spanned by the LPBF processing parameters. This is particularly true for uncovered regions far away from the experimental data points used for constructing the HGP model, as the predicted epistemic uncertainty demonstrates. We note that this limitation is not specific to the ML algorithm itself and can only be overcome by providing additional experimental data points. Along this line, we would like to address possible further developments of the present work.
The GP family of algorithms is widely known in the Bayesian optimization context. Thus, the next logical step would be to combine the presented HGP approach with Bayesian optimization methods. The presence of heteroscedastic noise, however, poses a challenge for Bayesian optimization, and ongoing research activities within the ML community are focusing on this problem (see, e.g., [56] ). We believe that the incorporation of Bayesian optimization tools into the presented HGP method may result in a more effective approach for the optimization of LPBF processing; this is the focus of upcoming works. 7 Data availability statement Data are available from the corresponding author on reasonable request. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments We thank N. Geißler and P.G.K. Seethapettai for experimental support and Sergio Scudino for stimulating discussions. The work of D.C. was supported by the Alexander von Humboldt Foundation. K.K. acknowledges the financial support from the German Research Foundation (DFG KO 5771/1–1) and the Leibniz Association. | [
"MUKHERJEE", "DEBROY", "KING", "POLLOCK", "ALMANGOUR", "SUN", "PAWLAK", "QI", "TAPIA", "WANG", "MENG", "KAMATH", "PARK", "SCIME", "SCIME", "SHEN", "SHEVCHIK", "LEE", "TAPIA", "LIU", "LIU", "HU", "WANG", "LOPEZ", "PAULY", "SOHRABI", "LI", "LI", "HE", "KOSIBA"... |
89733c47165b4a3ba4934b38711f5e69_Demonstrating sorption analogy of lanthanides in environmental matrices for effective decision-makin_10.1016_j.geoderma.2023.116730.xml | Demonstrating sorption analogy of lanthanides in environmental matrices for effective decision-making: The case of carbon-rich materials, clay minerals, and soils | [
"Serra-Ventura, Joan",
"Rigol, Anna",
"Vidal, Miquel"
] | Examining the effect of lanthanide-contaminated wastes, which have the potential to impact other environmental compartments, requires conducting interaction studies with soils, as feasible first receptors of lanthanide leachates, and, if necessary, with sorbent materials, such as clay minerals and carbon-rich materials, which can serve as natural barriers and as immobilisation agents in remediation strategies. In this context, it is relevant to have available and reliable data on solid–liquid distribution coefficients (Kd) to understand lanthanide sorption in these environmental matrices. Moreover, confirming lanthanide sorption analogies permits filling data gaps and extrapolating data among different contaminated scenarios, and thus facilitates the availability of input data for decision-making related to the impact of a contaminated site. In this study, we demonstrate for the first time an analogous sorption of La, Sm, and Lu in carbon-rich materials (i.e., biochar and activated charcoal), clay minerals, and soils, through laboratory batch experiments. The obtained sorption Kd values revealed similar sorption patterns among the three lanthanides for each matrix tested, even at different initial lanthanide concentrations. In all matrices, the maximum Kd values exceeded 10^4 L kg−1, with a significant decrease when testing high lanthanide concentrations. The analogy was first confirmed by examining the Kd correlations for the La-Sm, Lu-Sm, and La-Lu pairs within each matrix, for which strong linear correlations were obtained in all cases. Data compilations were built with our own and literature data, and the derived cumulative distribution functions revealed statistically equal lanthanide distributions and Kd best estimates. In addition, Kd variability decreased when grouping the data according to significant material properties.
For the first time, Kd (Ln) best estimates for different scenarios and materials were proposed as input data for risk assessment models. | 1 Introduction Lanthanides (Ln) are widely used in technological sectors, with magnets, catalysts, and alloys forming the largest commercial markets ( Eliseeva and Bünzli, 2011; Navarro and Zhao, 2014; Stegen, 2015 ). Their growing use is causing a related increase in Ln-enriched wastes and source-terms that could affect environmental compartments. Soil and aquatic systems with concerning levels of Ln are increasingly being found worldwide, especially in China, a major worldwide producer and supplier ( da Pereira et al., 2022 ; Sthiannopkao and Wong, 2013 ), but also in other mine-influenced waters in sites around the world ( Gomes et al., 2022 ; Verplanck et al., 2004 ). Li and Wu (2017) reported up to 100, 70, and 3 µg L−1 of lanthanum (La), samarium (Sm), and lutetium (Lu), respectively, and Sun et al. (2012) reported up to 80, 30, and 1 µg L−1 of La, Sm, and Lu, respectively, in Chinese rivers affected by acid mine drainage. In Europe, unusual enrichments of lanthanides in water bodies of the Wiśniówka mining area have been reported, with concentrations up to 2600, 1900, and 120 µg L−1 of La, Sm, and Lu, respectively ( Migaszewski et al., 2019 ), and approximately 1800, 610, and 8 µg L−1 of La, Sm, and Lu, respectively, in acidic waters of the Tinto River, in Spain ( Lecomte et al., 2017 ). Monitoring Ln concentrations in environmental compartments and examining the interaction of waste-leached Ln in soils, which are a key environmental compartment governing pollutant fate, are key steps in assessing the risk derived from the increase in Ln wastes. Long-term exposure to these wastes has been reported to cause adverse human health effects related to DNA damage or cell death ( Brouziotis et al., 2022 ).
The interaction of Ln in soils depends on their edaphic properties and on the relative weight of phases such as organic matter, clay minerals, and metal oxides ( Ramírez-Guinart et al., 2017 ). To lower the mobility of Ln in soils or remove them from aqueous systems, different types of materials are increasingly being applied. Among these, clay minerals have been used as soil amendments to decrease radionuclide mobility in soils ( Vidal et al., 2001 ) and are often used as engineered barriers in the management of nuclear wastes rich in Ln and actinides ( Alba et al., 2011 ). Carbon-rich materials such as activated charcoal are among the most commonly used sorbents, as they have high sorption capacities due to their high specific surface area and microporous systems ( Tan et al., 2015 ). An alternative attracting increasing interest is biochar, a carbon-rich material derived from biomass residues pyrolysed in the absence of oxygen, thus enabling a circular economy for soil and water treatment ( Saravanan and Kumar, 2022 ). By optimising the pyrolysis procedure, a wide range of the physicochemical properties of biochar can be improved, such as surface functionalisation, exchange capacity, and surface area ( Gopinath et al., 2021 ), enabling an increase in their sorption capacity. Other carbon-rich materials that could be used for the same purpose are coal fines, a by-product of coal originating from metallurgical industries that has a low market value and usually accumulates in stockpiles due to its expensive disposal. To investigate the sorption behaviour of Ln in carbon-rich materials, clay minerals, and soils, laboratory methods such as batch sorption experiments are required to derive sorption parameters such as the solid–liquid distribution coefficient (Kd, L kg−1) ( Aldaba et al., 2010 ; Ramírez-Guinart et al., 2017 ).
However, the sorption data of Ln in environmental matrices currently available in the literature do not permit demonstrating or ruling out a comparable behaviour of Ln for the same type of pure soil phase, bulk soil, or environmentally relevant material. This makes it difficult to understand the Ln sorption process and to extrapolate existing information to untested Ln. For this reason, the present work investigated possible chemical analogies in the sorption of Ln in different matrices of environmental interest by obtaining data from batch experiments and enhancing the resulting datasets with data available in the literature. La, Sm, and Lu were used as representatives of the Ln series, since La and Lu have the most dissimilar ionic radii within the series, while the existence of both stable and radioactive isotopes makes Sm a lanthanide of particular interest. Carbon-rich materials, clay minerals, and soils were selected as the environmental matrices under study, and their Kd values and distribution functions were compared to examine similarities or differences in their Ln sorption patterns. This analysis allowed us to identify the relevant factors affecting the variability of Ln sorption and to provide modellers with a single Kd (Ln) best estimate for each material. 2 Materials and methods 2.1 Samples and sample characterisation The carbon-rich materials used in this work were classified into three groups: biochars, coal fines, and activated charcoals. The four biochars tested were produced by slow pyrolysis at 350 °C in the absence of oxygen and came from different feedstocks: castor meal (CM), eucalyptus forest residues (CE), sugarcane bagasse (SB), and the pericarp of green coconut (PC). The samples were sieved to obtain a particle size of < 2 mm. The coal fines (CF) came from the waste of a metallurgical industry. The two activated charcoals, an untreated one (GAC) and a steam-activated one for water processing (NGAC), were supplied by Merck.
The characterisation data of the carbon-rich materials used are summarised in Section S1 and Table S1 in the Supplementary Material . Four natural smectite clays (2:1 phyllosilicates) were used in this work: a hectorite (HEC) (Source Clays Repository of the Clay Minerals Society, University of Missouri, Columbia, USA); two montmorillonites, STx-1 (Source Clays Repository of the Clay Minerals Society, University of Missouri, Columbia, USA) and SCa-3 (Solvay Alkali GMBH); and a bentonite (FEBEX) (ENRESA, Spain), with a montmorillonite content greater than 90 % ( Villar et al., 1998 ). Only the fraction of < 2 µm was used for the sorption experiments. The characterisation data of the clay minerals used are summarised in Section S1 and Table S2 in the Supplementary Material . Six agricultural soil samples from Spain and other locations across Europe (DELTA2, DUBLIN, ANDCOR, CABRIL, ASCO, and RED STONE) with contrasting edaphic properties were selected from a well-characterised collection of soils ( Ramírez-Guinart et al., 2017 ). All soil samples were taken from the surface layer (0–10 cm), air-dried, and sieved to obtain a particle size of < 2 mm. The classification and characterisation data of the soils used are summarised in Section S1 and Table S3 in the Supplementary Material . 2.2 Laboratory batch sorption experiments The sorption experiments consisted of equilibrating 2 g of carbon-rich materials, 0.2 g of clay samples, and 1 g of soils with 50, 30, and 25 mL of the initial Ln solutions, respectively, in polypropylene centrifuge tubes. La, Sm, and Lu stock solutions were prepared by dissolving weighed amounts of La(NO3)3, Sm(NO3)3, and Lu(NO3)3 (Merck), respectively. Dilutions were performed to obtain the Ln solutions within the range of 0.03 to 10 meq L−1 (up to 13 meq L−1 in the case of the clay minerals), which defined the initial Ln concentrations.
These concentrations are representative of environmentally polluted scenarios caused by mine leachates, as mentioned earlier, while the highest initial concentration served to quantify the maximum sorption capacity of the materials. The suspensions were placed in an end-over-end shaker for 24 h, an adequate contact time to reach equilibrium in the three matrices tested in this study ( Coppin et al., 2002; Kołodyńska et al., 2018; Ladonin, 2019 ). The samples were then centrifuged at 4400 g for 15 min in a Hettich Rotina 420 centrifuge. The supernatants obtained were decanted, filtered through 0.45-µm nylon syringe filters, and stored in polyethylene vials at 4 °C. Aliquots for the analysis of La, Sm, and Lu were acidified to 1 % HNO3. The same procedure was followed for the blank experiments, without spiking with the Ln solutions. Supernatant aliquots from the blank sorption experiments were used to obtain the characterisation parameters mentioned in Section S1 of the Supplementary Material . 2.3 Analytical measurements The levels of La, Sm, and Lu, as well as of the major cations Ca, Mg, K, and Na, in the solutions from the sorption experiments were determined by inductively coupled plasma optical emission spectroscopy (ICP-OES) using a PerkinElmer Optima 3200 RL spectrometer (Perkin Elmer) and the following emission lines (in nm): La, 408.672; Sm, 359.260; Lu, 261.542; Ca, 317.933; Mg, 279.077; K, 766.490; and Na, 589.592. In the cases where the concentrations were below the quantification limit for ICP-OES (Lu, Ca, and Mg: 0.05 mg L−1; La and Sm: 0.1 mg L−1; and K and Na: 0.5 mg L−1), the concentrations were determined by inductively coupled plasma mass spectrometry (ICP-MS) using a PerkinElmer ELAN 6000 spectrometer. The limits of quantification for ICP-MS were 0.1 µg L−1 for La and Sm, and 0.5 µg L−1 for Lu.
2.4 Data treatment The sorbed Ln concentration (Csorb, mg kg−1) was calculated from the initial (Ci, mg L−1) and equilibrium (Ceq, mg L−1) Ln concentrations using Eq. (1), where m is the mass of material (carbon-rich material, soil, or clay mineral) used in the experiment (kg) and V is the volume of the initial Ln solution added (L): (1) Csorb = (Ci − Ceq) × V / m. The sorption solid–liquid distribution coefficient (Kd, L kg−1) was calculated as the ratio of the target Ln Csorb and Ceq, as in Eq. (2): (2) Kd = Csorb / Ceq. 2.5 Creation of the Kd (Ln) datasets The Kd values of the three Ln in the different matrices obtained in this work by batch sorption experiments were combined with data compiled from the literature after a critical review of the available data and publications. The terms used during the search were: ‘rare earth elements’, ‘lanthanides’, ‘lanthanum’, ‘samarium’, ‘lutetium’ and/or ‘biochar’, ‘activated charcoal’, ‘soil’, ‘clay mineral’ and/or ‘sorption’, ‘adsorption’, and ‘removal’, among other combinations. The Kd values from the literature were only accepted if they were directly reported in the body of the manuscript or could be easily extracted from the figures. Furthermore, they had to be derived from batch sorption experiments. Thus, the gathered Kd data were obtained through single-point Kd values, linear sorption isotherms, and the linear part of Langmuir-shaped sorption isotherms. The overall datasets, comprising our experimental data and the data from the literature, consisted of 35 entries for La, 33 for Sm, and 28 for Lu in carbon-rich materials; 70 entries for each Ln in clay minerals; and 44 entries for La and 42 each for Sm and Lu in soils. The references used and the entries extracted for the Kd datasets of La, Sm, and Lu in the three matrices, along with the sorption data from this work, are shown in Table S4 in the Supplementary Material .
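Eqs. (1) and (2) translate directly into code. The following sketch uses hypothetical concentrations together with the 2 g / 50 mL solid-to-liquid ratio quoted for the carbon-rich materials; the function names are ours, for illustration only.

```python
def sorbed_concentration(c_i, c_eq, volume_l, mass_kg):
    """Eq. (1): C_sorb = (C_i - C_eq) * V / m, in mg kg-1."""
    return (c_i - c_eq) * volume_l / mass_kg

def distribution_coefficient(c_i, c_eq, volume_l, mass_kg):
    """Eq. (2): Kd = C_sorb / C_eq, in L kg-1."""
    return sorbed_concentration(c_i, c_eq, volume_l, mass_kg) / c_eq

# Illustrative values: 2 g of material in 50 mL, as in the carbon-rich
# material experiments; the concentrations (mg L-1) are made up.
kd = distribution_coefficient(c_i=10.0, c_eq=0.1, volume_l=0.050, mass_kg=0.002)
print(kd)  # ≈ 2475 L kg-1
```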
2.6 Statistical analysis of the Kd (Ln) datasets The Kd datasets of each matrix tested were subjected to linear regressions using the least-squares method (MATLAB R2022a, MathWorks, Inc.) to assess the relationship between the La-Sm, Lu-Sm, and La-Lu Kd pairs. The chemical equilibrium diagrams of Ln in specific scenarios were constructed using the Chemical Diagrams Medusa-Hydra software. The Kd parameter only takes positive real values and is log10-normally distributed ( Ramírez-Guinart et al., 2020; Sheppard, 2011 ). Thus, a log Kd distribution can be described by means of location parameters given as percentiles (the 50th percentile, considered the most probable value of the distribution and thus the so-called Kd best estimate, BE, together with the 5th and 95th percentiles) and a scale parameter (the geometric standard deviation, GSD) that estimates the log Kd variability within a dataset. First, the log Kd values of a given dataset were ordered by increasing value and assigned an empirical frequency equal to 1/n, where n is the number of entries in the dataset. Then, the experimental cumulative distribution functions (CDF) were derived by assigning a cumulative frequency to the sorted log Kd values, which is the sum of the preceding frequencies (up to 1). The resulting CDF was subjected to the Kolmogorov-Smirnov test to ensure it followed a normal distribution and was fitted to the theoretical CDF equation using the least-squares method through the cftool toolbox of MATLAB. The fitted CDF allowed us to obtain the BE as the antilog of the 50th percentile and the GSD as the antilog of the standard deviation. Fisher’s least significant difference (FLSD) test, used for pairwise comparisons of the log10-transformed Kd datasets, was performed with Statgraphics Centurion 18 (Statgraphics Technologies, Inc.).
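The location and scale parameters described above (obtained in the original work by fitting the CDF in MATLAB's cftool) can be approximated in Python as the antilogs of the mean and standard deviation of the log10-transformed Kd values. The dataset below is illustrative, not one of the paper's compilations; a Kolmogorov-Smirnov check of log-normality could be added on top of this sketch.

```python
import numpy as np

def kd_distribution_summary(kd_values):
    """Summarise a Kd dataset under the log10-normal assumption.

    Returns the best estimate (antilog of the mean log Kd, i.e. the
    50th percentile) and the geometric standard deviation (GSD,
    antilog of the standard deviation of log Kd)."""
    logs = np.log10(np.asarray(kd_values, dtype=float))
    best_estimate = 10 ** logs.mean()
    gsd = 10 ** logs.std(ddof=1)
    return best_estimate, gsd

# Illustrative dataset (L kg-1), not one of the paper's entries.
be, gsd = kd_distribution_summary([100.0, 1000.0, 10000.0])
print(be, gsd)  # best estimate ≈ 1000 L kg-1, GSD ≈ 10
```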
To obtain more valuable Kd BE values with a lower associated variability, the datasets were refined by following different criteria for each matrix tested and splitting them into partial datasets, ensuring a sufficient number of entries to obtain CDFs with an appropriate goodness-of-fit. 3 Results and discussion 3.1 Examination of the Ln sorption analogy in carbon-rich materials 3.1.1 Description of Ln sorption in carbon-rich materials Table 1 summarises the Kd values of La, Sm, and Lu derived from our own experiments, with the initial Ln concentration indicated for each experiment, and Table S5 in the Supplementary Material summarises the equilibrium concentrations of each experiment. Within each carbon-rich material, La, Sm, and Lu generally showed the same trend with increasing initial concentration. In the case of biochars, as previously observed for Sm ( Serra-Ventura et al., 2022 ), the maximum Kd values of La and Lu were mostly found not at the lowest concentration tested (0.05 meq L−1) but at the next higher initial concentration. A further increase in the initial concentration led to a decrease in the Kd values due to saturation of the sorption sites. Still, the sorption of La and Lu in biochars at the lowest initial concentrations could be hampered by competition with solution components such as dissolved organic carbon (DOC), which could prevent the Ln from binding to the carbon-rich material by forming soluble Ln-DOC chelates (see Table S1 in the Supplementary Material ). To support this, chemical equilibrium diagrams were obtained at different initial Ln concentrations and with varying DOC contents of the materials. Given the difficulty of modelling Ln-DOC complexation, and considering that DOC is primarily composed of carboxylic groups and, to a lesser extent, phenolic groups, the complexing capacity of DOC was simulated using ethylenediaminetetraacetic acid (EDTA) as a proxy. The diagrams in Fig.
S1 of the Supplementary Material show an equally distributed presence of the Ln(EDTA)− and Ln(HEDTA) species, in addition to the free Ln ion and cationic species, at the initial and final experimental pH, proving a similar speciation and the same effect of DOC for the three Ln. Regarding the coal fines and activated charcoals, the three Ln presented a very high affinity for the matrix (Kd near or above 10^5 L kg−1), much higher than that for biochars. However, whereas low Kd values at high initial concentrations were not observed in CF and GAC, 10 meq L−1 of La, Sm, or Lu was enough to saturate the sorption sites in NGAC. These results were therefore the first indication that factors such as the Ln concentration or the nature of the carbon-rich material are more relevant to Ln sorption than the specific Ln involved in the sorption process. 3.1.2 Examination of the compiled Kd (Ln) data and CDFs in carbon-rich materials The bibliographic research identified several relevant studies involving biochars and chemically activated biochars, with entries concerning batch sorption experiments with La and Sm that were accepted into the overall dataset ( Awwad et al., 2010; Chen, 2010; Hadjittofi et al., 2016 ; Wang et al., 2016 ; Liatsou et al., 2017; Kołodyńska et al., 2018 ; Zhao et al., 2021 ), along with our sorption data shown in Table 1 . To examine the Ln analogy within the group of carbon-rich materials, linear regression analyses were performed within the overall dataset between the Kd data of the La-Sm, Lu-Sm, and La-Lu pairs, when available for the same material. As the literature data did not provide any of these pairs, linear regressions were performed only with our own sorption Kd data. Significant linear correlations were found for the three pairs of log10-transformed Kd data (p-value < 0.05), presenting the following equations, with the confidence range in brackets. Graphical representations of these correlations are provided in Fig.
S2 in the Supplementary Material . log Kd(Sm) = 0.8 (0.1) × log Kd(La); N = 28, R² = 0.91 log Kd(Sm) = 1.0 (0.1) × log Kd(Lu); N = 28, R² = 0.94 log Kd(Lu) = 0.8 (0.1) × log Kd(La); N = 28, R² = 0.92 The correlations presented a non-significant y-intercept, suggesting that there was no relevant bias between the Kd of the two Ln tested. The slope obtained for Sm vs. Lu was statistically equal to 1, indicating a direct analogy, while the slopes for the La-Sm and La-Lu pairs were also close to 1. By arranging the data in terms of the CDF of the overall dataset, it was possible to evaluate and describe the variability in the Kd datasets of La, Sm, and Lu in carbon-rich materials (see Fig. 1 ). The probabilistic approach yielded Kd BE values differing by less than one order of magnitude between La and Lu (3040 and 970 L kg−1, respectively), which were not statistically different. However, the CDF showed high variability due to the different nature of the carbon-rich materials included and the wide range of Ln concentrations compared. To reduce data variability, besides excluding the unlikely event of finding 10 meq L−1 of Ln in a natural waste leachate, a two-group split was tested. One group contained the biochars (CM, CE, PC, SB, and the literature entries), while the other contained the rest of the carbon-rich materials (CF, GAC, and NGAC). The grouping criterion resulted in Kd distributions with a lower intrinsic variability, with all the fitted CDFs presenting regression coefficients higher than 0.92. The Kd BE values obtained for the non-biochar carbon-rich materials were higher by more than one order of magnitude than those for the biochars, making the former more suitable for remediation purposes. Within each group, the La, Sm, and Lu datasets and their Kd BE values were statistically equal according to the FLSD test, but statistically different when comparing the same Ln values between the two groups.
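Since the y-intercepts proved non-significant, the reported fits reduce to a slope-only (through-origin) least-squares regression of one log Kd series against another. A minimal sketch with synthetic data; the function name and values are ours, not the paper's MATLAB code.

```python
import numpy as np

def slope_through_origin(log_kd_x, log_kd_y):
    """Least-squares slope for y = b * x (no intercept), as used for the
    La-Sm, Lu-Sm, and La-Lu log Kd pairs once the y-intercept proved
    non-significant. Also returns R^2 relative to the mean of y."""
    x = np.asarray(log_kd_x, dtype=float)
    y = np.asarray(log_kd_y, dtype=float)
    b = (x @ y) / (x @ x)  # closed-form zero-intercept solution
    residuals = y - b * x
    r2 = 1.0 - residuals @ residuals / ((y - y.mean()) @ (y - y.mean()))
    return b, r2

# Synthetic, perfectly proportional data: slope 0.8, R^2 = 1.
x = np.array([2.0, 3.0, 4.0, 5.0])
b, r2 = slope_through_origin(x, 0.8 * x)
print(round(b, 3), round(r2, 3))  # 0.8 1.0
```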
This was consistent with the previous examination of the sorption data, demonstrating that the three Ln presented a similar sorption behaviour in carbon-rich materials of different origins and characteristics. This observation led to the building of a joint La, Sm, and Lu dataset, considering that the data could be treated jointly as lanthanides (Ln). Thus, two statistically different Kd best estimates for Ln were proposed, one for biochars and one for the other carbon-rich materials, differing by one order of magnitude. 3.2 Examination of the Ln sorption analogy in clay minerals 3.2.1 Description of Ln sorption in clay minerals The Kd values obtained from the batch experiments with the clay minerals at differing Ci are summarised in Table 2 , whereas Table S6 in the Supplementary Material reports the experimental Ceq measured. The Kd data revealed a similar sorption pattern among the three Ln studied. As in previous studies, a Langmuir behaviour was observed in all cases. All the 2:1 phyllosilicates presented high Kd values until the saturation of the sorption sites at the maximum initial concentration tested ( Galunin et al., 2010 ). The Kd values obtained were higher than 10^4 L kg−1 in all cases, revealing a strong affinity that persisted across the different initial concentrations tested. This affinity could be associated with the formation of inner-sphere complexes between Ln and the clay surface, as previously reported in the literature ( Stumpf et al., 2002; Hartmann et al., 2008 ), in which the complexes were favoured by the basic nature and low ionic strength of the contact solution. In most cases, the sorption in the low initial concentration range followed a linear trend.
3.2.2 Examination of the compiled Kd(Ln) data and CDFs in clay minerals Gathering the batch sorption data of Ln in clay minerals from the literature yielded only a few publications, some of which did not directly report the data in the body of the manuscript (Gu et al., 2022). The collected Kd data came from two scientific publications, producing 54 new entries for 1:1 and 2:1 clay minerals (kaolinite, smectite, and halloysite) under different experimental conditions of pH (from 3 to 7.5), ionic strength (0.01 and 0.025 M NaNO3), and initial Ln concentration (100 to 130 µg/L), among others (Coppin et al., 2002; Yang et al., 2019). The selection of Kd data from these publications was based on how representative the laboratory conditions were of environmental scenarios. To examine the Ln analogy within the group of clay minerals, linear regression analyses were performed using the overall dataset. For this type of material, all the accepted literature entries provided Kd data for the three Ln. Significant linear correlations were found for the three pairs of log10-transformed Kd data (p-value < 0.05), with the following equations, confidence range in brackets (see graphical representations in Fig. S3 in the Supplementary Material): log Kd(Sm) = 1.04 (±0.06) × log Kd(La); N = 70, R² = 0.94 log Kd(Sm) = 0.95 (±0.04) × log Kd(Lu); N = 70, R² = 0.96 log Kd(Lu) = 1.07 (±0.07) × log Kd(La); N = 70, R² = 0.94 The correlations presented a non-significant y-intercept, suggesting that there was no relevant bias in any of the pairs examined. The slopes obtained were mostly statistically equal to 1, indicating a direct analogy among the Ln and that changes in aqueous solution parameters, such as pH and ionic strength, affected the Kd values of La, Sm, and Lu equally.
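A pairwise log-log regression of this kind, including a slope confidence interval and an intercept significance check, can be sketched with SciPy. The data below are synthetic stand-ins for the compiled Kd dataset, generated so that log Kd(Sm) ≈ 1.0 × log Kd(La):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic paired log10 Kd data spanning several orders of magnitude
log_kd_la = rng.uniform(2, 6, size=70)
log_kd_sm = log_kd_la + rng.normal(0, 0.15, size=70)

res = stats.linregress(log_kd_la, log_kd_sm)

# 95% confidence half-widths for slope and intercept (n - 2 degrees of freedom)
t_crit = stats.t.ppf(0.975, df=len(log_kd_la) - 2)
slope_ci = t_crit * res.stderr
intercept_ci = t_crit * res.intercept_stderr

print(f"slope = {res.slope:.2f} ({slope_ci:.2f}), R^2 = {res.rvalue**2:.2f}")
# A direct analogy holds when the slope CI covers 1 and the intercept CI covers 0
print("slope ~ 1:", abs(res.slope - 1) < slope_ci,
      "| intercept ~ 0:", abs(res.intercept) < intercept_ci)
```

The same check, applied per pair (La-Sm, Sm-Lu, La-Lu), mirrors the analogy criterion used throughout the section.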
The construction of the CDF of La, Sm, and Lu from the overall data gathered led to Kd BE values that were not statistically different according to the FLSD test (Fig. 2). Nevertheless, the clay minerals were grouped as 1:1 and 2:1 clay minerals to reduce the dataset variability, removing, as in the case of the carbon-rich materials, the data involving an initial concentration of 13 meq L⁻¹. As summarised in Fig. 2, the Kd BE values for the 1:1 clay minerals were lower by one order of magnitude than those for the 2:1 clay minerals, in agreement with previously published studies on the sorption of heavy metals in clays with different characteristics (Uddin, 2017). Within each partial dataset, the Kd distributions of La, Sm, and Lu exhibited lower variability than the overall dataset and regression coefficients greater than 0.98, with their BE values and distributions statistically equal according to the FLSD test. This confirmed the previous finding that changes in experimental conditions equally affected the Kd values of La, Sm, and Lu in both 1:1 and 2:1 clay minerals. As in the previous section, a joint La, Sm, and Lu dataset was built, and two single, statistically different Kd(Ln) BE values were proposed for the 1:1 and 2:1 clay minerals, differing by an order of magnitude. Thus, 2:1 clay minerals may be the more effective barrier in a remediation strategy. 3.3 Examination of the Ln sorption analogy in soils 3.3.1 Description of Ln sorption in soils As seen in Table 3, La and Lu underwent sorption processes similar to those observed for Sm in a previously published work (Ramírez-Guinart et al., 2018), either exhibiting a linear Kd behaviour across the whole range of initial concentrations tested (DELTA2 and DUBLIN) or reaching site saturation at the highest initial Ln concentration of 10 meq L⁻¹ (ANDCOR, CABRIL, ASCO, and RED STONE).
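The CDF-based route to a Kd best-estimate described above can be sketched by comparing the empirical CDF of the log-transformed data with a fitted log-normal model and taking the geometric mean as the BE. The data are synthetic, and the study's exact fitting protocol may differ from this assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic log10 Kd entries for one material group (illustrative only)
log_kd = rng.normal(3.0, 0.5, 40)

# Empirical cumulative distribution function
x = np.sort(log_kd)
ecdf = np.arange(1, len(x) + 1) / len(x)

# Fit a normal model in log10 space and quantify the CDF fit quality (R^2)
mu, sigma = x.mean(), x.std(ddof=1)
model = stats.norm.cdf(x, mu, sigma)
r2 = 1 - np.sum((ecdf - model) ** 2) / np.sum((ecdf - ecdf.mean()) ** 2)

kd_best_estimate = 10 ** mu  # geometric mean of Kd, in L/kg
print(f"Kd BE = {kd_best_estimate:.0f} L/kg, CDF fit R^2 = {r2:.3f}")
```

Splitting the data into more homogeneous groups before fitting, as done for the 1:1 versus 2:1 clays, narrows sigma and raises the fit's regression coefficient.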
Additionally, the equilibrium concentrations for the low initial Ln concentrations were predominantly within the range of 0.2 to 1000 µg L⁻¹ for La, Sm, and Lu (see Table S7 in the Supplementary Material). To strengthen this assumption, Ln speciation in three different soils was investigated (see Fig. S4 in the Supplementary Material). The speciation of the three Ln in the supernatant under varying pH and anion conditions was examined for high carbonate and sulphate levels in solution (ASCO), low carbonate and sulphate levels (RED STONE), and high carbonate but low sulphate levels (DUBLIN). In all cases, the predominant species across the range of pH values tested were very similar, with Ln³⁺, LnSO₄⁺, and LnCO₃⁺ being the most abundant. Hence, these same species interact with the soil surface during the sorption process of La, Sm, and Lu. 3.3.2 Examination of the compiled Kd(Ln) data and CDFs in soils In the bibliographic search for Kd data for La, Sm, and Lu in soils, several publications were found, although a few did not directly report the data in the body of the manuscript or figures and were therefore not considered (Tang and Johannesson, 2005; Ladonin, 2019). Two scientific papers and two technical reports produced 21 new entries of Kd values for La and 19 each for Sm and Lu in soils, tills, and gyttja (Dinali et al., 2019; Sheppard et al., 2009, 2011; Zuyi et al., 2000). The revision provided extra data for pH and salt conditions different from those used in our experiments, thus enriching the subsequent analysis of analogies between the Ln, but also increasing the related variability.
Significant linear correlations were found for the three pairs of log10-transformed Kd data (p-value < 0.05), with the following equations, confidence range in brackets: log Kd(Sm) = 0.98 (±0.03) × log Kd(La); N = 43, R² = 0.90 log Kd(Sm) = 1.07 (±0.02) × log Kd(Lu); N = 41, R² = 0.97 log Kd(Lu) = 0.92 (±0.02) × log Kd(La); N = 43, R² = 0.92 The correlations presented a non-significant y-intercept, suggesting that there was no relevant bias in any of the pairs examined, with slopes approaching 1. Graphical representations are provided in Fig. S5 in the Supplementary Material. The examination of the CDF of the overall datasets showed that the Kd BE values of the three Ln were within the same order of magnitude (see Fig. 3). The distributions were statistically equal according to the FLSD tests, but with high associated variability. A closer examination of the dataset led to grouping the data according to the dynamics of the interactions of the Ln with the soil. Therefore, ‘freshly contaminated’ (i.e., data from laboratory batch sorption experiments) and ‘native’ (i.e., data from desorption of soil-native lanthanides, which actually quantify a desorption Kd) data groups were created, excluding the 10 meq L⁻¹ initial-concentration Kd data. This approach reduced the variability of the partial datasets with respect to the overall dataset and enabled a better evaluation of the similarity of the Ln distributions in each group. The CDFs of the partial datasets presented regression coefficients higher than 0.97. The distributions of La, Sm, and Lu for the freshly contaminated group were statistically different from those for the native group, as confirmed by the FLSD test. Therefore, the management of long-term contaminated sites will differ from that of freshly contaminated sites owing to the different interaction dynamics with the soil, hence requiring different decision-making.
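The FLSD comparisons used throughout (an omnibus ANOVA followed by unadjusted pairwise tests, run only when the omnibus test rejects) can be sketched as follows on synthetic log Kd groups. Note that SciPy's independent t-test stands in here for the pooled-MSE LSD statistic, a simplifying assumption:

```python
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic log10 Kd datasets for the three lanthanides (illustrative only)
groups = {
    "La": rng.normal(3.5, 0.4, 20),
    "Sm": rng.normal(3.5, 0.4, 20),
    "Lu": rng.normal(3.4, 0.4, 20),
}

# Omnibus one-way ANOVA across the three Ln datasets
f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"ANOVA p = {p_omnibus:.3f}")

# Fisher's LSD logic: pairwise comparisons only if the omnibus test is significant
if p_omnibus < 0.05:
    for a, b in combinations(groups, 2):
        t, p = stats.ttest_ind(groups[a], groups[b])
        verdict = "different" if p < 0.05 else "equal"
        print(f"{a} vs {b}: p = {p:.3f} -> {verdict}")
else:
    print("No evidence the La, Sm, and Lu distributions differ")
```

Statistically equal groups under this procedure are what justifies pooling La, Sm, and Lu into a single joint Ln dataset.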
Within each group, statistically equal distributions and Kd BE values were obtained for the three Ln, as confirmed by the FLSD test. This suggested that La, Sm, and Lu behaved similarly in the short-term scenario of freshly contaminated soils as well as in the desorption of native Ln from the soil. Thus, a joint La, Sm, and Lu dataset was built in order to propose a single Kd(Ln) BE for each of the two scenarios, the two being statistically different from each other. 4 Conclusions The systematic study, with our own and literature data, of the sorption of La, Sm, and Lu in carbon-rich materials, clay minerals, and soils unequivocally showed a Ln sorption analogy in each group of materials tested. Splitting the overall datasets into partial groups confirmed that the distributions of the three Ln tested were statistically equal when comparing carbon-rich materials with different characteristics (such as biochar versus activated charcoals and coal fines) and clay minerals with different structures (i.e., 1:1 and 2:1 phyllosilicates), as well as when the dynamics of the interaction between the Ln and soil were considered (i.e., freshly incorporated versus native Ln). This finding allowed us to simplify the analyses and create aggregated datasets in which the Kd data of La, Sm, and Lu were integrated, proposing single Kd(Ln) best-estimates for risk assessment in the specific scenarios reported, which in turn contributes to filling gaps and enhancing the knowledge available for eventual decision-making on the management of a contaminated site and on the use of materials, such as the clays and carbon-rich materials tested here, for remediation purposes. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements This work was carried out in a Generalitat de Catalunya Research Group (2021 SGR 01342) and has received funding from the Ministerio de Ciencia e Innovación de España (PID2020-114551RB-I00). Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.geoderma.2023.116730 . | [
"ALBA",
"ALDABA",
"AWWAD",
"BROUZIOTIS",
"CHEN",
"COPPIN",
"DINALI",
"ELISEEVA",
"GALUNIN",
"GOMES",
"GOPINATH",
"GU",
"HADJITTOFI",
"HARTMANN",
"KOLODYNSKA",
"LADONIN",
"LECOMTE",
"LI",
"LIATSOU",
"MIGASZEWSKI",
"NAVARRO",
"PEREIRA",
"RAMIREZGUINART",
"RAMIREZGUINART",... |
66cb39272d8e4a428e5659af79f63f0e_A unifying conceptual framework of factors associated to cardiac vagal control_10.1016_j.heliyon.2018.e01002.xml | A unifying conceptual framework of factors associated to cardiac vagal control | [
"Laborde, Sylvain",
"Mosley, Emma",
"Mertgen, Alina"
] | Cardiac vagal control (CVC) reflects the activity of the vagus nerve regulating cardiac functioning. CVC can be inferred via heart rate variability measurement, and it has been positively associated to a broad range of cognitive, emotional, social, and health outcomes. It could then be considered an indicator of effective self-regulation, and given this role, one should understand the factors that increase and decrease CVC. The aim of this paper is to review the broad range of factors influencing CVC, and to provide a unifying conceptual framework to integrate those factors comprehensively. The structure of the unifying conceptual framework is based on the theory of ecological rationality, while its functional aspects are based on the neurovisceral integration model. The structure of this framework distinguishes two broad areas of associations: person and environment, as this adequately reflects the role played by CVC regarding adaptation. The added value of this framework lies at different levels: theoretically, it allows integrating findings from a variety of scientific disciplines and refining the predictions of the neurovisceral integration model; methodologically, it helps identify factors that increase and decrease CVC; and lastly, at the applied level, it can play an important role for society regarding health policies and for the individual to empower one's flourishing. | 1 Introduction 1.1 The need for a framework to understand the factors associated to cardiac vagal control The phenomenon of heart rate variability (HRV), representing the change in the time intervals between adjacent heartbeats (Shaffer et al., 2014), was first reported as an observation in a horse by Hales in 1733, as described by Berntson et al. (1997).
Since then, it has been the focus of an extensive amount of research, mainly because it can non-invasively index the activity of the vagus nerve (Berntson et al., 1997; Chapleau and Sabharwal, 2011; Laborde et al., 2017b; Malik, 1996), the main nerve of the parasympathetic nervous system, and more specifically the activity of the vagus nerve modulating cardiac functioning, which we hereafter refer to as cardiac vagal control (CVC). CVC is linked with a broad range of self-regulation phenomena such as cognitive performance, emotion and stress regulation, health, and social interactions (Porges, 2007b; Shaffer et al., 2014; Thayer et al., 2009). The amount of CVC research is quite impressive: on the 2nd of March 2018, a search of the Web of Science (Thomson Reuters) using keywords linked to CVC returned about 40,000 unique results (see Section 2.1 for details). At the theoretical level, a summarising and categorising endeavour seems crucial in order to acquire a comprehensive overview of the factors associated to CVC. At the applied level, such an endeavour would increase awareness of the potential risk factors for CVC and help to understand the factors able to improve it. Therefore, the aim of this paper is to provide a unifying conceptual framework of factors influencing CVC. 1.1.1 Framework rationale Two main theoretical accounts exist regarding CVC: the neurovisceral integration model (R. Smith, Thayer, Khalsa and Lane, 2017; Thayer et al., 2009) and the polyvagal theory (Porges, 2007b). While the predictions of the neurovisceral integration model cover cognition, emotion, and health, the polyvagal theory focuses on social functioning. Given that the neurovisceral integration model is the more complete of the two regarding its predictions related to CVC, we focus on this model to ground the physiological aspects of the framework developed in this paper.
The neurovisceral integration model assumes that a higher CVC is linked to an overall better self-regulation of the organism, and specifically to better executive functioning, better stress and emotion regulation, and better health (Thayer et al., 2012; Thayer et al., 2009). The neurovisceral integration model is based on the central autonomic network (Benarroch, 1993), a functional network linking the heart to the prefrontal cortex. A full description of the central autonomic network is beyond the scope of this paper; interested readers can refer to Thayer et al. (2009) for a detailed explanation of its role in self-regulation processes. Finally, another recent theoretical development about CVC, the vagal tank theory (Laborde et al., 2018b), based on the neurovisceral integration model (R. Smith et al., 2017; Thayer et al., 2009) and the polyvagal theory (Porges, 2007b), points out the importance of considering the 3Rs of CVC functioning (i.e., resting, reactivity, and recovery) (Laborde et al., 2017b). The main focus of this theory is to understand the importance of both tonic and phasic CVC in line with the implications associated to adaptation. Those three adaptation mechanisms will be taken into account in developing this framework, with a focus on resting CVC. CVC is genetically inherited to a certain extent (Neijts et al., 2014, 2015; Wang et al., 2009) and relatively stable. Consequently, many studies differentiate between participants high and low in resting CVC (Thayer et al., 2009). However, even if CVC is to some extent genetically determined and relatively stable, there are many factors associated to it, positively and negatively, on a short- or long-term basis. To date, no summary endeavour has been undertaken to integrate those associations comprehensively (for an exception focusing on physiological and environmental factors, see Fatisson et al., 2016).
This hinders the development of the research field, because researchers investigating CVC must each perform a tedious search for influential factors before beginning empirical work, which adds to the methodological aspects to consider when conducting research with CVC (Laborde et al., 2017b; Quintana et al., 2016; Quintana and Heathers, 2014). This search currently resembles an effort to collect some pieces of an immense puzzle corresponding to the huge amount of scientific work published on CVC each year. A unifying framework would also allow the development of a sound theoretical basis on which future empirical work can find solid foundations. This is an important endeavour given the relationship of CVC with cognitive, emotional, social, and health regulation, and generally all mechanisms related to the successful adaptation of the organism to the challenges individuals face on an everyday basis, as depicted by the neurovisceral integration model (Thayer et al., 2009). Developing this framework creates an important advancement at the theoretical level, serving as ground to build further research upon, such as investigating dose-response relationships, quantifying effects in terms of magnitude and duration, as well as delineating improvement and impairment thresholds. A systematic categorisation of factors influencing CVC will also contribute to refining the predictions of the neurovisceral integration model (Thayer et al., 2009), giving the opportunity to test empirically whether its predictions apply regardless of the source of CVC changes, and eventually refine them. Indeed, the majority of evidence considers higher CVC a positive phenomenon (i.e., a linear relationship); however, in some cases a non-linear relationship depicts the association between CVC and positive adaptation (Kogan et al., 2013; Kogan et al., 2014; Stein et al., 2005).
The development of this framework will enable a more systematic scrutiny of these linear and non-linear relationships between CVC and adaptation processes according to the large range of factors associated to it. At the methodological level, it is intended to guide researchers in designing experiments aimed, for example, either at increasing CVC or at better understanding how it can get depleted, completing recent recommendations on the topic (Laborde et al., 2017b). Furthermore, it can inform researchers about the confounding factors that might influence their results. At the applied level, it is aimed to inspire people regarding the many ways that exist to take care of aspects of their lives related to their mental and physical health, given the established links of CVC with cognition, emotion, and general health (Thayer et al., 2009). It can also help to inform practitioners about CVC and to understand how to measure and influence it in their own and their clients' daily lives in order to improve self-regulatory abilities (Segerstrom and Nes, 2007). 1.1.2 Ecological rationality to structure the factors influencing CVC We structure our unifying conceptual framework of factors influencing CVC according to the theory of ecological rationality (Todd and Gigerenzer, 2012), which assumes that human reasoning and behaviour are considered ecologically rational when they are adapted to the environment in which humans evolve. A metaphor from Simon is often used to represent ecological rationality, depicting human rational behaviour as being “shaped by a scissors whose two blades are the structure of task environments and the computational capabilities of the actor” (Simon, 1990, p. 7). Therefore, the person and the environment are separate domains in themselves; however, they do intersect, where the blades meet, to create another domain: person/environment.
According to this theory, person specifically describes the properties of the human mind, referring to its computational capacities, and environment is conceived according to its physical and social structure. Person/environment is considered according to the notion of fit between the computational capacities of humans and the structure of the environment in which humans evolve at a given moment. Given that CVC is expected to reflect the adaptive fit between the person and the environment (Thayer et al., 2009; Thayer and Lane, 2009), the ecological rationality framework seems an appropriate basis to underpin our framework. Thus, we build upon this person/environment distinction to formulate our conceptual framework integrating the factors influencing CVC (see Fig. 1). We acknowledge that we do not use the original definitions of person and environment from the ecological rationality framework, because we adapt them to match our perspective on CVC. Furthermore, this means that we do not use the original predictions of ecological rationality theory but instead use the overarching structure of the theory, which matches the approach of CVC as the adaptive fit between person and environment. Hence, specifically in our framework, after adapting the ecological rationality theory to fit CVC, we define the person dimension as reflecting the factors influencing CVC that have the individual at their core. Within the person dimension we delineate three major CVC associations: biological characteristics, being either stable or transient; somatic interventions and stimulation methods; and behavioural strategies, reflecting all actions that one could take to influence CVC. The environment dimension reflects all factors influencing CVC that have the environment at their core, further elaborated as being either social or physical.
Social environment refers to all interactions, or absence of interactions, of an individual with other people, extended as well to interactions with animals. Physical environment refers to the physical properties of the environment. Finally, the person-environment dimension reflects the interaction between the person and the environment associated to CVC. The main category here will be stressors, whether physical, mental, or health-related. 2 Main text 2.1 Survey methodology 2.1.1 Inclusion and exclusion criteria In terms of variables included, CVC is reflected in four parameters of HRV (Chapleau and Sabharwal, 2011; Grossman et al., 1990; Laborde et al., 2017b; Malik, 1996): for the time domain, the root mean square of the successive differences between adjacent normal RR intervals (RMSSD), the percentage of successive normal sinus RR intervals differing by more than 50 ms (pNN50), and the peak-valley analysis (also referred to as peak-to-trough); and for the frequency domain, high frequency (HF). Whenever possible, we refer in this paper to research based on one of those four CVC indicators. In addition, we considered another HRV indicator that can also represent CVC, known as respiratory sinus arrhythmia (RSA). RSA refers to the heart rate variations related to the respiratory cycle (Eckberg and Eckberg, 1982), which is equivalent to HF and is therefore also supposed to represent CVC (Berntson et al., 1993) when the respiratory frequency lies between 9 and 24 cpm (Malik, 1996). In this framework, we discarded studies indicating only increases or decreases in HRV without mentioning exactly which HRV parameter was involved, because in this case the role of CVC cannot be properly inferred. Moreover, we also discarded work based on the sympatho-vagal balance (Montano et al., 2009), which is reflected by the low frequency/HF ratio.
This is because the mechanisms underlying the links of the sympatho-vagal balance with behaviour are far less clear than those underlying the links with CVC (Billman, 2013; Quintana and Heathers, 2014). Finally, one element excluded from this unifying conceptual framework is research on CVC in animals. Although CVC can be found in all vertebrates (Taylor, 1994), and although mechanisms similar to those depicted in humans can be observed in mammalian (Porges, 2007b) as well as non-mammalian species (Monteiro et al., 2018; Taylor et al., 1999; Taylor et al., 2014), this paper focuses specifically on human studies, and we therefore do not report any animal studies (e.g., Carnevali and Sgoifo, 2014; Kovacs et al., 2014; von Borell et al., 2007). 2.1.2 Search strategies To establish this framework, we adopted two complementary strategies: top-down, starting from ecological rationality theory and the person and environment dimensions; and bottom-up, with a systematic literature search. This search was carried out on the 2nd of March 2018, using the search engine of the Web of Science (Thomson Reuters). The following keywords were used: "heart rate variability", returning 24,158 results; "vagal", returning 17,456 results; "parasympathetic", returning 11,987 results; and "respiratory sinus arrhythmia", returning 2,643 results. A combined search of those four keywords provided a total of 43,293 unique results, which were first systematically scanned by title, and then, when deemed necessary, by abstract and full text. The main principles of a general inductive approach were followed to generate the categories from this literature search (Thomas, 2006). The three authors discussed the categories until consensus was reached regarding the final categorization. The final categorization system was fine-tuned during discussions within the authors' working groups, which involved researchers experienced either in ecological rationality theory or in CVC research.
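The two time-domain vagal indices retained in the inclusion criteria, RMSSD and pNN50, follow directly from their standard definitions over successive RR-interval differences. A minimal sketch with an illustrative RR series in milliseconds (not data from any cited study):

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms)."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def pnn50(rr_ms):
    """Percentage of successive RR-interval differences exceeding 50 ms."""
    diffs = np.abs(np.diff(np.asarray(rr_ms, dtype=float)))
    return float(100.0 * np.mean(diffs > 50))

# Short illustrative normal-to-normal interval series (ms)
rr = [812, 790, 850, 880, 795, 805, 860]
print(f"RMSSD = {rmssd(rr):.1f} ms, pNN50 = {pnn50(rr):.1f} %")
```

Both indices grow with beat-to-beat variability, which is why they serve as non-invasive proxies for CVC; the frequency-domain HF measure requires a spectral analysis not sketched here.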
Subsequently, we elaborate on the above-mentioned categories to complete the unifying conceptual framework of factors associated to CVC, providing examples for each category we introduce. Unless mentioned specifically, we always refer to resting CVC. We would like to point out that the following is not an exhaustive and systematic presentation of all factors within each category, considering every single potential moderator, but rather serves as an illustration in order for the reader to get a clearer picture of the content of the unifying conceptual framework. Moreover, we do not assume any direct causal relationships between the factors we mention and CVC, given that CVC may often be a by-product of other processes. We acknowledge that specific mechanisms may be at play between each factor and CVC; however, detailing those mechanisms systematically for each factor would be beyond the scope of this paper. Consequently, our aim in the following is to introduce our framework, presenting its building blocks and listing a number of examples to illustrate each category, without claiming to be exhaustive regarding all factors belonging to a specific category, nor detailing the mechanisms linking each factor to CVC. 2.2 A unifying conceptual framework of factors influencing CVC 2.2.1 Person Within this section we focus directly on the person dimension of the framework. This is defined as factors influencing CVC that stem directly from biological characteristics, somatic interventions and stimulation methods, or behavioural strategies used by the individual. 2.2.1.1 Biological characteristics Biological characteristics include both stable and transient constructs that affect CVC directly at the biological level. 2.2.1.1.1 Stable biological characteristics Regarding the stable biological characteristics of a person, the main findings concern gender, age, body composition, ethnicity, and genetics.
Concerning gender, resting CVC has been found to be greater among women than men across all ages, as shown by 24-hour ECG recordings (Antelmi et al., 2004) and resting measures (Britton et al., 2007). Regarding age, CVC generally decreases with age in adult populations (Antelmi et al., 2004). In relation to body composition, higher levels of fat mass, percentage fat, and waist-to-hip ratio are associated with lower CVC (Kim et al., 2005), while fat-free mass is related to higher CVC (Rossi et al., 2014). Overweight was associated with a lower CVC (Alrefaie, 2014). Regarding ethnicity, a systematic meta-analysis found African Americans to have a higher CVC than European Americans (Hill et al., 2015). Considering genetics, CVC is genetically inherited to a certain extent, as evidenced by both resting and ambulatory measurements (Golosheykin et al., 2017; Neijts et al., 2014, 2015; Wang et al., 2009). The influence of stable biological characteristics is coupled with that of more changeable biological factors, as we detail below. 2.2.1.1.2 Transient biological characteristics Regarding the transient biological characteristics, we present as examples weight loss, circadian rhythm, body position, bladder filling, breathing, blood pressure, body temperature, and hormones. In adults, a moderate weight loss in overweight and obese individuals results immediately in a CVC increase (Sjoberg et al., 2011). Circadian rhythm is associated to CVC, with lower CVC during wake periods and a maximum reached during the night (Boudreau et al., 2012). Body position is associated to CVC, being highest in supine, lower when seated, and lowest when standing (Young and Leicht, 2011). In a further comparison to the supine position, the right lateral decubitus position increases CVC more than supine (Kuo and Chen, 1998).
Finally, endogenous factors associated to CVC include bladder filling (Heathers, 2014), blood pressure (Purves et al., 2004), body temperature (Carrillo et al., 2013), breathing (Eckberg and Eckberg, 1982), hormones (Armour, 2008), and the menstrual cycle (Bai et al., 2009). In summary, both stable and transient biological characteristics are associated to CVC and should hence be considered attentively by researchers, as they may influence results when testing the predictions of the neurovisceral integration model (Thayer et al., 2009). They should therefore be taken into account when designing experiments and when interpreting results. While the individual cannot influence most of these factors, a more active choice is presented in the next two categories: somatic interventions and stimulation methods, and behavioural strategies. 2.2.1.2 Somatic interventions and stimulation methods Given that many pathologies are associated to a decreased CVC (Thayer and Ruiz-Padial, 2006), influencing CVC has been a focus of medical research. Specifically, somatic interventions and stimulation methods to influence CVC consist of pharmacologic factors, vagus nerve stimulation, transcutaneous vagus nerve stimulation, brain stimulation, carotid baroreceptor stimulation, esophageal electrostimulation, and oxygen inhalation. 2.2.1.2.1 Pharmacologic factors The first category addresses specific drugs or molecules that have the potential to influence CVC. Drugs can increase or inhibit CVC (Elghozi et al., 2001). A review of the pharmacologic modulation of CVC in heart failure (Desai et al., 2011) mentioned drugs affecting: a) the renin-angiotensin-aldosterone system, including angiotensin-converting enzyme inhibitors, often used in therapies to increase CVC (Binkley et al., 1993), and angiotensin receptor blockers; b) the sympathetic nervous system, with beta-adrenergic antagonists (beta-blockers); and finally c) vasodilator drugs.
Further, drugs that increase CVC include antidepressants (except non-tricyclic antidepressants; Balogh et al., 1993; Kemp et al., 2010), methacholine (Zannin et al., 2015), intranasal administration of oxytocin (Norman et al., 2011), and intravenous injection of insulin and glucose (Stockhorst et al., 2011). Finally, placebo may also influence CVC, as placebo serotonin was found to increase CVC in participants who were told it would enhance recovery (Darragh et al., 2015). In summary, even if it is often difficult to differentiate between indirect and direct pharmacologic effects on CVC (Desai et al., 2011), a broad range of drugs and molecules have been found to be linked to CVC changes. 2.2.1.2.2 Vagus nerve stimulation The second category refers to direct vagus nerve stimulation, an invasive technique using an electrode and a pulse generator/battery with a connecting extension or lead. The electrode is placed in contact with the vagus nerve at the level of the neck, and direct vagus nerve stimulation has the potential to increase CVC (Vonck et al., 2014). Given the broad neural vagal network, it is recognised that vagus nerve stimulation may exert a neuromodulatory effect that activates certain "protective" pathways for restoring health, in inflammatory conditions (Bonaz et al., 2016a; Bonaz et al., 2017) and many others (Yuan and Silberstein, 2015a,b). 2.2.1.2.3 Transcutaneous vagus nerve stimulation While vagus nerve stimulation is investigated as a potential therapy for a range of conditions, its invasive nature and cost limit its use. Therefore, a non-invasive method was developed, through electrical stimulation of the auricular branch of the vagus nerve distributed to the skin of the ear, hence called transcutaneous (Clancy et al., 2014).
The stimulation of the ear, through electrical current, proved able to increase CVC, while this was not the case for manual stimulation ( La Marca, Nedeljkovic, Yuan, Maercker and Elhert, 2010 ). 2.2.1.2.4 Brain stimulation The fourth category is brain stimulation. Several techniques of brain stimulation exist, such as repetitive transcranial magnetic stimulation, transcranial direct current stimulation, transcranial pulsed current stimulation, deep brain stimulation, and electroconvulsive therapy. Repetitive transcranial magnetic stimulation is achieved through the repetitive application of a train of high-frequency or low-frequency magnetic pulses to a brain area, which allows, respectively, an increase ( Siebner et al., 1998 ) or a decrease ( Chen et al., 1997 ) in cortical excitability even beyond the duration of the train of stimuli. A CVC increase was shown specifically through prefrontal repetitive transcranial magnetic stimulation ( Gulli et al., 2013 ). Transcranial direct current stimulation is a neuromodulatory technique in which the exposed tissue is polarized: spontaneous neuronal excitability and activity are modified by a tonic de- or hyperpolarization of the resting membrane potential, with the size of the induced changes depending on the current intensity used ( Nitsche et al., 2008 ). It has been found to be able to increase CVC ( Brunoni et al., 2013 ). Transcranial pulsed current stimulation is a new paradigm derived from transcranial direct current stimulation, based on the fact that converting direct current into unidirectional pulsatile current increases its efficacy in enhancing corticospinal excitability ( Jaberzadeh et al., 2014 ). It can also increase CVC ( Morales-Quezada et al., 2015 ). Deep brain stimulation employs chronically implanted electrodes in the brain to electrically stimulate neuronal networks ( Albert et al., 2009 ), and has also been shown to increase CVC ( Sumi et al., 2012 ).
Finally, electroconvulsive therapy, formerly known as electroshock therapy, is used in psychiatric treatment: seizures are electrically induced in patients in order to provide relief. Seizure activity might actually act as a regulator of neurogenesis in the adult brain ( Rudorfer et al., 2003 ), and electroconvulsive therapy has been found to increase CVC ( Nahshoni et al., 2001 ). 2.2.1.2.5 Carotid baroreceptor stimulation Carotid baroreceptor stimulation is a non-invasive procedure. It can be achieved mechanically by neck suction ( Zannin et al., 2015 ), and also electrically ( Lovic et al., 2014 ). Indeed, carotid baroreceptors can be stimulated non-invasively by externally applying focal negative pressure bilaterally to the neck, which might trigger an increase in CVC ( Zannin et al., 2015 ). 2.2.1.2.6 Esophageal electrostimulation Direct access to vagal afferent fibres in the distal esophagus is possible through the use of a specifically designed esophageal catheter/manometer probe. Vagal afferent electrostimulation at this level, playing the role of a visceral sensory input, was shown to provoke increases in CVC ( Fallen et al., 2001 ). 2.2.1.2.7 Oxygen inhalation Oxygen inhalation can provoke a CVC increase as a direct or indirect effect of hyperoxia (e.g., Zannin et al., 2015 ). Researchers have used, for example, an oxygen percentage in the inspired gas of 60% ( Zannin et al., 2015 ), or delivered air at a rate of 15 L/min, which increased CVC ( Waring et al., 2003 ). 2.2.1.2.8 Continuous positive airway pressure Continuous positive airway pressure works with a ventilator, which keeps the airways continuously open through the application of mild air pressure. Some respiratory diseases such as obstructive sleep apnoea ( Jurysta et al., 2013 ) and chronic obstructive pulmonary disease ( Reis et al., 2010 ) are associated with vagal overactivity.
Applying continuous positive airway pressure helps in this case to reduce this vagal overactivity, in other words to decrease CVC. Overall, somatic interventions and stimulation methods have proved to be effective ways to reliably influence CVC. We should note, however, that most of these procedures are only conducted in people with serious cardiac/neurological diseases. In the majority of cases an increase in CVC is positively associated with health, which would be in line with the predictions of the neurovisceral integration model ( Thayer et al., 2009 ). However, for specific medical conditions linked to an excessively high CVC, like obstructive sleep apnoea ( Jurysta et al., 2013 ) and chronic obstructive pulmonary disease ( Reis et al., 2010 ), a decrease in CVC is needed in order to restore a healthy condition. While somatic interventions and stimulation methods generally require the assistance of medical professionals, in the next category we describe simple behavioural strategies that every individual can use to influence their CVC level. 2.2.1.3 Behavioural strategies Behavioural strategies represent all the organised actions taken by an individual in the pursuit of a goal. They encompass a broad range of activities, including nutrition, non-ingestive oral habits, water immersion, body temperature reduction, sleep habits, relaxation methods, cognitive techniques, praying, media entertainment, music, and exercise. 2.2.1.3.1 Nutrition Three aspects of nutrition have been investigated: diet, beverages, and supplementations. 2.2.1.3.1.1 Diet The diet refers not only to what but also to the way in which a person eats.
Certain foods contribute to increasing CVC, for example: pistachio nuts ( Sauder et al., 2014 ), soy oil ( Holguin et al., 2005 ), yoghurt enriched with bioactive components ( Jaatinen et al., 2014 ), green leafy vegetables ( Park et al., 2009 ), and fatty fish ( Hansen et al., 2010 ) like salmon ( Hansen et al., 2014 ). Further, some foods trigger CVC recovery from stressful events, such as chocolate enriched with gamma-aminobutyric acid (H. Nakamura, Takishima, Kometani and Yokogoshi, 2009 ). Regarding lifestyle diets, vegetarians were found to have higher CVC ( Fu et al., 2006 ). Fasting has shown ambivalent results: an acute fast, representing a stressor for the organism, tends to lower CVC ( Mazurak et al., 2013 ), while long-term caloric restriction ( Stein et al., 2012 ) leads to an increase in CVC, which can be linked to findings related to weight loss ( Sjoberg et al., 2011 ). Finally, food digestion reduces CVC ( Lu et al., 1999 ). Taken together, these results suggest that people can easily influence CVC by taking care of their diet. 2.2.1.3.1.2 Beverages In a similar vein to food, beverages can also influence CVC. For example, coffee ( Richardson et al., 2004 ) and water ( Heathers et al., preprint ; Routledge et al., 2002 ) are differently associated with CVC, depending on the dose. On an acute level, a high level of alcohol consumption decreases CVC ( Sagawa et al., 2011 ), while moderate alcohol intake can enhance CVC ( Quintana et al., 2013 ). Taken together, these findings indicate that beverages are an easy way to influence one's CVC level, given their pervasive presence in our everyday lives.
2.2.1.3.1.3 Supplementations Several supplementations were found to increase CVC, from omega-3 fatty acids ( Sauder et al., 2013 ) and DHA-rich fish oil supplementation ( Sjoberg et al., 2010 ), which reflects the findings about fatty fish, to vitamin B12 ( Sucharita et al., 2013 ), vitamin D ( Hansen et al., 2014 ), magnesium ( Almoznino-Sarafian et al., 2009 ), a multi-vitamin-mineral preparation supplemented with guarana ( Pomportes et al., 2015 ), and lavender capsules ( Bradley et al., 2009 ). Finally, CVC was found to be increased by some herbs, including Ginseng, Oriental Bezoar and Glycyrrhiza ( Zheng and Moritani, 2008 ). In summary, findings show that supplementation can be a reliable way to increase CVC. 2.2.1.3.2 Non-ingestive oral habits This section covers habits - daily activities that are repeated automatically - specifically linked to non-ingestive oral practices, i.e., oral stimulation without swallowing. Habits such as tobacco smoking ( Barutcu et al., 2005 ; Hayano et al., 1990 ; Sjoberg and Saint, 2011 ), waterpipe smoking ( Cobb et al., 2012 ), inhaling second-hand tobacco smoke ( Zhang et al., 2013 ), and chewing gum ( Shiba et al., 2002 ) were found to provoke a decrease in CVC. Overall, the current findings related to non-ingestive oral habits point toward a negative association with CVC. 2.2.1.3.3 Water immersion Water immersion refers to the immersion of either the full body or only part of it in water. Water immersion may increase CVC: for example, during apnea in the form of scuba diving ( Chouchou et al., 2009 ), or during face immersion producing the diving reflex ( Kinoshita et al., 2006 ). Moreover, cold water immersion was found to facilitate CVC recovery in comparison to warm water ( de Oliveira Ottone et al., 2014 ). Overall, water immersion appears to be an effective way to increase CVC, when appropriate water temperature is taken into consideration.
2.2.1.3.4 Body temperature reduction Given that hot temperatures are associated with reduced CVC, cooling techniques might prove effective in increasing CVC after exposure to a hot environment, such as ice packs and fan cooling with intermittent water spray ( Leicht et al., 2009 ) or cryostimulation ( Hausswirth et al., 2013 ). Taken together, these results suggest that cooling techniques have the potential to increase CVC; however, future research should also investigate whether this is the case without prior exposure to heat or without prior exercise. 2.2.1.3.5 Sleeping habits We refer here to sleep as a natural periodic suspension of consciousness, contributing to recovery and restoration of the organism, which makes it a good candidate to influence CVC. It has been found, for example, that sleep deprivation ( Dettoni et al., 2012 ), short sleep duration caused by insomnia ( Spiegelhalder et al., 2011 ) and rotating shift work ( Wong et al., 2012 ) can cause decreases in CVC. In contrast, higher subjective and objective sleep quality markers are linked to higher CVC ( Werner et al., 2015 ), with differences associated with the sleep stages, CVC being higher during non-rapid eye movement sleep than during rapid eye movement sleep ( Berlad et al., 1993 ). Overall, findings indicate that regular and sufficient sleep is beneficial for CVC. 2.2.1.3.6 Relaxation methods Relaxation methods have been shown to have positive effects on CVC and can be achieved in a number of different ways. Relaxation is defined here as any act that is purposely performed to make a person feel focused and calm. Among the different relaxation methods to increase CVC, we find acupuncture ( Kitagawa et al., 2014 ), hypnosis ( Aubert et al., 2009 ) - for mixed findings, see also Laborde et al. (2018a) -, left nostril breathing ( Pal et al., 2014 ), massages (A. P.
Smith and Boden, 2013 ), meditation with mindfulness training ( Garland et al., 2014 ), Qigong (a combination of posture and breathing exercises) ( Chang, 2014 ), Reiki (a hands-on healing technique) ( Diaz-Rodriguez et al., 2011 ), slow paced breathing ( Laborde et al., 2017a ; Wells et al., 2012 ), theta-frequency binaural beats ( McConnell et al., 2014 ), and yoga ( Krishna et al., 2014 ). Overall, relaxation methods appear to be an effective way to increase CVC, in line with their recognised role in our societies as stress management techniques. 2.2.1.3.7 Cognitive methods Cognitive methods may also play a role in CVC. Purely cognitive methods, such as cognitive reappraisal, have been shown, for example, to enhance CVC when watching anger-inducing videos ( Denson et al., 2011 ), while cognitive behavioural therapy has been found to increase CVC in severely depressed patients ( Carney et al., 2000 ). Overall, we can expect a positive influence of cognitive methods on CVC; however, research needs to disentangle purely cognitive methods from other strategies, which are often mixed in cognitive behavioural therapy, for example. 2.2.1.3.8 Praying Prayer is a central part of religions. It involves seeking and responding to the Divine and includes an orientation towards one's own or others' struggles, regrets, needs or desires ( Cole, 2010 ). Praying ( Doufesh et al., 2014 ) and spirituality in general are associated with higher CVC ( Berntson et al., 2008 ). These results point toward a positive relationship between CVC and religious actions and beliefs. 2.2.1.3.9 Media entertainment Media entertainment refers to the pleasant experiences of users while spending time with the media, like watching TV, playing video games, etc. ( Vorderer, 2001 ).
Overall, studies point towards a negative association between media entertainment and CVC reactivity (i.e., inducing a decrease from resting CVC), for example with fear-inducing film scenes ( Gilissen, Koolstra, van Ijzendoorn, Bakermans-Kranenburg and van der Veer, 2007 ), playing a car simulation race game ( Subhani et al., 2012 ), or playing video games that include shock-avoidance situations ( Miller, 1994 ). Regarding a different usage, video games can also be used as a therapy: for example, a video game specially designed to treat mental disorders characterised by problems in impulse control has been shown to increase resting CVC ( Fagundo et al., 2013 ). In sum, the current literature points more towards a negative association between media entertainment and CVC. However, the focus has mainly been on movies or video games involving a high degree of arousal, which is usually associated with a lower CVC ( Brodal, 2010 ). Whether media entertainment can also have a “positive side” for CVC therefore still needs to be researched. 2.2.1.3.10 Music Within this section we consider music as the art of organising sounds in time in order to express ideas and emotions through elements such as rhythm, melody, and harmony. We thus distinguish music as a behavioural strategy of the individual, who can either listen to music or physically play music, from exposure to sounds from the environment, a subcategory which will be discussed later in the environment dimension. The general “calming” or “stimulating” properties of the music seem to impact CVC. When listening to sedative music, CVC is significantly higher than when listening to excitative music ( Iwanaga et al., 2005 ). Further, researchers have tried to identify links between type of music and CVC, but findings appear to be mixed.
For example, in some cases classical music increases CVC in comparison to rock or noise ( Umemura and Honda, 1998 ), while in other circumstances it has no effect ( Perez-Lloret, et al., 2014 ). Similarly mixed findings appeared for heavy metal music, which has been found to provoke either an increase ( Ferreira et al., 2015 ) or a decrease ( da Silva et al., 2014 ) in CVC. In summary, differentiating the effects of music on CVC based on the type of music seems so far inconclusive. Further research is required in order to identify the music properties that are linked to CVC changes, potentially taking into account the musical preferences of the listener. Physically playing music also has an effect on CVC. For example, singing can increase CVC ( Grape et al., 2003 ), while playing expressive piano music in comparison to non-expressive piano music was found to decrease CVC given the increase in arousal level ( Nakahara et al., 2009 ). The effects of playing and listening to music can be combined in therapy. When participating in long-term (minimum duration: six months) music therapy, which includes singing, listening to music, learning the recorder and performing music, CVC increases significantly ( Chuang et al., 2010 ). Overall, music seems to be a reliable and simple way to influence CVC, which makes it an appealing strategy given its place in our everyday lives. 2.2.1.3.11 Exercise In this section we refer to exercise as a planned mode of physical activity, and we differentiate between the effects on CVC during and after exercise. While CVC drops during exercise in order to provide the necessary activation to the body, it starts to rise again some time after exercise stops. This vagal reactivation illustrates the health of the vagal system ( Stanley et al., 2013 ). In the long term, moderate aerobic training increases CVC ( Hautala et al., 2009 ).
Athletes and physically active individuals mostly display increased CVC when compared to non-athletes ( Rossi et al., 2014 ). Overall, integrating regular physical activity into one's lifestyle seems a straightforward way to increase CVC in the long term. Taken together, these findings linking behavioural strategies and CVC functioning are very encouraging, because they illustrate the fact that individuals can influence their CVC through specific voluntary actions ( Thayer et al., 2009 ). This gives them a degree of control over areas of their lives related to cognition, emotions, and health. In addition, such categorisation might trigger some theory development. For instance, the effects on CVC of one of the strategies identified, slow paced breathing, can even be explained by a dedicated theory, the resonance frequency model ( Lehrer, 2013 ). This shows that the unifying conceptual framework of factors influencing CVC allows identifying the areas where the predictions of the neurovisceral integration model ( Thayer et al., 2009 ) apply as expected. It can also identify the areas where further theoretical development might be required in order to explain the precise effects on CVC. 2.3 Environment Within this section we focus on the environment dimension of the framework. This is defined as factors influencing CVC that stem directly from the social and physical aspects of the environment. 2.3.1 Social environment The social aspects of the human environment reflect the regular contact of the individual with other humans and animals, as well as the lack of it. 2.3.1.1 Contact with humans The association of social aspects with CVC starts very early in life ( Field and Diego, 2008 ). CVC of preterm infants can be increased via skin-to-skin contact, also known as kangaroo care ( Feldman and Eidelman, 2003 ).
The quality of caregiving received in the early years of life is associated with a higher CVC in infants ( Bosquet Enlow et al., 2014 ). Conversely, children with coercive-preoccupied patterns of attachment show a lower resting CVC ( Kozlowska et al., 2015 ), while being separated from the attachment figure elicits vagal withdrawal in young children ( Oosterman and Schuengel, 2007 ). Findings concerning contact with other human beings show that social contact and support, or even the subjective feeling of social support, is associated with higher CVC ( Maunder et al., 2012 ). Further, low levels of social integration are associated with lower CVC ( Gouin et al., 2015 ). As a physical manifestation of social support, touch plays an important role as well, and physical contact has been found to increase CVC (R. Feldman, Singer and Zagoory, 2010 ). Furthermore, marriage ( Randall et al., 2009 ), love ( Schneiderman et al., 2011 ) and sexuality ( Costa and Brody, 2012 ) may contribute to increasing CVC. Overall, social contact seems to have a positive association with CVC, which reinforces the view of humans as social beings by nature. Importantly, the role of social contact can be extended to animals, as we see in the subsequent section. 2.3.1.2 Contact with animals Animal contact has been found to have very positive effects on CVC. For example, ambulatory assessments showed that owning a pet ( Aiba et al., 2012 ) and going for a walk with a dog, as well as patting and talking to a dog ( Motooka et al., 2006 ), enhance CVC. Taken together, these results show that low CVC is associated with a lack of social contact and support, and that closer contact with humans or animals is linked to higher CVC, which would be in line with the neurovisceral integration model ( Thayer et al., 2009 ) and the polyvagal theory ( Porges, 2007b ), which positively link CVC to social functioning.
2.3.2 Physical environment Aside from social factors, physical factors of the environment may have an impact on the modulation of CVC, through aromas, lights, sounds, temperature, electromagnetic fields, the outdoor environment, and altitude. 2.3.2.1 Aromas This subcategory refers to the emanation of odour molecules, which are perceived by the sense of smell ( Binder et al., 2009 ). Positive effects on CVC have been found for lavender aromatherapy ( Duan et al., 2007 ; Matsumoto et al., 2013 ), for Cedrol, which can be found especially in essential oils of pine trees ( Dayawansa et al., 2003 ), and for jasmine tea ( Inoue et al., 2003 ). The current evidence hints at a variety of aromas that can be used to increase CVC, while the effect of displeasing or foul odours on CVC still needs to be investigated. 2.3.2.2 Lights Regarding the exposure to lights, we refer to the octaves of electromagnetic radiation to which the organs of sight react ( Manutchehr-Danai, 2009 ). Both light exposure and the absence of light have been found to be positively associated with CVC: for example, bright light exposure can enhance CVC in patients with severe depression ( Rechlin et al., 1995 ), and oscillating coloured light proved to increase CVC more than white light ( Grote et al., 2013 ); however, these effects may differ according to the colours used ( Grote et al., 2013 ; Schafer and Kratky, 2006 ). Conversely, turning off the lights is linked to an increase in CVC ( Boudreau et al., 2012 ), potentially because this engages preparatory systems that allow the organism to rest. In summary, light exposure seems to have positive effects on CVC; however, the explicit role of specific colours and interdependencies with disorders needs further research. 2.3.2.3 Sounds (excluding music) The subcategory of sounds is defined as pressure waves caused by vibrating objects ( Li and Jain, 2009 ), which are interpreted as sound by the hearing organs.
We differentiate sounds here from music in the sense that we consider sounds part of the environment of the person, while we consider music something the person makes an active decision to indulge in, for example by singing, playing, or listening. It makes sense that sounds which we subjectively perceive as displeasing lead to a decrease in CVC; examples of this include sounds of violence and the crying of a baby ( Tkaczyszyn et al., 2013 ), higher levels of noise exposure in an individual's daily life ( Kraus et al., 2013 ), isochronous tones and music-like noise in comparison to a silence condition ( Krabs et al., 2015 ), and mechanical sounds in comparison to bird twitters and synthesiser music ( Yanagihashi et al., 1997 ). On the other hand, listening to sounds of nature in a virtual natural environment produced an increase in CVC in comparison to being exposed to a virtual natural environment without sound and to a control group ( Annerstedt et al., 2013 ). Overall, sounds are associated either positively or negatively with CVC, and individuals should be aware of their soundscape environment given this potential influence. 2.3.2.4 Temperature Regarding temperature, reflecting the degree of heat in the surrounding environment, CVC is influenced by both hot and cold environments. Ambient heat exposure decreases CVC ( Bruce-Low et al., 2006 ), and specifically avoiding higher ambient temperatures in the warm season is linked to increased CVC in elderly people ( Ren et al., 2011 ). Moreover, in younger individuals CVC decreases in hot ambient conditions while it does not change in cold or baseline conditions ( Sollers et al., 2002 ). Concerning cold, acute exposure to a cold environment provokes a minor increase in CVC, which becomes more pronounced after acclimation ( Makinen et al., 2008 ). Finally, abrupt changes in temperature provoke a CVC withdrawal, until the point at which the organism adapts ( Peng et al., 2015 ).
In summary, it seems that hot environments tend to decrease CVC, while cold environments tend to maintain or increase it; when abrupt changes are experienced, CVC recovers after an adaptation period. 2.3.2.5 Electromagnetic fields Electromagnetic fields are physical fields produced by electrically charged objects. They emanate from electrical power supply lines and various types of electrical equipment, such as visual display terminals, fluorescent lights, household appliances and televisions, and mobile phones ( Johansson, 2008 ). Exposure to medium-frequency electromagnetic fields might provoke a decrease in CVC ( Bortkiewicz et al., 1996 ). However, the dose question remains to be elucidated: for example, the effects of mobile phones are not clear, because researchers found no effect of electromagnetic fields from mobile phones on CVC ( Parazzini et al., 2007 ). Hence, further research needs to investigate the dose-response relationship concerning electromagnetic fields. 2.3.2.6 Outdoor environment The penultimate subcategory of the physical environment is the outdoor environment surrounding the individual, which does not clearly affect one particular sensory organ. Here we distinguish between the natural environment (i.e., forest) and the urban environment (i.e., city). It has been found that walking in the forest ( Lee et al., 2014 ) or in a park ( Song et al., 2014 ) compared to walking in the city leads to improved CVC. It may be necessary to be physically present within a natural environment to see the effects on CVC ( Horiuchi et al., 2014 ), because just seeing a virtual natural environment had no effect on CVC ( Annerstedt et al., 2013 ). Polluted air, either outside ( Pope et al., 2004 ) or in a room (L. Y. Lin, Chuang, Liu, Chen and Chuang, 2013 ), was found to decrease CVC, as were exposure to ambient ozone ( Jia et al., 2011 ) and chronic exposure to organic solvents ( Murata et al., 1994 ).
Overall, findings suggest that natural environments are linked to higher CVC in comparison to urban environments; even for city dwellers, however, walking in a park may provoke some increase in CVC. 2.3.2.7 Altitude Altitude, the height of a point in relation to sea level or ground level, may also influence CVC. For example, people who are born at high altitude have a naturally higher CVC, which remains even after a long period of residence at sea level ( Zhuang et al., 2002 ). Conversely, CVC significantly decreases when individuals reach an altitude of 2700 meters, when compared to 170 meters ( Trimmel, 2011 ), and this was also found at 3440 meters when compared to sea level ( Huang et al., 2010 ). Enhanced resting CVC was found to be a marker of the organism's adaptation to high altitude hypoxia ( Bhaumik et al., 2013 ; Passino et al., 1996 ). Returning from a stay at moderate altitude (1500 m–2500 m) has a positive effect on CVC ( Schobersberger et al., 2010 ). Finally, living on the highest floors of high-rise air-conditioned buildings increases CVC when compared to the lower floors (P. C. Lin, Chen, Kao, Yang and Kuo, 2011 ). Overall, it seems that dwelling at higher altitudes will initially provoke a decrease in CVC. However, after an adaptation period CVC can reach its initial level or even a higher level than before, and these levels are preserved to some extent when returning to sea level. Thus a stay at altitude may have long-term positive consequences on CVC. In summary, findings show that our physical environment may have a strong influence on our CVC, which highlights the need to carefully consider our physical surroundings. This nicely complements the neurovisceral integration model ( Thayer et al., 2009 ) regarding the adaptation properties of CVC, making it possible to clearly identify the role of the physical components of the environment in the adaptation of the organism.
As a general overview of the environment dimension, we can conclude that both the social and physical aspects of the environment play a role in CVC, which helps to make people aware of the importance of their surroundings regarding CVC. 2.4 Person/Environment Within this section we focus directly on the interactions between the person and the environment dimensions. This is defined as factors influencing CVC that stem directly from multiway interactions, or transactional processes, between the person and the environment. This specifically concentrates on different levels: physical, mental, and health-related. Given that we consider the transaction between the person and the environment, we will refer to those levels in terms of stressors, based on Lazarus and Folkman (1984) . When defining stressors, we build on the classical definition by Selye (p.32) regarding stress: “the non specific response of the body to any demand made on it” ( Selye, 1936 ). As the terminology can be seen as vague and has been the object of numerous debates (see for example Knapp, 1988 ; Rice, 2012 ), we will precisely define what we mean when discussing stress. When talking about the factors influencing CVC, we refer to the demands that affect CVC, while the consequences of those demands are reflected in CVC increases/decreases, a distinction which Selye later refined as stressor vs. stress ( Selye, 1976 ). Those demands can be either physical or mental. Further, we build on this stressor definition to create a last category of health-related stressors, which reflects the fact that health lies at the interplay between the characteristics and actions of a person and his/her adaptation to the environment. The demands will hence be physical, mental, and health-related; and the consequences of those demands will reflect the changes in CVC, either increasing or decreasing it. 2.4.1 Physical stressors In this section we refer to physical demands as physical stressors.
In terms of reactivity, physical stressors require a vagal withdrawal in order for the organism to meet the physical demands of the task (Y. Nakamura, Yamamoto and Muraoka, 1993 ; Stanley et al., 2013 ). This reflects the evolutionary role of CVC as a “call to arms” mechanism nurturing the fight or flight response ( Porges, 2007a , 2007b ; Thayer et al., 2009 ). The fight or flight response, evolutionarily associated with subjective experiences such as rage and panic respectively, is associated with near complete CVC withdrawal ( Beauchaine et al., 2007 ; Porges, 1995 , 2001 ). This CVC withdrawal facilitates large increases in cardiac output by the sympathetic nervous system, which no longer faces the opposition of vagal inhibition. During the transition from rest to exercise, the increase in heart rate to meet the physical demands is predominantly mediated by CVC withdrawal and, after this CVC withdrawal, by an increase in sympathetic activity ( Fagraeus and Linnarsson, 1976 ; Magder, 2012 ). CVC will also decrease at times of physical fatigue ( Atlaoui et al., 2007 ). Importantly, the level of CVC withdrawal during the physical stressor, as well as the amplitude and kinetics of CVC recovery, depend mainly on the intensity of the physical stressor rather than on its duration ( Stanley et al., 2013 ). Moreover, the initial fitness level of the person influences both the amplitude and kinetics of vagal recovery, with individuals of greater aerobic fitness recovering faster ( Stanley et al., 2013 ). 2.4.2 Mental stressors Demands can also be mental, which we coin mental stressors. These can be defined in line with the work of Lazarus and Folkman (1984 , p. 19), who refer to a mental stressor as a “relationship between the person and the environment that is appraised by the person as taxing or exceeding his or her resources and endangering his or her wellbeing”.
This complements the fact that the appraisal of threat, as opposed to an appraisal of safety, is central to a decrease in CVC ( Thayer et al., 2009 ). Mental stressors can also be coined as pressure ( Laborde et al., 2015 ; Laborde and Raab, 2013 ; Laborde et al., 2014 ; Mosley et al., 2017, 2018a,b ). Further, we want to acknowledge that the influence of a mental stressor on CVC might be very individualised according to the appraisal process ( Lazarus and Folkman, 1984 ). For example, the magnitude of CVC withdrawal can depend on the degree of threat appraisal ( Thayer et al., 2009 ). Mental stressors may be mainly cognitive ( Hjortskov et al., 2004 ) or emotional ( Tucker et al., 2012 ), and both can induce a decrease in CVC. A trauma can be considered an acute case of mental stressor. Trauma reflects exposure to a traumatic or stressful event and the consequent disruption of the individual's ability to respond adequately to a perceived threat related to that event, and it is associated with a lower CVC at rest ( Gillie and Thayer, 2014 ). In summary, physical and mental stressors tend to provoke a decrease in CVC. The magnitude of the reactivity and of the subsequent recovery may depend on individual characteristics. Examples of this include initial fitness levels for physical stressors ( Stanley et al., 2013 ), and the individual's appraisal for mental stressors ( Lazarus and Folkman, 1984 ; Thayer et al., 2009 ). 2.4.3 Health-related stressors In this category, we consider all health-related issues linked to CVC. We refer to them as stressors because they stem from a transaction between the person and the environment, and they reflect the interplay between the characteristics and actions of a person and his/her adaptation to the environment.
We acknowledge that some of the health-related stressors mentioned here could also originate from previously included elements; however, for brevity and clarity they are displayed here. They range from cross-dimensional phenomena such as pain, inflammation, and fatigue, and are then detailed following a generally acknowledged higher-order organisation of medical conditions, starting with symptoms, syndromes, disorders, and then diseases as the over-encompassing dimension. Finally, we describe the category of addictions. Please be aware that for all mentioned health-related stressors a bidirectional relationship can be expected with CVC. This means that a low CVC can facilitate the appearance of a health-related stressor, and in turn a health-related stressor can also provoke a decrease in CVC. A strong argument for considering CVC in health-related stressors is that, overall, CVC's association with self-rated health is stronger than that of inflammatory and other frequently used biomarkers ( Jarczok et al., 2015 ). 2.4.3.1 General mechanisms 2.4.3.1.1 Pain Pain includes a prominent affective-motivational component ( Melzack, 1999 ), and the predisposition towards unregulated affective responding to environmental demands might link it to CVC ( Appelhans and Luecken, 2008 ). Overall, low CVC is linked with pain ( Appelhans and Luecken, 2008 ), chronic pain ( Koenig et al., 2015b ), chronic pelvic pain ( Williams et al., 2015 ), pain catastrophizing ( Koenig et al., 2015a ), and even experimentally induced pain ( Koenig et al., 2014 ). Taken together, findings indicate that lower CVC is linked to altered pain processing. 2.4.3.1.2 Inflammation Inflammation is part of the complex biological response of body tissues to harmful stimuli, such as pathogens, damaged cells, or irritants ( Ferrero-Miliani et al., 2007 ). 
CVC plays a key role in the regulation of the immune response, more specifically through its action on the cholinergic anti-inflammatory pathway ( Tonhajzerova et al., 2013 ). Decreased CVC is linked with increased pro-inflammatory markers, which have negative health consequences. Overall, low CVC is linked to low-grade inflammation ( Jarczok et al., 2014 ; Thayer and Fischer, 2013 ). 2.4.3.1.3 Fatigue We refer here to pathological fatigue, thus differentiating it from the physical fatigue that follows, for example, acute exercise, which was discussed earlier as a physical stressor constituting an adaptive response of the organism. Here fatigue is viewed as a maladaptive response accompanying some pathological states. For example, fatigue in cancer patients is associated with lower CVC ( Crosswell et al., 2014 ; Fagundes et al., 2011 ). 2.4.3.2 Medical conditions The links between lower CVC and general health mechanisms such as altered pain processing ( Appelhans and Luecken, 2008 ) and inflammation ( Tonhajzerova et al., 2013 ) connect it to many medical conditions. Therefore, we now discuss the medical conditions linked to CVC according to a generally acknowledged higher-order organisation of medical conditions into symptoms, syndromes, disorders, and diseases. 2.4.3.2.1 Symptoms A symptom, according to the Oxford English Dictionary ( n.d. ), is a bodily or mental phenomenon, circumstance, or change of condition arising from and accompanying a disease or affection, and constituting an indication or evidence of it, or a characteristic sign of some particular disease. The symptoms linked with low CVC include, for example, headache and migraines ( Koenig et al., 2015d ) and psychogenic non-epileptic seizures ( Ponnusamy et al., 2011 ). 
Symptoms can also be associated with high CVC, for example in the case of malaise causing fainting, referred to as vasovagal syncope – “a sudden transient loss of consciousness and postural tone caused by cerebral hypoperfusion provoked by physiological or emotional stressors” ( Eccles et al., 2015 , p. 7). This originates from sympathetic vasoconstrictor withdrawal causing vasodilatation and increased vagus nerve activity, thus causing bradycardia. This leads to hypotension and, as a result, a loss of consciousness. A higher frequency of fainting events is then related to higher CVC ( Beacher et al., 2009 ); however, in this particular case a higher CVC is considered dysfunctional for the organism. Overall, we found evidence for symptoms being linked to low CVC, but there are also some cases of symptoms associated with high CVC, as in vasovagal syncope. 2.4.3.2.2 Syndromes A syndrome, according to the Oxford English Dictionary ( n.d. ), is a group of symptoms which consistently occur together, or a condition characterised by a set of associated symptoms. Among the syndromes linked with low CVC are, for example, metabolic syndrome ( Jarczok et al., 2012 ), irritable bowel syndrome ( Mazurak et al., 2012 ), and Tourette syndrome ( Hawksley et al., 2015 ). Overall, we find evidence for syndromes being linked to low CVC. 2.4.3.2.3 Disorders A disorder, according to the Oxford English Dictionary ( n.d. ), is a disturbance of the bodily or mental functions, not implying structural change. Among disorders, we describe psychopathology/psychiatric disorders, eating disorders, functional somatic disorders, and breathing disorders. 2.4.3.2.3.1 Psychopathology/Psychiatric disorders Health-related stressors stemming from abnormal or non-functional self-regulatory function are known as pathophysiology, and they originate from or are linked with autonomic dysfunction. 
Low resting CVC may reflect a common psychophysiological mechanism that underpins particular difficulties in emotion regulation and impulsivity ( Koenig et al., 2015c ), in line with the use of CVC as a marker of emotion regulation in healthy adults ( Balzarotti et al., 2017 ). In particular, low resting CVC and excessive CVC reactivity (i.e., withdrawal) have been consistently observed in a wide range of emotion regulation related disorders ( Beauchaine, 2015 ). These include anxiety, phobias, attention problems, autism, callousness, conduct disorder, depression, non-suicidal self-injury, panic disorder, and trait hostility ( Beauchaine, 2015 ). Additional psychiatric disorders linked to low CVC include, for example, borderline personality disorder ( Koenig et al., 2015c ), acute psychosis ( Valkonen-Korhonen et al., 2003 ), post-traumatic stress disorder ( Gillie and Thayer, 2014 ), and schizophrenia ( Clamor et al., 2016 ; Montaquila et al., 2015 ). Overall, a large range of psychopathology/psychiatric disorders seems to be linked with low CVC. 2.4.3.2.3.2 Eating disorders Eating disorders are mental illnesses defined by abnormal eating habits. Anorexia nervosa and bulimia nervosa are among the most common eating disorders, and they are linked to higher CVC. This is because voluntary binge-eating and vomiting provoke an alteration in vagal firing patterns, triggering cyclic increases in vagal activity that in turn drive the urge to binge-eat and vomit ( Faris et al., 2006 ). For both anorexia nervosa and bulimia nervosa the stress response is also affected, with patients suffering from eating disorders showing an overactivity of CVC during a stressful situation ( Het et al., 2015 ). In relation to eating disorders, food craving is associated with lower CVC ( Meule et al., 2012 ). Overall, the established link between eating disorders and disturbed vagal function may point toward using CVC as a relevant clinical target for eating disorders. 
2.4.3.2.3.3 Functional somatic disorders Functional somatic disorders are syndromes of related complaints with no known underlying organic pathology ( Tak et al., 2009 ). The “big three” are chronic fatigue syndrome, fibromyalgia, and irritable bowel syndrome, and they seem to be linked to decreased CVC, although the methodology of the underlying studies should be improved ( Tak et al., 2009 ). 2.4.3.2.3.4 Breathing disorders Not all disorders are associated with decreased CVC; some breathing disorders are a case in point. For example, obstructive sleep apnoea is linked with overactive CVC during the night ( Chrysostomakis et al., 2006 ). Similarly, nasal septum deformities, one of the most frequent causes of nasal obstruction, presenting with a reduction in nasal airflow and chronic mucosal irritation, are linked with vagal overactivity ( Acar et al., 2010 ). Therapy addressing these breathing disorders then contributes to decreasing this vagal overactivity, for example continuous positive airway pressure therapy during the night in the case of obstructive sleep apnoea ( Reis et al., 2010 ). 2.4.3.2.3.5 Other disorders Overall, a broad range of disorders is associated with CVC. We reviewed above the main disorder categories, but there are many others that can potentially be linked to CVC, such as fluency disorders like stuttering ( Jones et al., 2014 ), sexual disorders such as female sexual dysfunction ( Stanton et al., 2015 ), or gastrointestinal disorders such as functional dyspepsia ( Dal et al., 2014 ), as measured by 24-hour recording. 2.4.3.2.4 Diseases A disease, according to the Oxford English Dictionary ( n.d. ), is a condition of the body, or of some part or organ of the body, in which its functions are disturbed or deranged. Lower CVC has been linked to altered pain processing ( Appelhans and Luecken, 2008 ) and inflammation mechanisms ( Tonhajzerova et al., 2013 ), which points to its link with diseases and with risk stratification of medical accidents. 
Cardiovascular diseases have a direct link to CVC. Low CVC is related to cardiovascular diseases ( Thayer et al., 2010 ), such as coronary artery disease ( Evrengul et al., 2006 ) and hypertensive heart disease ( Todoran and Zile, 2013 ). Therefore, enhancing CVC is expected to improve cardiovascular condition ( Olshansky et al., 2008 ; Schwartz, 2011 ; Schwartz et al., 2008 ). Beyond cardiovascular diseases, many other diseases are linked with low CVC, for example type 1 diabetes mellitus ( Javorka et al., 2008 ) and diabetes associated with cardiac autonomic dysfunction ( Uehara et al., 1999 ), early stages of Parkinson's disease ( Buob et al., 2010 ), cancer ( Adams et al., 2015 ), inflammatory bowel disease ( Bonaz et al., 2016b ; Pellissier et al., 2010 ), and epilepsy ( Lotufo et al., 2012 ). Finally, when associated with disease, CVC can be linked to risk stratification for medical accidents and to predicting death events, such as strokes ( Mravec, 2010 ), chronic heart failure ( De Ferrari et al., 2011 ), congestive heart failure ( De Ferrari, Sanzo and Schwartz, 2009 ; Desai et al., 2011 ), and sudden unexplained death in epilepsy ( DeGiorgio et al., 2010 ). In summation, findings encompassing symptoms, syndromes, disorders, and diseases, albeit with some exceptions, almost always link lower CVC to medical complications. 2.4.3.3 Addictions Addictions can be defined as the continued use of rewarding stimuli and/or mood-altering substances or behaviours despite adverse consequences ( Angres and Bettinardi-Angres, 2008 ; Robinson and Berridge, 2000 ). Addictions are associated with self-regulation dysfunction, and lower CVC is associated with addiction, which could be depicted as not having enough self-regulatory strength to resist temptation. Low CVC has, for example, been linked to alcohol abuse ( Thayer et al., 2006 ), substance cravings among alcohol patients ( Ingjaldsson et al., 2003a,b ), internet addiction (P. C. 
Lin, Kuo, Lee, Sheen and Chen, 2014 ), and nicotine dependence ( Gallagher et al., 1992 ; Kupari et al., 1993 ). Refraining from the addictive behaviour also has effects: one week of smoking abstinence has been found to increase CVC ( Minami et al., 1999 ). Resting CVC plays a role, but importantly so does CVC reactivity ( Laborde and Mosley, 2016 ; Laborde et al., 2017b ). In regard to smoking, a blunted CVC reactivity (i.e., a smaller acute decrease) was linked to a decreased tobacco smoking relapse time, while no links were found with resting CVC ( Ashare et al., 2012 ). These findings indicate an association of lower CVC with addiction, as well as an association between blunted CVC reactivity to a stressor and addiction. In the person-environment dimension, we introduced three main dimensions of influences: physical stressors, mental stressors (cognitive and emotional), and health-related stressors. The findings regarding physical, mental, and health-related stressors fit the self-regulation, adaptation, and health functions assumed for CVC by the neurovisceral integration model ( Thayer et al., 2009 ). Still, some relationships within the health-related stressors go against the direction predicted by the neurovisceral integration model ( Thayer et al., 2009 ), meaning that in those cases a higher resting CVC is positively associated with some dysfunction, as with breathing disorders ( Chrysostomakis et al., 2006 ). In this case, the framework provides the opportunity to clearly delineate the predictions of the neurovisceral integration model ( Thayer et al., 2009 ) according to specific categories, and potentially to identify exceptions that can help to further develop the predictions. For the majority of the factors we identified in this dimension, it might seem at first glance rather counter-intuitive to consider them as influential factors. 
In hindsight, it seems that those outcomes can simultaneously be inputs for self-regulation, given the feedback loop involved in any regulation system. 3 Conclusions 3.1 Future directions In this paper we aimed to provide a unifying conceptual framework of factors influencing CVC ( Table 1 ), in order to complement the neurovisceral integration model ( Thayer et al., 2009 ). This endeavour was critical given the role that CVC plays in cognitive, emotional, social, and (physiological) health regulation ( Porges, 2007b ; Thayer et al., 2009 ), which can be evidenced from early ages ( Patriquin et al., 2015 ). The framework we developed was based on the theory of ecological rationality ( Todd and Gigerenzer, 2012 ), making sense of the world by understanding human behaviour in terms of the reciprocal influence between a person and the environment. This reflects to some extent the adaptive processes at stake with CVC, as depicted by the neurovisceral integration model ( Thayer et al., 2009 ). This view fit our attempt to categorise the factors influencing CVC, as illustrated by Fig. 1 . Although some of the dimensions we identified may not be completely under one's control, it is still important to be aware of them, given the impact they can have on our lives. One of the most promising aspects of this unifying conceptual framework is the behavioural strategies we identified to increase CVC, which may have implications for improving one's self-regulation abilities. Taken together with the other dimensions, this could help people to increase, or limit the decrease of, CVC by paying attention to their daily routines and activities and to the environment surrounding them. 
An important avenue for researchers will be to examine empirically the outcomes of CVC changes (either increases or decreases) according to the different factors identified in this framework on cognitive, emotional, social, and (physiological) health regulation, considering for example dose-response relationships. This would also test whether the neurovisceral integration model ( Thayer et al., 2009 ) applies to all cases, that is, whether CVC has similar outcomes regardless of the source that influenced it. Therefore, before concentrating therapeutic efforts on CVC markers, we need to ensure that their positive influence on CVC will also translate into positive outcomes. Another factor to be considered in future research is how individual differences might moderate the way factors are associated with CVC: for example, CVC recovery from a physical stressor depends on the initial fitness level of the person ( Stanley et al., 2013 ), and the response to an (emotional) mental stressor depends on initial CVC ( Park et al., 2014 ). More generally, CVC findings should be systematically considered in light of the characteristics of the sample with which they have been obtained, for example regarding gender, age, clinical condition, etc. 3.2 Potential limitations Some caution is needed when interpreting the presented framework. First, the inductive nature of the generation of the categories within the boundaries of ecological rationality implies that the authors' experience played a role in establishing the resulting framework ( Thomas, 2006 ). We endeavoured to address this issue by discussing our categorization system with experts in ecological rationality theory and CVC research. 
In addition, we highlight that the current version of this framework is not a fixed overview, but rather a flexible categorization system that may see the appearance of new categories in the future, as well as the fine-tuning of existing ones. Regarding the distinction between the categories, we acknowledge that a clear separation between the person and environment dimensions does not appear fully plausible, and it seems very likely that the association of environmental factors with CVC is mediated by individual characteristics or processes. Further, we do not claim to have been exhaustive regarding the factors influencing CVC; our aim was rather to provide the reader with a unifying conceptual framework into which all influential factors could potentially be integrated. Therefore, the studies cited serve an illustrative purpose and are by no means attempts to comprehensively review the different categories of our framework, which would have been beyond the scope of this paper. In line with this illustrative purpose, we did not present any details regarding the mechanisms through which each factor is associated with CVC, and this aspect should be addressed in future focused research. Similarly, the evaluation of the methodological quality of the studies cited was also outside the scope of this paper. Further, given the categorisation aim of this paper, we do not report any effect sizes concerning the studies we mention. Meta-analyses on some factors regarding effects on CVC already exist, for example for health-related stressors such as epilepsy ( Lotufo et al., 2012 ) and schizophrenia ( Clamor et al., 2016 ). We hope our unifying conceptual framework will enable researchers to go forward with those meta-analytic efforts, which will be critical for weighting the importance of the factors identified in this framework. 
In addition, we hope researchers will aim to identify the potential moderators interacting with the effects of the factors influencing CVC, and to evaluate the quality of the studies. The context in which CVC indicators are assessed should also be considered more closely, considering for example the implications of assessing them through resting/reactivity laboratory measurements versus ambulatory measurement. Furthermore, the reader has to keep in mind that some of the factors we mentioned have a clearly unidirectional influence on CVC (e.g., behavioural strategies; CVC does not influence them in return), while for others the relationship is more likely to be bidirectional (e.g., stressors; CVC is influenced by the stressor but also influences the stressor in some way). A further limitation could be that we almost exclusively considered higher CVC as a positive phenomenon. However, recent evidence suggests that the adaptive effects linked to increased CVC may not be linear but rather nonlinear, as evidenced with subjective well-being ( Kogan et al., 2013 ). This is in line with the argument that some biological processes may cease to be adaptive when reaching extreme levels. In this case it may be important to define cut-off values, as in some cases higher CVC does not mean better, as shown in both laboratory and ambulatory recordings ( Kogan et al., 2014 ; Stein et al., 2005 ). In addition, despite the apparent ease of assessing CVC, we would like to remind future researchers not to neglect basic methodological considerations ( Laborde et al., 2017b ; Quintana and Heathers, 2014 ). This is to ensure that changes in CVC parameters really reflect CVC changes and not, for example, changes in respiratory frequency ( Malik, 1996 ), which could help to understand some surprising findings (e.g., Tarkiainen et al., 2003 ). As a last recommendation, we would like to remind the reader that indirect measures of CVC, i.e. 
HRV, are not CVC itself, and therefore increases or decreases in HRV parameters supposed to reflect CVC do not necessarily reflect direct effects on the vagus nerve; for example, they could come from a modification of baroreflex sensitivity ( Desai et al., 2011 ). 3.3 Concluding remarks Despite the limitations outlined above, this unifying conceptual framework of factors influencing CVC reflects a unique endeavour: an attempt to link the immense body of work on CVC from a broad range of scientific disciplines. Its added value therefore lies at different levels. At the theoretical level, it fosters an all-encompassing framework of the research carried out with CVC, spanning a broad range of disciplines such as medicine, biology, psychology, etc. From there, it can support further development of the neurovisceral integration model ( Thayer et al., 2009 ), allowing a systematic empirical test of the factors influencing CVC. This would enable researchers to see whether all factors are associated in the direction predicted by the neurovisceral integration model and eventually refine it to explain findings suggesting that a higher CVC does not always mean a better outcome ( Kogan et al., 2013 , 2014 ). Ultimately, such an endeavour can suitably inform recent theoretical developments aiming to provide a multidisciplinary view of self-regulation mechanisms ( Bridgett et al., 2015 ). The theoretical premises offered by the unifying conceptual framework of factors influencing CVC are also very promising, given that it enables further refinement of the predictions of the neurovisceral integration model ( Thayer et al., 2009 ). At the methodological level, it makes researchers aware of all potential confounding factors on their results involving CVC, and provides them with an overview of parameters that they might like to control when designing their experiments. 
Moreover, it will guide researchers in designing their experiments and in their choice of variables, whether they aim to increase or decrease CVC. At the applied level, it gives valuable indications regarding the factors that could deplete or replenish CVC, which might impact the health, affective, and cognitive life of individuals. Pertinently, it is worth noting that we do not claim that this framework is a conclusive piece of theoretical work; rather, it is a starting point for further study in the area of CVC. By increasing the breadth and depth of research surrounding CVC we can further understand its wide-ranging effects. Consequently, following the recommendations that can be derived from this framework may have important implications in helping to foster a healthier population, which can be considered a building block for a flourishing society. Declarations Author contribution statement All authors listed have significantly contributed to the development and the writing of this article. Funding statement This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Competing interest statement The authors declare no conflict of interest. Additional information No additional information is available for this paper. Acknowledgements We would like to thank the members of the research group of Prof. Dr. Dr. Markus Raab, Institute of Psychology, Department of Performance Psychology, Cologne, Germany, for their critical and helpful comments during the different steps of designing the unifying conceptual framework of factors influencing cardiac vagal tone. Moreover, we would like to thank the members of the research groups of London South Bank University, University of Stirling, and Bournemouth University (United Kingdom), for their stimulating feedback and input during talks on heart rate variability given by the first author. Finally, we would like to thank Ms. 
Frauke Esser for her help in preparing the summary table on factors influencing cardiac vagal tone, and Ms. Marina Dikova for her help in preparing the figure displaying the way heart rate variability is calculated. | [
01684c3bb023446fb6b7c2e45dd264a9_Transcatheter versus minimally invasive surgical aortic valve replacement A propensity score-matched_10.1016_j.xjon.2025.05.001.xml | Transcatheter versus minimally invasive surgical aortic valve replacement: A propensity score-matched analysis in low-risk patients | [
"Lutz, Mark R.",
"Cavanaugh, Shaelyn M.",
"Martin, Samuel J.",
"Gleboff, Anna",
"Dilip, Karikehalli",
"Nazem, Ahmad",
"Cherney, Anton",
"Zhou, Zhandong",
"Lutz, Charles"
] | Objective
Minimally invasive aortic valve replacement (MIAVR) and transcatheter aortic valve replacement (TAVR) represent less-invasive alternatives to conventional surgical aortic valve replacement. In contrast to Society of Thoracic Surgeons (STS) Database data revealing that <10% of all surgical aortic valve replacement procedures are performed via a minimally invasive approach, our center performs a high volume of MIAVR procedures. This propensity-score matched study aims to compare the outcomes of MIAVR versus TAVR in low-risk patients (STS Predicted Risk of Mortality <4%).
Methods
We identified 476 low-risk patients who underwent MIAVR via a right anterolateral minithoracotomy and 679 low-risk patients who underwent TAVR at our institution between 2017 and 2024. From a total of 1155 cases, 1:1 propensity score matching yielded 295 matched pairs.
Results
The matched groups had similar baseline characteristics aside from a higher proportion of tricuspid valves in the TAVR group and greater rates of aortic regurgitation in the MIAVR group. The baseline STS scores were also higher in the TAVR group (1.84 vs 1.69; P = .030), although still below the low-risk threshold (STS-PROM <4.0). Postoperatively, patients in the MIAVR group experienced lower rates of permanent pacemaker implantation (0.4% vs 7.8%; P < .001), aortic regurgitation (0.3% vs 5.4%; P < .001), and paravalvular leak (0.0% vs 5.8%; P < .001). Patients undergoing MIAVR had longer hospital lengths of stay (6.23 vs 2.07 days; P < .001) and higher aortic valve mean gradients (7.29 vs 6.04 mm Hg; P = .004). There was no significant difference in early mortality or stroke rates between the 2 groups.
Conclusions
To our knowledge, this is the first propensity-score matched comparison of clinical outcomes in low-risk patients undergoing MIAVR versus TAVR, revealing that MIAVR could provide lower rates of permanent pacemaker implantation, paravalvular leak, and aortic regurgitation, without any increase in short-term mortality or stroke. Future prospective or randomized controlled trials are needed to validate these results. | Comparing key postoperative outcomes between low-risk patients undergoing MIAVR and TAVR. Central Message In low-risk patients, MIAVR was associated with lower rates of PPM and AI than TAVR, with comparable early mortality and stroke rates, highlighting its potential as a durable, underutilized alternative. Perspective With shared decision making shaping AVR choices, patients increasingly favor less-invasive options. MIAVR demonstrated comparable early mortality and stroke rates to TAVR, suggesting it may confer many of the early benefits associated with TAVR while maintaining the proven long-term durability of SAVR. Integration of MIAVR into clinical practice could expand patient-centered treatment options. Aortic stenosis (AS) is the most prevalent valvular disease in the developed world. It affects approximately 12.4% of individuals older than age 75 years, with severe AS present in about 3.4% of this age group. 1 Historically, conventional surgical aortic valve replacement (SAVR) via a median sternotomy was the gold standard treatment for patients with symptomatic AS. However, the number of patients undergoing SAVR has steadily declined over the past 5 years as transcatheter aortic valve replacement (TAVR) continues to gain popularity. 2 3 Although TAVR was initially marketed for those deemed intermediate or high risk, 4 several groundbreaking clinical trials have expanded the indications to include low-risk patients. 
5 , 6 These studies have demonstrated the early safety and efficacy of TAVR in lower-risk cohorts, yet concerns remain regarding long-term valve durability, paravalvular leak, and pacemaker implantation rates, particularly in younger, low-risk patients who may benefit from a more durable solution. 7 , 8 , 9 Despite the rising predilection for minimally invasive transcatheter alternatives to conventional sternotomy, there has not been a reciprocal increase in the uptake of minimally invasive surgical aortic valve replacement (MIAVR) in recent years. Compared with conventional SAVR, MIAVR, especially when performed through a right thoracotomy, has been associated with reduced transfusion incidence, intensive care stay, hospitalization, and renal failure. Recent literature also suggests that MIAVR may be able to provide a mortality benefit compared with both conventional SAVR and TAVR. 10-13 , 14 However, <10% of SAVR procedures are performed minimally invasively, and the number of centers in the United States performing MIAVR has decreased from 2015 to 2022. 15 , 3 This discrepancy underscores the need for a closer examination of MIAVR as a unique and underutilized option in the treatment landscape. MIAVR offers the potential to combine the benefits and long-term durability of a surgical prosthesis with the reduced invasiveness that makes TAVR appealing to patients. Furthermore, limited direct comparisons between MIAVR and TAVR, particularly in low-risk populations, highlight a significant knowledge gap. The present study aims to address this gap by directly comparing clinical outcomes of MIAVR and TAVR in low-risk patients. Methods This study was approved by the Trinity Health New York (Syracuse, NY) Institutional Review Board (study No. 19-1230-3; January 13, 2020). 
We queried the Society of Thoracic Surgeons (STS) database for patients at our center and identified 476 low-risk patients who underwent isolated MIAVR via a right anterolateral minithoracotomy between 2017 and 2024 ( Table 1 ). Using the STS/American College of Cardiology Transcatheter Valve Therapy Registry, we were also able to identify 679 low-risk patients who underwent TAVR at our institution between 2020 and 2024 ( Table 1 ). From a total of 1155 cases, 1:1 propensity score matching yielded 295 matched pairs ( Figure 1 ). Preoperative Planning Patients with severe AS underwent evaluation through a standardized institutional algorithm to guide procedural planning and referral ( Figure 2 ). The decision-making process incorporated initial referral to cardiac surgery or a dedicated valve clinic, where comprehensive risk stratification was performed. Key considerations included age, frailty, comorbidities (eg, chronic kidney disease, pulmonary disease, and prior chest radiation), and anatomical factors such as porcelain aorta or the need for concomitant procedures. Most patients were discussed by a multidisciplinary heart team, particularly in cases of elevated STS Predicted Risk of Mortality (PROM) score, complex anatomy, or unclear procedural benefit. Based on this assessment and patient preference, patients were triaged to the most appropriate intervention, as illustrated in Figure 2 . Surgical Approach All SAVR procedures were performed minimally invasively through a small right anterolateral minithoracotomy via the second or third intercostal space, with or without rib division, depending on surgeon preference and patient body habitus ( Video 1 ). The use of video assistance was also at the discretion of the operating surgeon. In most cases, cardiopulmonary bypass was achieved via femoral cannulation. Direct aortic arterial cannulation was performed if the patient had severe peripheral vascular disease. 
Cardioplegia strategy involved either Buckberg or del Nido cardioplegia solution delivered antegrade or retrograde at surgeon discretion. All replacement aortic valves (96.9% bioprosthetic) were implanted using fixed pledgeted sutures (2-0 Ti-Cron; Covidien) in a supra-annular position and secured with titanium knot fasteners (COR-Knot Device; LSI Solutions). Transcatheter Approach All procedures were performed in a hybrid operating room or catheterization laboratory under either monitored anesthesia care or general anesthesia, depending on patient comorbidities and procedural complexity. The majority of procedures were performed via a percutaneous transfemoral approach (94.4%) ( Table 2 ). Alternative access routes, including transsubclavian or direct aortic access, were utilized primarily in patients with severe iliofemoral disease, excessive vascular tortuosity, or significant peripheral artery disease. Balloon-expandable valves (BEVs) comprised 95.6% of all TAVR implants, reflecting institutional preference for BEV devices in low-risk patients and those with favorable annular anatomy ( Table 2 ). Self-expanding valves were used selectively in patients with small aortic annuli or extensive annular and left ventricular outflow tract calcification to minimize the risk of annular rupture and patient-prosthesis mismatch. Valve deployment was guided by fluoroscopy and transesophageal echocardiography to ensure optimal positioning, valve function, and the absence of significant paravalvular leak (PVL). Deployment was aimed at ensuring no more than 20% of the device was below the annulus, and the noncoronary isolation technique was utilized to optimize positioning, with the caveat that individual patient factors are always considered. Following valve implantation, patients were monitored for conduction abnormalities with continuous telemetry.
Permanent pacemaker implantation (PPM) was performed only in cases of persistent high-degree atrioventricular block or other conduction disturbances unresponsive to conservative management. Matching Procedure Nearest neighbor matching was performed using the MatchIt package in R (R Foundation for Statistical Computing) to balance preoperative characteristics between the 2 treatment groups. This approach was selected to minimize differences in propensity scores, which were estimated using a logistic regression model incorporating age, sex, body mass index, STS-PROM score, and key comorbidities such as chronic lung disease, diabetes, and peripheral artery disease. A caliper of 0.2 was applied to restrict the maximum allowable difference in propensity scores between matched pairs, enhancing the quality of the match and reducing potential bias. The target standardized mean difference was <0.10 and was achieved for most variables ( Figure 3 ). As a result, 295 well-matched pairs were identified, ensuring comparable baseline characteristics between patients undergoing AVR in both treatment groups. Statistical Analysis Following propensity score matching, we compared the baseline characteristics and surgical outcomes between the groups. Categorical variables were analyzed using χ2 tests to assess the independence between groups. Continuous variables were tested for normality using visual inspections and the Shapiro-Wilk test. Based on these assessments, Student t tests were applied to normally distributed continuous variables, whereas the Wilcoxon rank-sum test was used for those not meeting normality assumptions. This tailored approach to data analysis ensured the application of the most appropriate statistical methods, enhancing the robustness of our findings. All statistical analyses were performed using SPSS software (SPSS Inc) and R.
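The matching procedure was performed with the MatchIt package in R; purely as an illustration, the same idea, logistic-regression propensity scores followed by greedy 1:1 nearest-neighbor matching with a 0.2-standard-deviation caliper, can be sketched in Python on synthetic data (all variables and data below are hypothetical, not from the study):

```python
import numpy as np

def propensity_scores(X, y, lr=0.1, iters=2000):
    # Logistic regression fit by gradient ascent: estimates P(treated | X).
    Xb = np.hstack([np.ones((len(X), 1)), X])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)      # log-likelihood gradient step
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def nearest_neighbor_match(ps, treated, caliper=0.2):
    # Greedy 1:1 nearest-neighbor matching on the propensity score,
    # with a caliper expressed in standard deviations of the score.
    cal = caliper * ps.std()
    controls = {i for i in range(len(ps)) if not treated[i]}
    pairs = []
    for i in np.where(treated)[0]:
        if not controls:
            break
        j = min(controls, key=lambda k: abs(ps[k] - ps[i]))
        if abs(ps[j] - ps[i]) <= cal:          # discard pairs outside the caliper
            pairs.append((i, j))
            controls.remove(j)                  # each control used at most once
    return pairs
```

After matching, balance would be checked with standardized mean differences, mirroring the <0.10 target described above.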
Results Prematch Cohort Prematch comparison of the 2 groups showed that the TAVR group had greater age (76.33 vs 66.00 years; P < .001), STS-PROM scores (2.20 vs 1.40; P < .001), rates of diabetes (37.7% vs 28.4%; P = .001) and peripheral vascular disease (14.1% vs 3.6%; P < .001), and a higher proportion of tricuspid valve morphology (99.6% vs 92.4%; P < .001). Rates of aortic regurgitation (AR) were greater in the MIAVR group (53.2% vs 42.6%; P = .002) ( Table 1 ). The prematch comparison yielded several differences in postoperative outcomes. As expected, total hospital length of stay (LOS) was higher in the MIAVR group (5.91 vs 2.03 days; P < .001), as was the rate of converted operative approaches (1.5% vs 0.0%; P = .005). Mean left ventricular ejection fraction was higher in the MIAVR group (59.36 vs 57.54; P < .001), although aortic valve mean gradients were also increased in the MIAVR group (7.64 vs 5.92 mm Hg; P < .001). In the TAVR group, rates of PPM (8.7% vs 0.4%; P < .001), AR (6.9% vs 0.2%; P < .001), and PVL (5.9% vs 0.0%; P < .001) were all significantly higher than those seen in MIAVR ( Table 3 ). Propensity Score-matched Cohort In the propensity score-matched cohort, we observed a higher proportion of tricuspid valves (100% vs 92.2% tricuspid; P < .001) in the TAVR group and higher rates of preoperative AR in the MIAVR group (59.9% vs 45.1%; P < .001). Despite propensity score matching, mean STS-PROM scores remained slightly higher in the TAVR group (1.84 vs 1.69; P = .030), although still well below the low-risk threshold (STS-PROM score <4.0) ( Table 1 ). All other baseline patient characteristics were similar between matched groups. Postoperatively, patients in the MIAVR group experienced lower rates of PPM (0.4% vs 7.8%; P < .001), AR (0.3% vs 5.4%; P < .001), and PVL (0.0% vs 5.8%; P < .001) than the TAVR group ( Table 3 ).
Patients undergoing MIAVR did have longer hospital LOS (6.23 vs 2.07 days; P < .001), higher aortic valve mean gradients (7.29 vs 6.04 mm Hg; P = .004), and a greater mean left ventricular ejection fraction (59.87 vs 57.67; P = .003) ( Table 3 ). There was no significant difference in early mortality (<30 days) or stroke rates between the 2 groups ( Table 3 ). Discussion The results of this propensity score-matched analysis comparing MIAVR and TAVR in low-risk patients highlight important clinical differences between these 2 treatment strategies. MIAVR was associated with significantly lower rates of PPM, PVL, and AR compared with TAVR ( Table 3 ). These findings are consistent with previous studies comparing surgical and transcatheter approaches to AVR, irrespective of risk stratification and degree of surgical invasiveness. 4,5,7 The long-term significance of these findings is increasingly recognized, particularly in younger low-risk patients with greater life expectancy. PPM after TAVR has been associated with higher long-term mortality, increased risk of heart failure hospitalization, and reduced left ventricular function. 16-18 Similarly, PVL and AR—especially moderate or greater—have been shown to independently predict late mortality after TAVR. 5,19-21 Even mild PVL has been associated with worse long-term outcomes in low-risk patients at extended follow-up. 22,23 Although outcomes in this study were excellent overall, it is important to acknowledge that differences in conduction disturbances and valve seating may have long-term consequences. 24 Patients undergoing MIAVR also demonstrated higher postoperative left ventricular ejection fraction and mean aortic valve gradients compared with those undergoing TAVR ( Table 3 ). Although these differences reached statistical significance, both values remained within normal physiologic ranges, suggesting that the observed differences are unlikely to be clinically meaningful.
Instead, these findings reinforce the notion that both approaches provide excellent postprocedural hemodynamic performance, with effective restoration of left ventricular function and valve hemodynamics. As expected, MIAVR patients also experienced a significantly longer postoperative hospital stay compared with patients undergoing TAVR. The early postoperative period constitutes the highest-risk interval for SAVR, a phase during which TAVR has consistently demonstrated superior outcomes across all risk profiles. 4,5,7,17,25-28 In the current study, no differences were observed in early mortality or stroke rates ( Table 3 ), supporting the notion that MIAVR may deliver many of the early safety benefits attributed to TAVR while preserving the long-term durability profile of surgical bioprostheses. Initially introduced for patients with AS deemed inoperable, the indications for TAVR have expanded rapidly during the past 10 years, supported by several pivotal randomized trials. 4,5,16,17,27 The optimal treatment strategy for low-risk patients with AS remains an area of ongoing debate. In 2023, the highly anticipated 5-year results of the Placement of Aortic Transcatheter Valves 3 (PARTNER-3) trial demonstrated no significant differences between TAVR and SAVR in key outcomes, including a composite end point of death, stroke, or rehospitalization. 28 However, critics argue that these findings are premature, emphasizing that aortic valve disease management is lifelong and that concerns persist regarding higher rates of mild or greater paravalvular regurgitation in TAVR (20% vs 3%), which has been associated with increased long-term mortality rates. 7,23,29 Additionally, they argue that using rehospitalization in the composite end point may skew results in favor of TAVR, downplaying the fact that mortality numerically favored surgery (hazard ratio [HR], 1.23; 95% CI, 0.79 to 1.97).
29,30 The current guidelines reflect this uncertainty surrounding long-term outcomes in younger patients undergoing TAVR, recommending SAVR for those younger than age 65 years with severe symptomatic AS, provided they are not at prohibitive surgical risk. However, a recent study derived from the California statewide registry found that between 2013 and 2021, TAVR rates in patients aged 60 years and younger increased from 7.2% to 45.7%, and TAVR was associated with significantly worse 5-year survival compared with SAVR in a propensity score-matched analysis. 25 Similarly, a national Medicare analysis by Mehaffey and colleagues 26 found that among low-risk patients younger than age 75 years, TAVR was associated with significantly worse 3-year survival, stroke, and valve reintervention rates compared with SAVR, despite lower in-hospital morbidity. The deviation from consensus guidelines in real-world clinical practice has been largely fueled by the increasing emphasis on shared decision making as a quality metric, which has empowered patients to take a more active role in their treatment choices. Given the choice, many patients naturally gravitate toward less-invasive options to avoid the risks and recovery associated with heart surgery involving a median sternotomy. Although TAVR offers compelling short-term benefits—faster recovery, reduced perioperative morbidity, and earlier discharge—emerging long-term data suggest that it may not provide equivalent durability or optimal outcomes compared with SAVR, especially in younger patients. 25,26,31,32 Structural valve deterioration, PVL, PPM requirements, and the need for future valve interventions remain concerns because TAVR is increasingly used in younger, lower-risk individuals. 33 MIAVR via right thoracotomy presents a potential best-of-both-worlds approach, offering a less-invasive alternative to a full sternotomy while preserving the durability and outcomes of conventional surgical valve replacement.
A recent network meta-analysis comparing MIAVR, TAVR, and SAVR found that in pairwise comparison of high-quality randomized controlled trials and propensity score-matched data, TAVR showed superior mortality to conventional SAVR until 37.5 months, beyond which there was no significant difference, much like the results observed in high-profile randomized controlled trials. 7 MIAVR, on the other hand, demonstrated significantly lower mortality than both TAVR (HR, 0.70; 95% CI, 0.59-0.82) and conventional SAVR (HR, 0.69; 95% CI, 0.59-0.80). 14 Although these data are encouraging and support the potential for MIAVR to provide improved outcomes, it is important to note that the direct comparison of MIAVR and TAVR relied on propensity score-matched studies, highlighting the need for further validation. 14 Another recent meta-analysis of 12 observational studies directly comparing MIAVR and TAVR found that patients undergoing MIAVR had lower rates of both 30-day (relative risk, 0.63; 95% CI, 0.42-0.96; P = .03) and midterm mortality at 4 years of follow-up (HR, 0.76; 95% CI, 0.67-0.87; P < .001). 15 Notably, in both meta-analyses, MIAVR was broadly defined to include right thoracotomy, ministernotomy, and sutureless aortic valves. 14,15 However, prior literature suggests that a sternal-sparing, right thoracotomy approach may provide superior outcomes compared with other MIAVR approaches, potentially reducing operative time, hospital LOS, postoperative pain, and morbidity, while improving both early and long-term survival. 34,35 Therefore, it is possible that these meta-analyses underestimate the benefits of MIAVR performed through a right thoracotomy due to the inclusion of other approaches. An additional limitation of these meta-analyses is the lack of stratification by patient risk profile, because neither analysis performed a subgroup evaluation for low-risk patients and only 1 study included in the meta-analyses compared MIAVR and TAVR in low-risk patients.
Furthermore, in that single low-risk propensity score-matched study, the MIAVR cohort was defined as those receiving stentless valves and >50% of the procedures were performed via median sternotomy, making its inclusion in the MIAVR meta-analyses questionable. The lack of direct comparisons between MIAVR via right thoracotomy and TAVR in low-risk populations represents a significant knowledge gap in the management of severe AS. Addressing this gap is critical, as it involves the most controversial area of TAVR expansion and the patient group in which MIAVR has the greatest potential to demonstrate superior outcomes. Although our study demonstrates favorable outcomes for MIAVR via right thoracotomy, several limitations must be considered. A key limitation is that both surgeons in this study have more than 20 years of experience in minimally invasive cardiac surgery, including extensive expertise in MIAVR. Similarly, the expertise of the interventional cardiologists at our institution, who rank among the top in New York State in terms of TAVR volume and outcomes, represents another factor that may influence the study's findings. The added technical complexity and steep learning curve associated with minimally invasive cardiac surgery may limit the generalizability of this study to lower-volume centers or less-experienced surgeons. Studies have shown that operative times, complication rates, and overall outcomes improve significantly as surgeons progress along the minimally invasive cardiac surgery learning curve, with some estimates suggesting that 50 to 100 cases may be required to achieve proficiency. 36,37 Consequently, centers without established MIAVR expertise may experience longer crossclamp times, increased perioperative complications, and a higher conversion rate to full sternotomy, potentially diminishing the advantages observed in our study.
Similarly, the predominant use of BEVs in our TAVR cohort, reflecting institutional preference and operator experience, may limit the generalizability of our findings to centers with different device selection practices. Compared with surgical approaches, a key limitation of TAVR is the inability to perform aortic annular enlargement, a technique often necessary to accommodate appropriately sized prostheses in patients with small aortic annuli. This limitation has important implications for the lifetime management of AS, particularly in younger, low-risk patients who may require future valve interventions. Performing aortic annular enlargement at the time of SAVR has gained increasing attention in structural cardiac surgery because it can also facilitate the placement of a larger prosthesis in the event of a future valve-in-valve TAVR. 38 Traditionally performed via median sternotomy, aortic annular enlargement has more recently been adapted to minimally invasive techniques, including right thoracotomy approaches. 39 In our study, 4.4% of MIAVR patients underwent annular enlargement ( Table 2 ), demonstrating the feasibility of this technique in a minimally invasive setting. Although this proportion may appear modest, it is notable that most of these cases occurred in the later years of the study period, with more than 50% occurring during the past 2 years in response to the increasing national emphasis on annular enlargement. Annular enlargement at our institution was also historically performed through sternotomy, and such cases were thus excluded from this cohort. Similarly, the need for aortic annular enlargement adds to the overall surgical risk, and some patients undergoing MIAVR with aortic annular enlargement may have had STS-PROM scores exceeding 4.0%, rendering them ineligible for inclusion in this low-risk analysis. As with any observational study comparing treatment options, patient selection bias is an inherent limitation.
To mitigate this bias, we used propensity score matching to balance preoperative risk factors and baseline characteristics, ensuring that comparisons between MIAVR and TAVR patients were as equitable as possible. Propensity score matching is a widely accepted method for addressing confounding in nonrandomized studies, but it cannot fully eliminate unmeasured confounders such as frailty, anatomical suitability, or surgeon/institutional preferences, which may have influenced treatment selection. It is also important to acknowledge that randomized controlled trials comparing TAVR and SAVR are not immune to selection bias and often include highly selected patient populations. In the PARTNER-3 trial, for example, low-risk patients randomized to TAVR versus SAVR were carefully screened for favorable anatomy, often excluding those with complex root morphology, bicuspid aortic valves, or small annuli, which may not fully represent real-world practice. 40 This reinforces the need for broader, real-world studies to guide clinical decision making. Finally, our study, like most observational studies comparing surgical and transcatheter approaches, lacks adequate long-term follow-up, limiting our ability to definitively assess valve longevity, reintervention rates, and long-term mortality. Although surgical bioprostheses have historically demonstrated superior durability compared with early-generation TAVR valves, it remains unclear whether current-generation TAVR prostheses will match or exceed surgical durability in low-risk patients. Ultimately, the excellent outcomes observed in both groups underscore the strength of a well-integrated heart team approach, where collaborative expertise in patient selection, perioperative management, and procedural proficiency optimizes results and ensures the highest standard of care. Although MIAVR is the standard approach for low-risk isolated SAVR at our institution, it remains underutilized nationally.
This disparity highlights a critical opportunity for the cardiac surgical community to expand MIAVR adoption through enhanced training and increased exposure, bridging the gap between conventional SAVR and TAVR. Future prospective and randomized controlled trials are warranted to validate these findings and further delineate the optimal treatment strategy for low-risk patients. Conclusions To our knowledge, this study is the first propensity score-matched comparison of clinical outcomes in low-risk patients undergoing MIAVR versus TAVR. Both groups demonstrated excellent short-term outcomes; however, the lower rates of PPM, PVL, and AR observed in the MIAVR cohort may offer long-term advantages. Similarly, the comparable early mortality and stroke profiles suggest that MIAVR may confer the benefits of reduced invasiveness traditionally only seen in TAVR. Webcast You can watch a Webcast of this AATS meeting presentation by going to: https://www.aats.org/resources/post-cardiotomy-ecmo-support-i-9821 . Conflict of Interest Statement Dr Lutz is a consultant for Medtronic. All other authors reported no conflicts of interest. The Journal policy requires editors and reviewers to disclose conflicts of interest and to decline handling or reviewing manuscripts for which they may have a conflict of interest. The editors and reviewers of this article have no conflicts of interest. Supplementary Data Video 1 Minimally invasive aortic valve replacement (MIAVR) performed via a right anterior thoracotomy, as routinely performed at our institution. The video demonstrates standard steps, including thoracotomy exposure, aortic crossclamping, native valve excision, annular decalcification, and implantation of a bioprosthetic valve using interrupted sutures. This approach reflects the institutional technique used for MIAVR in low-risk patients included in the study cohort. Video available at: https://www.jtcvs.org/article/S2666-2736(25)00152-4/fulltext . | [
"ALURU",
"OSNABRUGGE",
"WYLERVONBALLMOOS",
"LEON",
"MACK",
"POPMA",
"MACK",
"FORREST",
"SA",
"PHAN",
"FURUTACHI",
"KRISHNA",
"HASSAN",
"FONG",
"AHMED",
"REARDON",
"MACK",
"KHAN",
"FAROUX",
"URENA",
"NAZIF",
"KODALI",
"ATHAPPAN",
"THYREGOD",
"MEHAFFEY",
"ALABBADI",
... |
f4f6ec4579ed48a2927111959078869d_Síndrome de Horner y plexopatía braquial en el contexto de una masa apical vascular diagnóstico dife_10.1016_j.aprim.2025.103302.xml | Horner syndrome and brachial plexopathy in the context of a vascular apical mass: differential diagnosis of Pancoast syndrome | [
"Manzur-Sandoval, Daniel"
] | null | Case presentation A 38-year-old male patient presented with a right cervical mass of 3 months' evolution, with supraclavicular extension, eyelid ptosis, pain, and progressive weakness of the right arm. Examination showed muscle strength of 2/5 in that limb. Radiography revealed an opacity in the right pulmonary apex. Computed tomography showed a homogeneous apical lesion with tracheal displacement and contrast enhancement, compatible with an aneurysm of the right subclavian artery with mural thrombus ( fig. 1 A-D). Although Pancoast syndrome is usually associated with lung tumors, vascular causes such as aneurysms should be considered in the differential diagnosis when similar neurological symptoms are present. Funding No funding. Ethical considerations Written informed consent for the publication of patient information and images was provided by the patient or a legally authorized representative. Author contributions DMS: analysis and drafting of the original manuscript, review and editing. Conflict of interest The author declares no conflict of interest. | [] |
69eb7213f5f1436b98e0a0f169be3f57_Disección tipo A intervenida tras implantación de stent en mesentérica superior_10.1016_j.circv.2013.08.003.xml | Type A dissection operated on after stent implantation in the superior mesenteric artery | [
"Palmer, Neiser",
"Sureda, Carlos",
"Rodriguez, Rafael"
] | null | A 63-year-old patient with a history of arterial hypertension and obesity was admitted for a Stanford type A aortic dissection involving the initial segments of the right brachiocephalic trunk, the carotid artery and the left subclavian artery, as well as the superior mesenteric artery and the left renal artery, both with intraluminal thrombus and without distal patency ( fig. 1 ). During admission he had 4 diarrheal stools per day and a mildly distended abdomen, although with preserved peristalsis and no signs of peritoneal irritation. Laboratory tests showed a creatinine of 1.8 mg/dl and lactate of 2.85 mmol/l. Given these findings, suggestive of intestinal ischemia, and clear imaging of superior mesenteric artery involvement ( fig. 2 ), initial placement of a stent in the superior mesenteric artery was decided. After its successful revascularization ( fig. 3 ), surgery was performed, consisting of aortic valve and ascending aorta replacement using a modified Bentall-de Bono technique (23 mm Carboseal). The patient evolved satisfactorily and was discharged without further incident. | [] |
2506b74589664081a1187d94514e7110_An Online safety monitoring system of hydropower station based on expert system_10.1016_j.egyr.2022.02.040.xml | An Online safety monitoring system of hydropower station based on expert system | [
"Han, Zhang",
"Li, Yanling",
"Zhao, Zepeng",
"Zhang, Bo"
] | Real-time safety monitoring of a hydropower station relates not only to safety but also to the generation benefit of the power station. However, because the amount of monitoring data is huge and dam structures are complex, real-time monitoring is not carried out at most dams. A high-performance safety monitoring system is therefore useful to assist dam safety management. This paper introduces an expert system for dam safety management that carries out the entire process of real-time dam safety monitoring, covering identification of abnormal imported data, abnormal data changes, monitoring work, and dam safety. The system offers abundant functions such as dam safety analysis, visual query and remote consultation. It has been successfully applied to more than ten dam projects in China. These applications show that reliable data evaluation can identify anomalies such as single-point jumps, multi-point outliers and step changes; trend tracking can find possible structural changes; and clustering-based comprehensive evaluation can give a reasonable assessment of the safety state of different parts of the dam. The complete online system provides a decision-support platform for dam safety monitoring and improves the efficiency of power station management. | 1 Introduction There are 98,002 dams in China. Some of them are considered giant dams, such as the 314 m high Shuangjiangkou dam, the 314 m high Jinping I dam and the 292 m high Lianghekou dam. These dams have brought great social and economic benefits to China. However, due to various reasons, more than one-third of these dams are classified as unsafe. In recent years, due to earthquakes and other causes, many dams have been damaged and some have even failed, as shown in Fig. 1 . As a consequence, dam safety is gaining more attention. However, there are many shortcomings in the current dam safety management model.
For large projects, the number of monitoring points can exceed ten thousand, which means the quantity of monitoring data may approach ten million readings every year. With such huge amounts of data, manual analysis is not practical, and assessing dam safety on a real-time basis is difficult [1,2] . To solve these problems, computerized dam safety monitoring systems have been developed [3–5] . Earlier systems only had basic functions, such as data management and reporting [6,7] . With advancements in computer technology and dam analysis theory, newer systems place more emphasis on data analysis and safety evaluation. In the late 1980s, a dam safety evaluation function was added to Italy's DAMSAFE [8] . Korea's KDSMS [9] is a dam safety management system that can manage several dams under emergency situations. INDACO [10] is a digital management system that has been used to manage more than ten dams in Italy. Supakchukulu et al. proposed an expert system that automatically assesses 35 failure modes across three main dam types and runs dam safety assessment 24 hours a day [11] . Since the 1980s, China has also been developing dam safety systems. Wang Peicheng et al. correlated a BIM model with monitoring data and developed a dam safety monitoring information management system [12] . Changjiang Xie proposed key techniques for dam safety monitoring based on LoRa to realize intelligent collection, analysis and early warning of multifactor dam safety data [13] . In recent years, as project owners in China pay more attention to dam safety, dam safety systems are evolving towards basin-wide systems [14,15] ; systems for the Yalong River basin and the Yellow River basin [16] have been developed. However, problems remain in these new monitoring systems during application.
As the structural, geological and hydrological conditions of dams are very complicated, most evaluation criteria are still empirical [17,18] . With increasing dam heights and growing complexity of dam foundation conditions, further study is needed to improve these criteria. The system should therefore be designed to be as intelligent as possible [19–21] , and should be able to accommodate growth in the relevant knowledge. However, most current systems adopt a streamline-based (procedural) development mode and are weak in knowledge learning and expansion flexibility [22,23] . This has resulted in discrepancies between safety evaluation conclusions and reality. This paper introduces our team's SCU-DSMES, whose development started in 2005. Since then, it has been used in more than ten hydropower projects, including Yele, Zipingpu, Guandi, and Shizhiping. SCU-DSMES was developed in an expert system mode that covers the entire process of real-time dam safety monitoring: identifying abnormal imported data, detecting abnormal data changes, evaluating dam safety, and supporting consultation and decision-making. Since it provides a complete support platform for dam safety monitoring, it greatly improves work efficiency and constitutes a new, effective dam safety management model. 2 SCU-DSMES plan The general structure of a dam safety monitoring and management system based on measured data is shown in Fig. 2 ; it usually includes a perception layer, data layer, analysis layer and feedback layer [24,25] . The perception layer uses sensors to monitor deformation, seepage, strain and equivalent quantities at key parts of the dam; the data layer is responsible for data collection, pretreatment and compilation; the analysis layer is responsible for evaluating the dam safety behavior; and the feedback layer returns the evaluation results and response measures to the user.
The analysis layer is the key and most difficult part of the system. It generally separates the structural safety problem into many indexes and then establishes an evaluation model for each index according to the failure mode: f_i(g_1, g_2, g_3, …, g_n) < F_i, where f_i is the criterion function of index i computed from the measured values g_i required by that index, and F_i is the evaluation standard (threshold) [26,27] . Two problems have arisen in past applications of such systems: (1) The software is mostly used for management during the dam operation period; large amounts of data from the construction and initial impoundment periods are transferred poorly or even lost, system information is incomplete, and only part of the management work uses the system, so the efficiency of data acquisition, feedback and related work is low. For this reason, many organizations have proposed establishing a dam safety management system covering the full life cycle. (2) The reliability of safety evaluation conclusions is not high: because the dam structure and foundation are complex and material characteristics are uncertain, the analysis theory is largely empirical, so setting the standards, especially determining F_i, is difficult, and analysis results often differ from reality. In view of the above problems, we systematically reviewed the monitoring and management work across the full life cycle of dam safety, took planning, construction, first impoundment and operation as the time periods, and drew a dam safety monitoring management map organized by the different participants, as shown in Fig. 3 . Based on this full-life-cycle dam safety management work, we designed the program frame diagram ( Fig. 4 ). The development management unit remains the core user, serving as the full-function user and maintainer of the system, while other participating units may be granted viewing and use rights for the corresponding modules.
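The index-based evaluation f_i(g_1, …, g_n) < F_i described above can be illustrated with a minimal sketch; the index names, criterion functions and thresholds below are invented for illustration and are not taken from SCU-DSMES:

```python
def evaluate_indexes(measurements, criteria):
    """Evaluate each safety index: apply its criterion function f_i to the
    measured values and compare the result with the standard F_i."""
    results = {}
    for name, (f, limit) in criteria.items():
        value = f(measurements)
        results[name] = {"value": value, "limit": limit, "safe": value < limit}
    return results

# Hypothetical indexes: crest displacement magnitude and seepage flow rate.
criteria = {
    "crest_displacement": (lambda m: abs(m["disp_mm"]), 30.0),  # F_i = 30 mm
    "seepage": (lambda m: m["seepage_l_s"], 5.0),               # F_i = 5 L/s
}
```

In the real system the thresholds F_i are the hard-to-determine evaluation standards discussed above, which is precisely why they are stored in an editable knowledge base rather than hard-coded.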
The traditional four processes are refined into eight system processes, which are organized into three sub-platforms: monitoring information management, safety monitoring, and results consultation. The refined modules make it easier to allocate user rights. Monitoring information management considers the full life cycle of instruments (additions, damage and degradation) and provides functions for new instruments, scrapping, suspension of testing, and instrument parameter calibration. Considering the uncertainty of the evaluation, the whole system is built as an expert system, which condenses the safety management analysis work into four expert reasoning problems (data evaluation, trend assessment, work monitoring and safety monitoring) and establishes a knowledge base, database, method base and result base for them. 3 Design of reasoning module based on expert system 3.1 Expert system An expert system is an artificial intelligence system whose core is the expression of the problem, which comprises the following three contents [28] : (1) Initial State: the current state and preconditions of the problem. (2) Problem Solving: the actions allowed by the problem. (3) Goal State: the state that will eventually be reached. Under this expression, solving the problem can be regarded as starting from the initial state and repeatedly choosing an available operation for state conversion until the goal state is reached. The most commonly used presentation model of an expert system consists of three parts: the global database, the generating rules and the control system, as shown in Fig. 5 . The Global Database (GDB) is the primary data structure used by the generating system and is where all kinds of data and relationships are stored. The Rule Base (RB) holds the generation rules that act on the GDB. The basic form of these rules can be written as: if {premise} then {action}. 
The “premise” determines whether a rule is available, while the action changes the content of the GDB when the rule matches it, resulting in a new GDB. The control system (Control) is responsible for coordinating control, completing rules and selecting the derivation path in the process of problem reasoning. The outstanding feature of this AI problem expression model is good modularity: there is no direct calling relationship between rules, and the only way rules communicate is by changing the content of the GDB, as shown in Fig. 6 (b). If rule R1 wants to invoke R2, it can make R2 available by updating the GDB. This is the main difference from the traditional procedural programming pattern, and it means that changing or deleting rules does not affect other rules and procedures. In our system the rule base is divided into the knowledge base and the method base: the knowledge base contains the evaluation knowledge, and the method base contains the analysis and modeling methods. The control system is divided into two modules, general control and the inference engine; general control is responsible for the overall advancement of the problem, and the inference engine is responsible for the reasoning of specific problems. On this basis, we designed the architecture of the four core problems of dam safety management. 3.2 Data stability identification The function of data stability identification is to ensure that the data imported into the database are correct, and to evaluate whether the monitoring instruments are working. Fig. 7 shows the inference structure. The inference engine identifies outliers by calling the database, knowledge base and method base, and the identification results are sent to the user through the interpretation system for auditing. When outliers are identified, measures such as re-sampling, deletion, or revision need to be taken. Instrument failures need to be identified and flagged in the system. 
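The GDB-mediated production-rule pattern of Section 3.1 can be sketched as a minimal forward-chaining loop. This is an illustrative sketch, not the SCU-DSMES implementation; the rules and facts below are hypothetical:

```python
# Minimal forward-chaining production system: rules communicate only
# through the global database (GDB), mirroring the pattern of Fig. 5/6.

def run(gdb, rules, max_steps=100):
    """Repeatedly fire the first applicable rule until none matches."""
    for _ in range(max_steps):
        for premise, action in rules:
            if premise(gdb):
                action(gdb)   # the action updates the GDB
                break
        else:
            break             # no rule premise matched: stop
    return gdb

# Hypothetical rules: R1 flags an out-of-range reading; R2 becomes
# applicable only after R1's change to the GDB, and marks the
# instrument invalid.
rules = [
    (lambda db: db["reading"] > db["range_max"] and "out_of_range" not in db,
     lambda db: db.update(out_of_range=True)),
    (lambda db: db.get("out_of_range") and db["status"] == "normal",
     lambda db: db.update(status="invalid")),
]

gdb = {"reading": 103.3, "range_max": 100.0, "status": "normal"}
run(gdb, rules)
print(gdb["status"])  # invalid
```

Because the rules interact only through the GDB, R2 is invoked purely as a side effect of R1's update, so rules can be added or removed without touching one another, which is the modularity property described above.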
Before the instrument is restored, the connected module does not function. The inference proceeds as follows: (1) Outlier identification. The stability of the data directly determines the accuracy of the dam safety evaluation, and outliers may pollute the entire dataset, leading to incorrect conclusions. At present, error recognition methods for repeatable test data, such as the 3σ criterion and the Grubbs criterion, are used to identify outliers. However, dam safety monitoring is dynamic, involves many environmental factors such as water level, temperature and rainfall, and is not repeatable. Hence, the aforementioned methods have shortcomings, and a more suitable method should be used. The proposed system uses an identification method of abnormal temporal comparison based on factor separation. Data are first checked using the 3σ criterion (Eq. (1) ). For data that do not pass, the system separates them according to the environmental factors, then uses wavelet analysis to calculate the Lipschitz index, and finally draws a conclusion according to whether the Lipschitz index is positive or negative. (1) |Z_j − Ẑ_j| / (3σ) < 1 where Z_j is the monitored data, Ẑ_j is the mean of the monitored data, and σ is the mean square error. The Lipschitz index is defined as follows: if f(t) ∈ L²(R) and there exist a constant K > 0 and an n-th degree polynomial P_n(h), with n = [a] (the largest integer smaller than a), such that for all |h| < h₀ (h₀ > 0): (2) |f(t₀ + h) − P_n(h)| ≤ K|h|^a then a is the Lipschitz index of the function f(t) at t₀. (2) Assessment of instrument failure. The monitored data are compared with the instrument's measurement range; if the monitored data fall outside the range, the instrument has failed. 3.3 Sequence change monitoring For each period, sequence change monitoring checks for any abnormal change in the monitored data, including abnormal deformation. 
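A minimal sketch of the 3σ screening in Eq. (1), with illustrative data; the factor-separation and wavelet steps of the full method are omitted:

```python
import statistics

def three_sigma_outliers(series, k=3.0):
    """Flag indices whose deviation from the mean exceeds k*sigma,
    i.e. |Z_j - mean| / (k * sigma) >= 1 (cf. Eq. (1))."""
    mean = statistics.fmean(series)
    sigma = statistics.pstdev(series)  # mean square error of the series
    if sigma == 0:
        return []
    return [i for i, z in enumerate(series) if abs(z - mean) / (k * sigma) >= 1]

# A spike of 25.0 in an otherwise stable series around 10:
data = [10.1, 10.0, 10.2, 9.9, 10.1, 25.0, 10.0, 10.1]
print(three_sigma_outliers(data, k=3.0))  # [] : the spike inflates sigma and masks itself
print(three_sigma_outliers(data, k=2.0))  # [5]
```

The example also illustrates why the system falls back to wavelet-based Lipschitz analysis: a large outlier inflates σ itself and can escape the 3σ band, exactly the masking behavior reported in Section 4.2.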
The existence of abnormal characteristic values is assessed from the perspective of spatial deformation. Fig. 8 shows the inference structure. The inference proceeds as follows: (1) Series convergence. By removing the monitored data that were affected by the environment, a new series is created which includes the impact of time. The convergence of the series is then assessed using the first-order derivative y′ and the second-order derivative y″ (Eq. (3) ): (3) y′ = ∂y/∂x; y″ = ∂²y/∂x² Fig. 9 shows the three typical trend lines. Line I represents the convergence curve; lines II and III are divergence curves. The criteria for judging the curve type are as follows: If y′ > 0 and y″ ≤ 0, or y′ < 0 and y″ ≥ 0, the curve belongs to type I and the series is convergent. If y′ > 0 and y″ > 0, or y′ < 0 and y″ < 0, the curve belongs to type II or III and the series is not convergent. (2) Eigenvalue. The extreme values and change rates of the statistical series are compared with the present and previous eigenvalues, and a conclusion is drawn accordingly. (3) Distribution characteristic. A distribution map of the key parts of the dam is drawn and the Lipschitz index is obtained. If the index is negative, there is a sudden change; the data are then separated according to the environmental factors and the outliers are identified. 3.4 Dam safety monitor The dam safety monitor is used to evaluate whether the entire dam and some of its key components are safe. Fig. 10 shows a schematic of the dam safety monitor. The inference proceeds as follows: (1) Key index evaluation. In single index evaluation, a key indicator is extracted according to the characteristics of the dam project, with reference to the design report or other similar engineering projects, and is combined with monitoring data and inspection data. 
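The convergence test of Eq. (3) can be approximated on a discrete time-effect series with finite differences. This is a sketch under the assumption of roughly monotone trends; the series values are hypothetical:

```python
def is_convergent(series):
    """Classify a trend series per Eq. (3): convergent (type I) when the
    first derivative keeps its sign while the curve flattens (the mean
    second derivative opposes or vanishes against the first); divergent
    (type II/III) when both derivatives reinforce each other."""
    d1 = [b - a for a, b in zip(series, series[1:])]   # discrete y'
    d2 = [b - a for a, b in zip(d1, d1[1:])]           # discrete y''
    mean_d1 = sum(d1) / len(d1)
    mean_d2 = sum(d2) / len(d2)
    if mean_d1 > 0:
        return mean_d2 <= 0    # rising but decelerating: type I
    return mean_d2 >= 0        # falling but decelerating: type I

settlement = [0.0, 4.0, 7.0, 9.0, 10.0, 10.5]   # decelerating: converges
print(is_convergent(settlement))                 # True
runaway = [0.0, 1.0, 3.0, 6.0, 10.0, 15.0]       # accelerating: diverges
print(is_convergent(runaway))                    # False
```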
When the indicator does not meet the requirement, it triggers a safety warning, such as instability of an abutment of an arch dam, or instability of a slope of an earth-and-rock fill dam. (2) Comprehensive evaluation. The comprehensive evaluation is a safety evaluation of the entire dam. It treats the dam as an uncertain mechanical model and uses fuzzy theory to evaluate its safety. The commonly used method is the comprehensive evaluation method, but it is too subjective and its conclusions give weak guidance. Hence, information entropy and a clustering method are introduced into the comprehensive evaluation. The steps are [29] : According to the engineering characteristics of the dam, decompose the index C into multiple project layers c_i, with c = {c_1, c_2, …, c_n}. According to the possible failure modes and the monitoring projects, divide each project layer c_i into several indexes c_ij, with c_i = (c_i1, c_i2, …, c_im). Establish the degree of membership function f_ij according to the mechanical mechanism and failure modes of dam operation; the membership function should use the industry standard or similar engineering experience as the control reference. Degree of membership: (4) f_ij = 1 if K ≥ K_A; (K − K_C)/(K_A − K_C) if K_C ≤ K < K_A; 0 if K < K_C In Eq. (4) , K is the computed value, K_A is the upper bound of the standard, and K_C is the lower bound of the standard. By combining the expert evaluation method and information entropy theory, the weight of each index can be determined using the monitoring data and the expert assessment results, as follows: (5) ω = αω₁ + (1 − α)ω₂ In Eq. (5) , ω₁ is the expert scoring weight, ω₂ is the information entropy weight, and α (0 < α < 1) is the balance factor, which balances the subjective and objective weights in the evaluation. 
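Eqs. (4) and (5) translate directly into code. This is a sketch; the bounds and weights below are illustrative, not values from the paper:

```python
def membership(K, K_A, K_C):
    """Degree of membership f_ij per Eq. (4): 1 at or above the upper
    bound K_A, 0 below the lower bound K_C, linear in between."""
    if K >= K_A:
        return 1.0
    if K >= K_C:
        return (K - K_C) / (K_A - K_C)
    return 0.0

def combined_weight(w_expert, w_entropy, alpha=0.7):
    """Eq. (5): omega = alpha*omega1 + (1 - alpha)*omega2, 0 < alpha < 1."""
    return [alpha * w1 + (1 - alpha) * w2
            for w1, w2 in zip(w_expert, w_entropy)]

print(membership(2.0, K_A=3.0, K_C=1.0))        # 0.5
print(combined_weight([0.6, 0.4], [0.2, 0.8]))  # ~[0.48, 0.52]
```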
According to information entropy theory, the information entropy of index c_ij can be calculated as follows: (6) E_ij = −(1/ln m) Σ_{k=1}^{m} c̃_ijk ln(c̃_ijk) where c̃_ijk is the degree of membership of c_ij in year k. The entropy weight of c_ij, determined from the information entropy, is: (7) ω₂(c_ij) = (1 − E_ij) / Σ_{k=1}^{n} (1 − E_ik) Then establish the characteristic evaluation matrix of the various stages and targets, and draw a final safety evaluation conclusion based on the clustering evaluation of the data at various times. 4 System construction 4.1 Project introduction The system has been successfully applied to more than ten dam projects in China. The following case study of the Shiziping Hydropower Station in Li County, Sichuan Province, illustrates the concrete implementation and analysis of the software. The Shiziping Hydropower Station is the leading reservoir of the cascade development of the Zagunao River basin. The dam is a core-wall rockfill dam with a crest elevation of 2544.0 m, a crest width of 12.0 m, a maximum dam height of 136.0 m, and a total crest length of 310.1 m. The flood discharge structures are arranged on the left bank, and the general flood discharge arrangement adopts three tunnels: a diversion tunnel, a spillway tunnel and an emptying tunnel. Construction of the Shiziping Hydropower Station officially started in May 2004, and the first unit was connected to the grid at the end of March 2010. The dam experienced the “5.12” Wenchuan earthquake in 2008 and the Lushan earthquake on April 20, 2013. The safety monitoring project consists of three main parts: the headwork monitoring system, the flood discharge structure monitoring system, and the diversion power generation monitoring system, with a total of 416 monitoring instruments installed. 
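The entropy-weight calculation of Eqs. (6)–(7) can be sketched as follows. The membership degrees are hypothetical, and each row is normalized to proportions before taking the entropy, which is a common variant of the method:

```python
import math

def entropy_weights(memberships):
    """Entropy weights per Eqs. (6)-(7). `memberships` holds one row per
    index c_ij, each row listing its membership degrees over m years."""
    m = len(memberships[0])
    def entropy(row):
        s = sum(row)
        # normalized entropy E_ij in [0, 1]; terms with p = 0 contribute 0
        return -sum((p / s) * math.log(p / s) for p in row if p > 0) / math.log(m)
    E = [entropy(row) for row in memberships]
    total = sum(1 - e for e in E)
    return [(1 - e) / total for e in E]

# Hypothetical membership degrees of two indexes over four years:
rows = [[0.9, 0.9, 0.9, 0.9],   # stable index: little information, small weight
        [0.2, 0.5, 0.8, 1.0]]   # changing index: more information, larger weight
w = entropy_weights(rows)
print(w[1] > w[0])  # True
```

An index whose membership barely changes over the years carries little information, so Eq. (7) assigns it a small objective weight; the expert weight of Eq. (5) then rebalances this subjectively.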
The headwork monitoring projects include: horizontal and vertical displacement of the dam surface, settlement and horizontal displacement inside the dam, foundation settlement of the dam site, internal stress, stress and deformation of the concrete cutoff wall, seepage of the dam body and dam foundation, slope stability on both banks and bypass seepage of the dam, water level, water temperature, air temperature, earthquakes, etc. Based on the system architecture proposed in this paper, the Intelligent Decision Support System for safety monitoring of the Shiziping rockfill dam (SCU-DSMES-SZP) was developed. The system comprises three platforms, namely the monitoring management information system, safety monitoring, and consultation, together with the document, method and knowledge bases, and includes monitoring collection, monitoring reorganization, performance evaluation, monitoring work, and hundreds of other submodules, as shown in Fig. 11 . 4.2 Management of monitoring data Data management is an important task of the system; it provides a reliable data service for the subsequent safety analysis. Data management covers data acquisition and systematic data evaluation before entry into the database. Fig. 12 shows how the monitored data are input into the system. There are four ways to input the data: (1) manual input, (2) Excel input, (3) automated data input, and (4) PDA input. The interface for data input is shown in Fig. 13 . When the data enter storage, data stability identification starts. When the monitored values or effect quantities exceed the instrument range, they are considered invalid and the instrument is marked as “invalid” in the properties of the monitoring instrument; the point no longer participates in data integration and analysis until an engineer's inspection and maintenance confirm that it is normal and the instrument properties are reset to normal, after which the measuring point resumes participating in data integration and analysis. 
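The invalid-marking rule described above can be sketched as a simple range check. The instrument ranges and the reading for "P1" are illustrative; the J7 and ST4-5 readings follow the July 15, 2014 monitoring values cited below:

```python
def update_validity(instruments, readings):
    """Mark an instrument 'invalid' when its reading leaves its range;
    an invalid point stays out of data integration until an engineer
    resets its status after inspection. Returns the active points."""
    for name, value in readings.items():
        lo, hi = instruments[name]["range"]
        if not lo <= value <= hi:
            instruments[name]["status"] = "invalid"
    return [n for n, inst in instruments.items() if inst["status"] == "normal"]

instruments = {
    "J7":    {"range": (0.0, 100.0), "status": "normal"},  # dislocation meter
    "ST4-5": {"range": (0.0, 100.0), "status": "normal"},  # soil displacement meter
    "P1":    {"range": (0.0, 50.0),  "status": "normal"},  # hypothetical piezometer
}
active = update_validity(instruments, {"J7": 103.3, "ST4-5": 101.0, "P1": 12.4})
print(active)  # ['P1']
```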
Abnormal data recognition is applied to manually input and imported data. Abnormal data are displayed in the abnormal monitoring information interface for the input and audit personnel to process; two processing measures are provided, deletion and retention. Clicking the Delete option removes the data from the original database; clicking the Save option passes the data into the integration and technical database. Monitoring on July 15, 2014 showed 35 abnormal values and two failed instruments: dislocation meter J7 (vertical) in the high-plasticity clay region and soil displacement meter ST4-5. The maximum measurement range of J7 is 100 mm and its reading had reached 103.3 mm; the range of ST4-5 is also 100 mm and its reading had reached 101.0 mm. All 35 abnormal values are likely to contain errors. Fig. 13 shows a typical measuring point for anomaly recognition; the red line is the monitored data at an internal part of a high dam. Instrument instability caused some data inside the dotted box to become outliers. Using the 3σ method to identify the outliers, σ = 2.17 and therefore 3σ = 6.53, and the 3σ method was unable to identify the outliers. When the standard was reduced to 2σ, the outliers were identified. However, if the outlier data were reduced by 0.5 mm, as shown by the green line in Fig. 13 , then when instrument instability caused some data to become outliers, neither the 3σ nor the 2σ method could identify them [30] . On the other hand, using the separation method based on a 4-level Daubechies decomposition, the wavelet coefficients for the four outliers were determined, and from these the Lipschitz indexes of the four outliers were obtained. They are −0.275, −0.375, −0.421 and −0.15; since they are all negative, this indicates that the data are outliers. 
After the data reliability evaluation is completed, the intelligent trend assessment is carried out. The monitoring showed that 36 points did not converge, 16 points reached hyper-historical maxima for the year, 4 parts settled unevenly by too much, and two seepage measuring points were anomalous. The non-converging points include internal and external deformation and other measuring points. Compared with the previous results, the grouting gallery measurements are no longer in the early-warning range and have tended to converge. This location has been a focus of monitoring: after the “5.12” Wenchuan earthquake, a large crack appeared in the dam foundation grouting gallery, and crack monitoring instruments were added for this reason. The current monitoring results show that during the first-stage impoundment, with the reinforced concrete pouring of the gallery bottom plate, the joint meter results indicated that the original cracks neither continued to open obviously nor showed dislocation, and the trend of the process line is stable. The seepage anomaly is mainly that the water levels monitored by the seepage gauges on the downstream side of the dam foundation cutoff wall and of the anti-seepage curtains in the mountains on both banks, the pressure-hole water levels and the bypass-seepage-hole water levels are strongly correlated with the upstream water level; see Fig. 14 . 4.3 Dam safety analysis According to the safety monitoring tasks of a high rockfill dam, the method library mainly includes the mathematical model library, the safety evaluation database, the measured-value sequence base and the finite element calculation base, which provide the necessary computational tools for dam safety analysis. The main content of the dam structure safety assessment based on monitoring data is shown in Fig. 15 . The dam slope stability analysis includes stability analysis of typical sections in the plane and three-dimensional dam slope stability analysis [31,32] . 
The plane stability analysis provides two methods, the Swedish Arc Method and the Bishop Slice Method, while the three-dimensional dam slope stability analysis includes the three-dimensional Hovland method, the three-dimensional simplified Bishop method and the three-dimensional Spencer method. The seepage analysis of the dam body can calculate the steady seepage field in the dam body and dam foundation under various water level conditions, and the unsteady seepage field in the dam body under changing water levels (such as water level drawdown), and then obtain the seepage field characteristics such as head, pore water pressure, seepage velocity and hydraulic gradient, as well as the position of the saturation line in the dam [33] . At the same time, it can provide the basis and initial data for the calculation of the stress and strain of the dam body and the stability analysis of the dam slope. The stress and strain analysis of the dam body can calculate the stress and displacement fields of the whole dam and foundation according to the measured or calculated saturation line, and compare them with the monitoring data, including individual indicator monitoring and comprehensive state monitoring of key parts. According to the safety calculation results, the safety reasoning system is started, the safety of the dam is evaluated, and early warning is carried out. The early warning is divided into three levels, with a green light, a yellow light and a red light representing safety, warning and danger, respectively. The interface of dam safety monitoring is shown in Fig. 16 . The system sends danger warning messages directly, in the form of interface prompts and short messages, to the designated staff; the staff audit the conclusions, and only after the audit passes is a further warning notice issued. 
According to the actual operation of Shiziping and similar projects, four objectives are determined according to the parts and monitoring items: dam stability, dam and foundation deformation, dam and foundation stress, and dam and foundation seepage, recorded as C1–C4. Under the target layer, 19 indicators are established and corresponding membership functions are set according to the monitoring arrangement and manual patrols. According to the membership functions, the degrees of membership from 2010 to 2013 are determined year by year, as shown in Table 1 . The subjective weights were obtained through expert evaluation. The expert group comprises 16 experts, including personnel of the dam design institute, monitoring and operation management personnel, later monitoring and analysis personnel, and corresponding experts from colleges and universities. All 16 expert evaluations passed the consistency test. Because the expert group has a reasonable hierarchical structure and is familiar with the project, the balance factor α can be taken as a relatively large value and is set to 0.7. The results of the weight calculation are shown in Table 2 . Two indexes are set for dam and foundation stress; the clustering results are shown in Fig. 17 . After the construction peak in 2009, the value of the C31 gravel-soil core wall stress index basically tended to be stable, and its current change rate is only 0.001 MPa/d. After the “5.12” Wenchuan M8 earthquake in 2008, cracks appeared at the C32 observation gallery and its stress index exceeded the standard; the change rate before 2011 was large, at 0.02 MPa/d, and after 2011 it decreased to 0.01 MPa/d. Up to now it is basically stable. Therefore, the period before 2011 can be regarded as the stress adjustment stage and the period after 2011 as the stress stability stage, which is consistent with the cluster analysis results. 
The similarity between 2010 and 2011 is 0.96, the similarity between 2012 and 2013 is 0.94, and the overall similarity is 0.87. The cluster analysis results for dam and foundation seepage are shown in Fig. 18 . At present, the seepage condition at Shiziping is good, and there was no abnormal change after the two earthquakes. The similarity of the cluster analysis over the four years is 0.91, which is basically similar, so it can be considered that the seepage treatment measures are effective and there is no seepage abnormality. The cluster analysis results for dam and foundation deformation are shown in Fig. 19 . The external deformation of the dam crept after the end of the construction period and stabilized after 2011. The cracking and stress were also in a development period before 2011 and entered a stable period after 2011. The changes before 2011 were also inconsistent: for example, the displacement rate of the external subsidence was 0.63 mm/d in 2010 and 0.27 mm/d in 2011. From the perspective of cluster analysis, only 2012 is consistent with 2013. The comprehensive evaluation results show that the comprehensive scores of the four years are 0.80, 0.79, 0.90 and 0.93, indicating that the deformation of the Shiziping dam tends to stabilize with time. 5 Conclusion Dam safety affects not only the dam's own safety and effectiveness, but also the safety of the river basin. However, traditional management modes require enormous manpower to handle the massive monitoring data, which is both time- and labor-consuming; these modes cannot carry out real-time evaluation, thereby missing the best time to solve problems. In view of this, it is necessary to use computing techniques that can provide an effective solution. Through a systematic analysis of the dam safety monitoring tasks, the system is fully integrated with the actual monitoring systems. 
This combined system can carry out the entire process of real-time dam safety monitoring, covering the screening of abnormal imported data, the detection of abnormal data changes, dam safety evaluation, and decision-making consultation. It has created a new dam safety management mode that uses an intelligent system as its decision-making support platform. Traditional systems mostly use streamline-based programming, while the relevant dam structure theories and simulation calculation techniques remain immature; hence, as knowledge increases, it is necessary to improve both the inference techniques and the evaluation knowledge. Therefore, the system employs an expert system development technique based on the study of the safety monitoring inference mode of a dam and the programming of the relevant method library and knowledge base. In addition, the new system conducted an in-depth study on hot-spot techniques such as the identification of abnormal monitoring data and comprehensive evaluation. As a consequence, both the inference and the decision-making procedures fit the reality of the dam better, and the system is equipped with knowledge expansion and learning capacities. The system has been applied to more than ten power stations in China and has greatly reduced the staff's workload in data collection, compilation and analysis. Having achieved real-time safety monitoring, in addition to saving cost and improving engineering efficiency, it also provides a decision-making support platform for dam safety management. However, as the structural, geological and hydrological conditions of dams are complicated, the establishment of indexes and the determination of standards are also complicated, so the system continues to collect and study field data in order to improve the inference process and the evaluation knowledge. 
Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | [
"ZHANG",
"FAN",
"WILDE",
"AKPINAR",
"BANERJEE",
"LIU",
"BRUNNER",
"BOCCHIOLA",
"JEON",
"GORINI",
"SUPAKCHUKUL",
"CHENG",
"XIE",
"LI",
"ZHANG",
"ZHONG",
"ROHANINEJAD",
"CHELIDZE",
"WANG",
"SHEIKHNEJAD",
"HL",
"AKPINAR",
"SHIFENG",
"CURT",
"DONG",
"FAN",
"ABDULAMIT"... |
8f77444107264218af72e63ba11e151f_Stand structure tree species diversity and leaf morphological plasticity in Xylocarpus mekongensis P_10.1016_j.japb.2022.02.004.xml | Stand structure, tree species diversity, and leaf morphological plasticity in Xylocarpus mekongensis Pierre among salinity zones in the Sundarbans, Bangladesh | [
"Azad, Salim",
"Mollick, Abdus Subhan",
"Setu, Firuz Anika",
"Islam Khan, Nabiul",
"Kamruzzaman, "
] | Stand structure, diversity, and plasticity of leaf morphology of Xylocarpus mekongensis were investigated among saline zones in the Bangladesh Sundarbans. Stand structure and species diversity were assessed in 27 sample plots (20 m × 20 m) covering 1.08 ha of forest. A total of 11 species were recorded; of these, 7, 6, and 9 species were found in the fresh water, moderate, and strong saline zones, respectively. Species dominance in terms of importance value showed Heritiera fomes > Excoecaria agallocha > Bruguiera sexangula > Xylocarpus mekongensis in the fresh water zone; H. fomes > E. agallocha > Avicennia officinalis > Ceriops decandra in the moderate saline zone; and E. agallocha > Sonneratia apetala > H. fomes > C. decandra in the strong saline zone. Species diversity decreased from the fresh water to the strong saline zone. The findings showed a wide range of leaf morphological plasticity for all parameters within and among saline zones. Leaf index, plasticity index, and specific leaf area were higher in the fresh water zone than in the others. The findings of PCA revealed clear evidence of plastic behavior of the leaf morphological parameters. Thus, the plasticity of leaf morphology might serve as an influential biological sign of forthcoming climate change events. | Introduction The Sundarbans is the biggest single-tract mangrove forest in the world, harboring a wide range of species ( Azad and Matin 2012 ; Azad et al. 2019 , 2020a , 2020b ; Rahman et al. 2015 ; Sarker et al. 2016 , 2019 ). This ecosystem maintains a distinct and diverse flora and fauna. Diverse tree species and stand structural attributes may influence ecosystem productivity in the Sundarbans mangrove forest ( Azad et al. 2020b ). This mangrove forest is distributed differentially across the salinity gradient in a stripy zonation pattern ( Chaffey et al. 1985 ; Wahid et al. 2007 ; Rahman et al. 2015 ). 
Mangrove forests are among the most dynamic, productive, and efficient wetland ecosystems in the world ( Lugo and Snedaker 1975 ; Nagarajan et al. 2008 ). Mangrove vegetation is unique through its contribution to stand structure, diversity, carbon storage, nutrient cycling, and ecosystem functioning along coastlines throughout the tropics and subtropics ( Duke et al. 1981 , 1998 ; Duke 2001 ; Polidoro et al. 2010 ; Giri et al. 2011 ; Chanda et al. 2016 ; Mukhopadhyay et al. 2018 ). The vertical and horizontal distribution of a stand's components is referred to as stand structure. It reflects the overall appearance of a stand, including diameter, height, stem density, crown layers, and so on ( Helms 1998 ). Several environmental variables, such as salinity, rainfall, temperature, sedimentation, and tidal inundation, may influence stand density and the distribution of tree species and individuals among salinity zones ( Azman et al. 2021 ). Species diversity and stand structure are very important for carbon storage in mangrove ecosystems ( Rani et al. 2016 ; Azad et al. 2020b ). Leaf production, decomposition, and nutrient cycling in the mangrove ecosystem ( Clarke 1994 ; Coupland et al. 2005 ; Azad et al. 2021 ) play significant roles in primary production ( Lee 1990 ; Sherman et al. 2003 ; Clough 2013 ; Hoque et al. 2015 ; Azad et al. 2020c ; Azad et al. 2020d ), which helps mangrove communities maintain ecological sustainability. Leaf morphology of mangrove species is a significant and dependable biological sign of worldwide climate change ( Khan et al. 2020 ; Mollick et al. 2021 ). Several plants adapt their life cycles and styles to climate change and salinity gradients and exhibit plasticity of traits ( Feller et al. 2010b ; Naidoo 2016 ). The leaf is a major plant organ for capturing light energy in the process of photosynthesis ( Knight and Ackerly 2003 ; Suwa 2011 ; Klančnik and Gaberščik 2015 ). 
Plants perceive various environmental signals ( Tsukaya 2013 ) with the help of leaf architecture, which contributes to plant fitness in coping with environmental conditions ( Tsukaya 2002 ; Xu et al. 2009 ). The leaf is generally exposed to aerial conditions, shows greater sensitivity to environmental variation than other parts of the plant, and can demonstrate morphological plasticity influenced by soil salinity and other environmental parameters (nutrient or water availability) ( Agastian et al. 2000 ; Krauss and Allen 2003 ; Hoppe-Speer et al. 2011 ; Vovides et al. 2014 ; Nguyen et al. 2015 ; Alam et al. 2018 ; Khan et al. 2020 ; Mollick et al. 2021 ). Leaf morphological plasticity allows plants to modify their growth and development under diverse stresses; in particular, mangrove vegetation frequently modifies leaf shape, size, and arrangement to balance the leaf energy budget. Generally, thicker and smaller leaves grow in vertical orientations to avoid direct sunlight and reduce the transpiration rate ( Krauss and Allen 2003 ; Sardans et al. 2006 ; Bastias et al. 2018 ). Thus, most mangrove species have evolved exceptional salt adaptation mechanisms ( Tomlinson 1986 ; Feller et al. 2010a ). Various research studies suggest that the leaf morphological plasticity of Heritiera fomes ( Khan et al. 2020 ; Mollick et al. 2021 ), Excoecaria agallocha L. ( Mollick et al. 2021 ), and Ceriops decandra (Griffith) Ding Hou ( Mollick et al. 2021 ) exhibits a wide range of variation among saline zones in the Bangladesh Sundarbans. Khan et al. (2020) and Mollick et al. (2021) also developed a leaf plasticity index (PI) of mangrove species for different parameters in different saline zones. Xylocarpus mekongensis Pierre is a semi-deciduous mangrove species found in Australia, Bangladesh, Myanmar, Malaysia, Kenya, Thailand, the Philippines, India, and Singapore. X. mekongensis grows well in the low saline (oligohaline) and moderately saline (mesohaline) zones of the Bangladesh Sundarbans ( Rahman et al. 2015 ; Azad et al. 
2019 ). X. mekongensis also tolerates strongly saline (polyhaline) conditions ( Hossain 2015 ) and is a late-successional specialist ( Sarker et al. 2019 ) in the mangrove ecosystem. This species is associated with H. fomes , A. officinalis , or B. sexangula ( Iftekhar and Saenger 2008 ; Rahman et al. 2015 ), and this association occupies 3.09% of the total vegetation area ( Iftekhar and Saenger 2008 ). The species provides habitat for wildlife and birds, and is ecologically important for protection against soil erosion along canal and river banks ( Siddiqi 2001 ). It is the most valuable timber species in the Sundarbans ( Hossain 2015 ). The leaves of this species are compound and alternate; leaflets are 5–12 cm long. The abundance of X. mekongensis declines with decreasing rainfall or increasing salinity or temperature ( Mukhopadhyay et al. 2018 ). Several researchers have observed variations in leaf morphological traits in different habitats of non-mangrove species ( Xu et al. 2009 ; Mollick et al. 2011 ; Geekiyanage et al. 2018 ; Guimarães et al. 2018 ), but there is a dearth of information regarding leaf morphological plasticity in the mangrove ecosystem. Arrivabene et al. (2014) noticed leaf morphological and anatomical plasticity of Avicennia schaueriana , Laguncularia racemosa , and Rhizophora mangle in different habitats in Brazil. Vovides et al. (2014) reported large morphological plasticity in Avicennia germinans that might be linked to environmental circumstances in the Gulf of Mexico coastal plain. Very recently, Mollick et al. (2021) ( C. decandra , E. agallocha and H. fomes ) and Khan et al. (2020) ( H. fomes ) documented the plasticity of leaf morphology among saline zones in the Bangladesh Sundarbans. More studies on mangrove adaptation to climatic factors have also been suggested ( Dasgupta et al. 2017 ). Molecular techniques make it possible to examine the plastic behavior of different traits under climate change. 
However, morphological characterization should be done prior to molecular studies as a foundation (Smith 1989; Giraldo et al. 2005; Podgornik et al. 2010). The plasticity of leaf morphology of a few species has been documented in mangrove ecosystems (Arrivabene et al. 2014; Vovides et al. 2014; Khan et al. 2020; Mollick et al. 2021). However, the plasticity of leaf morphology of several mangrove species remains incompletely understood. At the same time, mangrove species could be suitable predictors for assessing the impact of upcoming climate change events. The objectives of this study were (a) to assess stand structural variation and tree species diversity in different salinity zones, and (b) to quantify leaf morphological plasticity of X. mekongensis in different salinity zones of the Bangladesh Sundarbans. Material and methods Study site The study was conducted in the Bangladesh Sundarbans, with study sites located in the Khulna, Satkhira, and Bagerhat districts of Bangladesh, between 21°31′ and 22°30′N latitude and 89°00′ and 89°55′E longitude ( Figure 1 ). The Sundarbans (located in Bangladesh and India) maintains a wide range of flora and fauna (Azad and Matin 2012; Rahman et al. 2015; Sarker et al. 2016, 2019; Azad et al. 2019, 2020a, 2020b). The Bangladesh Sundarbans occupies about sixty percent of the total area (ca. 10,000 km²), and the rest lies in neighboring India; this mangrove forest covers 6017 km² in Bangladesh. Of the total area of the Bangladesh Sundarbans, land occupies 69%, and the remaining parts are covered by rivers, channels, canals, creeks, and small streams. The Bangladesh Sundarbans comprises three distinct saline zones, viz. fresh water or low saline (oligohaline; <5 ppt), moderately saline (mesohaline; 5-18 ppt), and strongly saline (polyhaline; >18 ppt) (Wahid et al. 2007). The study sites are occupied by the dominant (H. fomes) and co-dominant (X. mekongensis, Bruguiera sexangula, C.
decandra and E. agallocha) species of the ecosystem. The study sites are inundated twice a day by sea water. A massive amount of freshwater is discharged annually during the monsoon by the Baleshwar, Sibsha, and Passur rivers in the Sundarbans mangrove forest, but freshwater flow decreases sharply during the dry season owing to reduced inflow from the Ganges. The study area has four principal seasons, viz. (i) summer: March–May; (ii) the rainy season: June–September; (iii) autumn: October–November; and (iv) winter: December–February. The climate is characterized by a long wet period (monsoon) and a very short dry period (winter). During winter, temperature fluctuates between 12 and 25°C, rising to between 26 and 34°C during May and June. Mean annual precipitation ranges from 1800 mm to 2790 mm, about 74% of which falls during the monsoon. Mean annual relative humidity varies from 80 to 85% (Chowdhury et al. 2016). Experimental design, plant materials, and data acquisition For the assessment of stand structure, a total of 27 sample plots (20 m × 20 m) were established purposively. The plots covered 1.08 ha of forest, were located 50–150 m from the river, and were distributed across the different salinity zones ( Figure 1 ). Height and Dbh of all trees (Dbh ≥ 2.0 cm at 1.3 m) were measured and recorded. Tree height (H) was measured with a Criterion RD 1000, while Dbh was measured with a diameter tape (Azad et al. 2019). Species importance value IV (%) was calculated according to the formula recommended by Curtis and McIntosh (1951), and woody species diversity (Shannon’s index) was determined using the formula proposed by MacArthur and MacArthur (1961). For leaf morphological plasticity, leaf samples were collected purposively within one station per saline zone. Twenty-five healthy trees were selected in each station, and 4–5 leaves were collected from different canopy heights on each tree.
Anderson (1986) described that mangroves show differences in leaf shape along the vertical leaf profile; leaves were therefore collected systematically 0.5 and 1.0 m below the top height of each tree. Height and diameter of sample trees and positions of leaves did not differ among the saline zones. Fresh leaves were wrapped in polybags immediately and carried carefully to the laboratory of Khulna University, Bangladesh. A total of 300 leaves were selected from the collected leaves of the three saline zones (oligohaline, mesohaline, and polyhaline) for digital scanning (100 leaves from each zone) ( Figure 1 ). A digital flatbed scanner was used to scan the leaves, which were saved in JPEG format. NIH ImageJ software was used to measure the quantitative leaf parameters. The parameters leaf length (LL, cm), leaf middle width (LMW, cm), leaf width (LW, cm), leaf down-quarter width (LDQW, cm), leaf upper-quarter width (LUQW, cm), petiole length (PL, cm), leaf tip angle (LTA, °), and leaf base angle (LBA, °) are defined in Figure 2 . Leaf perimeter (LP, cm) and leaf area (LA, cm²) were also calculated. Leaf index (LI) was determined using the formula LI = LL/LW, where LL is leaf length and LW is leaf width. The ratios of LL to PL, LA to LP, LUQW to LDQW, and LBA to LTA were also calculated. Specific leaf area (SLA) was estimated from leaf oven-dry weight, and leaf thickness (LT) was also measured. Each leaf sample was oven dried to estimate SLA (cm² g⁻¹) (Petruzzellis et al. 2019). A digital caliper (Mitutoyo 0-150 mm, accuracy 0.01 mm) was used to measure LT (Naidoo 2010). Data analysis For the assessment of stand structural variables, descriptive statistics (mean and standard deviation) were calculated. ANOVA was used to examine significant differences in stand structural variables among the three saline zones.
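The leaf index and the ratio variables defined above can be sketched in a few lines of Python. This is a minimal illustration, not code from the study; the function name and the example values (which echo fresh-water-zone magnitudes from Table 3) are illustrative only.

```python
# Minimal sketch (not the study's code) of the derived leaf variables:
# LI = LL/LW plus the four ratio variables named in the text.

def leaf_indices(LL, LW, PL, LA, LP, LUQW, LDQW, LBA, LTA):
    """Return the derived leaf-shape variables described in the text."""
    return {
        "LI": LL / LW,             # leaf index = leaf length / leaf width
        "LL/PL": LL / PL,          # leaf length to petiole length
        "LA/LP": LA / LP,          # leaf area to leaf perimeter
        "LUQW/LDQW": LUQW / LDQW,  # upper- to down-quarter width
        "LBA/LTA": LBA / LTA,      # base angle to tip angle
    }

# Illustrative values echoing the fresh water zone means in Table 3
idx = leaf_indices(LL=16.13, LW=5.65, PL=3.4, LA=63.65, LP=49.72,
                   LUQW=5.09, LDQW=4.59, LBA=76.44, LTA=66.31)
print(round(idx["LI"], 2))  # 2.85, close to the reported 2.88 ± 0.37
```

The per-leaf values returned by such a function are what the zone-wise means, ANOVA, and Tukey comparisons are computed from.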
Tukey’s post hoc test was conducted when significant differences were found for structural variables (density, basal area, height, Dbh, and woody species diversity) among salinity zones. Descriptive statistics for the leaf morphological variables of X. mekongensis were also computed. LL, LMW, LW, PL, LDQW, LUQW, LTA, LBA, LP, LA, LI, LT, and SLA were compared among saline zones by ANOVA followed by Tukey’s post hoc test. A leaf morphological plasticity index (PI) was used to investigate the responses of all leaf morphological variables of X. mekongensis among the saline zones (Valladares et al. 2006). PI was determined using the equation PI = (1 − q/Q), where Q is the largest and q the smallest value of a given parameter; PI ranges from 0 to 1. PCA ordination of the leaf morphological parameters of X. mekongensis was conducted to understand their plastic behavior across salinity zones and to identify the key variables correlated with PC1 and PC2. To normalize the data, raw data were log-transformed prior to PCA (Hammer et al. 2001). R software (version 4.0.5) (R Core Team 2019) was used for all statistical and graphical analyses. Results Forest structure and diversity The study recorded a total of 11 species belonging to seven families across the three saline zones. Species richness was highest in the polyhaline (strongly saline) zone and lowest in the mesohaline (moderately saline) zone, with 7, 6, and 9 species recorded in the oligohaline (fresh water), mesohaline, and polyhaline zones, respectively ( Table 1 ). Ranking species dominance by importance value showed that Heritiera fomes Buch.-Ham (IV: 49.8) was the dominant species, and Excoecaria agallocha L (IV: 15.3), Bruguiera sexangula (Lour.) Poir (IV: 12.1), and Xylocarpus mekongensis J.
Koenig (IV: 10.1) were co-dominant species in the oligohaline zone. H. fomes (IV: 34.5) and E. agallocha (IV: 31.3) were almost equally dominant in the mesohaline zone, where Avicennia officinalis L (IV: 11.2), Ceriops decandra (Griff.) Ding Hou (IV: 10.2), and X. mekongensis (IV: 10.1) were co-dominant. E. agallocha (IV: 47.3), Sonneratia apetala Buch.-Ham (IV: 22.3), and H. fomes (IV: 12.2) were the top three species in the polyhaline zone ( Table 1 ). Species-specific density and basal area are also given in Table 1 . Table 2 shows stand structure and tree species diversity among the three saline zones. Stand density (n ha⁻¹) was highest in the oligohaline zone (2757 ± 245) and lowest in the polyhaline zone (1548 ± 98), and differed significantly (p < 0.05) among saline zones. In contrast, basal area (m² ha⁻¹) was highest in the polyhaline zone (82.27 ± 9.4), followed by the mesohaline (68.54 ± 8.6) and oligohaline zones (58.62 ± 7.1), again with significant differences (p < 0.05) among zones. Tree height (m) and Dbh (cm) were also highest in the polyhaline zone, followed by the mesohaline and oligohaline zones, and both differed significantly (p < 0.05) among zones. Tree species diversity (Shannon’s diversity index) was highest in the oligohaline zone (1.9 ± 0.3) and lowest in the polyhaline zone (1.2 ± 0.1), and also differed significantly (p < 0.05) among zones ( Table 2 ). Leaf size variation among saline zones Leaf size parameters of X. mekongensis showed a wide range of variation among the saline zones ( Table 3 ). LL, LMW, LW, LDQW, LUQW, and PL decreased from the fresh water zone to the strongly saline zone. Maximum LL was found in the fresh water zone (16.13 ± 0.65 cm) and minimum in the strongly saline zone (11.42 ± 1.13 cm).
LW was greatest in the fresh water zone (5.65 ± 0.63 cm), followed by the moderately saline zone (5.47 ± 0.37 cm) and the strongly saline zone (4.46 ± 0.42 cm). Similarly, LMW was greatest in the fresh water zone (5.54 ± 0.63 cm), followed by the moderately saline zone (5.45 ± 0.38 cm) and the strongly saline zone (4.47 ± 0.42 cm). LUQW was higher than LDQW in all three saline zones. LUQW and LDQW were also greatest in the fresh water zone (5.09 ± 0.61 and 4.59 ± 0.55 cm, respectively), followed by the moderately saline zone (4.65 ± 0.37 and 4.45 ± 0.38 cm, respectively) and the strongly saline zone (3.71 ± 0.38 and 2.6 ± 0.41 cm, respectively). Maximum PL was found in the fresh water zone (3.4 ± 0.56 cm) and minimum in the strongly saline zone (2.6 ± 0.41 cm). ANOVA showed significant differences (p < 0.05) in all leaf size parameters (LL, LW, LMW, LUQW, LDQW, and PL) among saline zones ( Table 3 ). LL, LW, and LUQW were significantly higher (p < 0.05) in the fresh water zone than in the moderately or strongly saline zones. LMW, LDQW, and PL were significantly higher (p < 0.05) in both the fresh water and moderately saline zones than in the strongly saline zone ( Table 3 ). Leaf shape variation among saline zones Leaf shape parameters of X. mekongensis also showed a wide range of variation among the saline zones ( Table 3 ). LBA and LTA were greatest in the moderately saline zone (87.74 ± 7.64° and 71.7 ± 5.71°, respectively), followed by the strongly saline (76.93 ± 6.08° and 68.45 ± 4.07°, respectively) and fresh water zones (76.44 ± 8.25° and 66.31 ± 7.01°, respectively). LA and LP decreased from the fresh water zone to the strongly saline zone: maximum LA and LP were found in the fresh water zone (63.65 ± 9.18 cm² and 49.72 ± 5.84 cm, respectively) and minimum values in the strongly saline zone (41.43 ± 7.37 cm² and 39.59 ± 5.99 cm, respectively). LI also decreased from the fresh water zone to the strongly saline zone, being greatest in the fresh water zone (2.88 ± 0.37), followed by the moderately (2.62 ± 0.16) and strongly saline zones (2.4 ± 0.19).
The ratio of LA to LP (LA/LP) was greatest in the fresh water zone (1.28 ± 0.17) and smallest in the strongly saline zone (1.04 ± 0.08). The ratios of LL to PL (LL/PL) and of LUQW to LDQW were greatest in the fresh water zone and smallest in the moderately saline zone. The ratio of LBA to LTA was greatest in the moderately saline zone and smallest in the strongly saline zone ( Table 3 ). ANOVA showed significant differences (p < 0.05) in all leaf shape parameters (LBA, LTA, LA, LP, LI, LA/LP, LL/PL, LBA/LTA, and LUQW/LDQW) among saline zones ( Table 3 ). LTA and LBA were significantly higher (p < 0.05) in the moderately saline zone than in the fresh water or strongly saline zones, with no significant difference (p > 0.05) between the fresh water and strongly saline zones. LA and LI were significantly higher (p < 0.05) in the fresh water zone than in the moderately or strongly saline zones. LP was significantly higher (p < 0.05) in both the fresh water and moderately saline zones than in the strongly saline zone, with no significant difference (p > 0.05) between the fresh water and moderately saline zones ( Table 3 ). LI The LI of X. mekongensis is shown in Figure 3 . The scatter plot of LI showed a clear zonation tendency among salinity zones, and leaves collected from the different study areas provided a clear picture: LI values from the fresh water zone appeared at the top of the scatter plot, and LI showed a downward tendency from the fresh water to the strongly saline zone. Leaf PI The PI of the leaf morphological variables of X. mekongensis showed large variation within and among saline zones ( Figure 4 ). For most variables, PI was highest in the fresh water zone rather than in the moderately or strongly saline zones. In the fresh water zone, PL, LA, LA/LP, and LL/PL showed the maximum PI values and LUQW/LDQW the minimum; in the moderately saline zone, LP showed the maximum and LUQW/LDQW the minimum; and in the strongly saline zone, PL showed the maximum and LUQW/LDQW the minimum.
Across all saline zones, LP showed the maximum and LUQW/LDQW the minimum PI values ( Figure 4 ). SLA and LT The SLA of X. mekongensis was greatest in the fresh water zone, followed by the moderately and strongly saline zones, and differed significantly (p < 0.05) among salinity zones. The results also revealed large variation in LT among saline zones: LT was greatest in the strongly saline zone and smallest in the fresh water zone, and also differed significantly (p < 0.05) among salinity zones ( Figure 5 ). Ordination of saline zones for various morphological parameters A PCA ordination plot was constructed from the morphological variables measured on the leaves collected in the different saline zones. The first two components of the PCA explained most of the total variance (60.97%), with PC1 explaining 43.19% and PC2 17.78% ( Figure 6 ). The eigenvalues showed a large difference between PC1 (6.66) and PC2 (2.79) ( Table 4 ). The ordination plot illustrated leaf morphological plasticity among the different saline zones in the Sundarbans: the leaf morphological parameters of X. mekongensis clearly varied among saline zones, as indicated by three distinct clusters of leaf samples from the study areas ( Figure 6 ). Leaf morphological parameters such as LL, LW, LDQW, LUQW, PL, LP, LA, and the ratio of LA to LP were significantly and positively (p < 0.01) correlated with PC1, whereas leaf base and tip angles were significantly and negatively (p < 0.01) correlated with PC2 ( Table 4 ). Discussion Stand structural variation across salinity zones The stand structural assessment showed significant variation among salinity zones. Stand density and Shannon’s diversity index decreased with increasing salinity, whereas the other structural parameters (basal area, height, and Dbh) increased with increasing salinity. This may be due to the huge number of seedlings that colonize the oligohaline (fresh water) zone.
Thus, density increased while the other parameters decreased. Generally, mangrove vegetation is denser in low-salinity zones (Kathiresan and Bingham 2001). Mangrove seedlings require low salinity for better growth and colonization (Smith 1993), but salt tolerance increases with age (Kathiresan and Bingham 2001). Because stand structural parameters were measured on all individuals with Dbh > 2.0 cm, a large number of saplings were included in the present assessment. Fromard et al. (1998) documented a similar observation in a mature mangrove stand in Guiana, noting higher density in the lower saline zone. Mean height and mean Dbh increased with increasing salinity. This may be because a large number of S. apetala, the pioneer species in the Sundarbans, were recorded in the polyhaline (strongly saline) zone, which may also account for the differences; in general, S. apetala shows higher growth than H. fomes and E. agallocha. The difference is also due to the large number of saplings counted in the oligohaline (fresh water) zone. Our observations also revealed that the larger trees in the moderately and strongly saline zones occurred at lower density, in contrast to the dense community of small trees in the fresh water zone. These results highlight resource competition within the stands (salinity zones), which created variation in tree size, species diversity, and self-thinning or mortality (Kamara et al. 2014). Kamara et al. (2014) documented that self-thinning, or density-dependent mortality, is a usual occurrence in overcrowded mangrove stands on Okinawa Island, Japan, and our observations agree with their findings. In our study, the number of species (species richness) was higher in the strongly saline zone than in the fresh water or moderately saline zones. This may not reflect the general pattern of the whole Sundarbans. In our study sites, we found that the abundance of some species was very poor in the strongly saline zone. Average abundance (stems ha⁻¹) of five species, namely R. mucronata (11.4 ± 3.8), R.
apiculata (14.6 ± 5.4), X. mekongensis (17.2 ± 3.7), A. officinalis (18.3 ± 1.6), and A. cucullata (19.9 ± 3.1), was less than 20 per ha, and only three species (C. decandra, E. agallocha, and S. apetala) were evenly distributed. Thus, Shannon’s diversity index decreased in the strongly saline zone. Because we measured all individuals with Dbh > 2.0 cm for the stand structural parameters, a large number of saplings were included in the assessment; consequently, the contribution of those five species in terms of basal area per ha was also very poor, and some of them may disappear from the strongly saline zone in the near future. Ahmed et al. (2021) also noticed some mangroves with very poor abundance in the strongly saline zone of the Sundarbans. Kathiresan and Bingham (2001) discussed that Rhizophora seedlings may establish in the strongly saline zone following a long rainy season, but chronic high salinity hampers the growth of seedlings and saplings, which may disappear after a few years. Generally, only mangroves with high salt tolerance can survive in the strongly saline zone. In our study, H. fomes dominated the fresh water (importance value: 49.8%) and moderately saline zones (importance value: 34.5%), whereas E. agallocha dominated the strongly saline zone (importance value: 47.3%). Conversely, E. agallocha, B. sexangula, and X. mekongensis showed lower importance values than H. fomes in the fresh water zone. Similarly, B. sexangula and X. mekongensis also showed lower importance values in the moderately and strongly saline zones, while the importance value of E. agallocha was comparatively high in the moderately saline zone and that of S. apetala in the strongly saline zone. These results highlight the impact of salinity on species diversity and dominance. The importance values of H. fomes and E. agallocha in the Sundarbans are higher than those in the Andaman Islands, India (Padalia et al. 2004).
The importance values of the dominant species in the Sundarbans are also higher than those of the dominant species of the Sri Lankan coast (Perera et al. 2013). Duke et al. (1984) explained that the growth, diversity, and abundance of mangrove species are consequences of low salinity. Low salinity reduces stress and thereby increases growth in mangrove species (Leach and Burgin 1985). Mangrove species store salt throughout their lifetime for growth and survival, and their leaves become thicker because of salt accumulation, which also influences stand structure and diversity across salinity regimes (Tomlinson 1986; Zheng et al. 1999). Our investigation revealed that the mangrove communities of the saline zones differed in species composition and abundance, as reflected by the diversity index. Leaf morphological plasticity The study provided strong evidence of leaf morphological plasticity of X. mekongensis among saline zones in the Bangladesh Sundarbans. The plastic behavior of mangroves in response to salinity is well documented for H. fomes (Khan et al. 2020), Avicennia officinalis (Alam et al. 2018), and Sonneratia apetala (Nasrin et al. 2019) as well. All the leaf morphological parameters, especially leaf size, provided evidence that X. mekongensis has developed plastic strategies across saline zones in the Sundarbans. Leaf morphological plasticity in response to salinity has been documented in various mangrove forests where mangroves cope with environmental heterogeneity (Sultan 2000; Khan et al. 2020). Leaf size parameters are among the most important morphological parameters showing plasticity to environmental change (Tsukaya 2002; Carins Murphy et al. 2012) and are regarded as excellent biomarkers of salt-stress adaptation (Abbruzzese et al. 2009). X. mekongensis showed higher leaf morphological plasticity in the fresh water zone than in the moderately or strongly saline zones of the Sundarbans.
A similar result was found by Khan et al. (2020) for H. fomes in the Sundarbans, which showed higher leaf morphological plasticity in the fresh water zone than elsewhere. In contrast, Mollick et al. (2021) documented that C. decandra showed high morphological plasticity in the strongly saline zone. Such differences in leaf morphological plasticity among mangrove species reflect the salinity-stress tolerance of the respective species (Kathiresan et al. 2010). H. fomes grows well and abundantly in the low-saline zone of the Bangladesh Sundarbans but is sparsely distributed in the moderately and strongly saline zones (Hossain 2015; Rahman et al. 2015; Sarker et al. 2019; Azad et al. 2019; Azad et al. 2020d). C. decandra tolerates moderately to strongly saline conditions and grows well in raised areas with insufficient inundation (Hossain 2015), whereas X. mekongensis grows well in the fresh water zone and can tolerate strongly saline conditions (Rahman et al. 2015), although its abundance declines at high salinity (Mukhopadhyay et al. 2018). Salinity stress may influence species colonization across zones in mangrove forests (Kathiresan and Bingham 2001; Rahman et al. 2015), and mangrove species accordingly show leaf morphological plasticity across saline zones (Khan et al. 2020). Thus, X. mekongensis showed the highest leaf morphological plasticity in the fresh water zone. Generally, mangroves are exposed to salt stress, which may influence plastic behavior even within a small area (Schmitz et al. 2007; Feller et al. 2010a; Arrivabene et al. 2014; Tomlinson 2016). Salinity varies within each saline zone due to seasonal variation (Wahid et al. 2007): during the rainy season it decreases sharply, and it increases again in winter. Salinity is a limiting factor for mangrove habitats ranging from the fresh water zone to strongly saline conditions (Ball 2002; Gopal and Chauhan 2006; Krauss et al.
2008; Alam et al. 2018; Nasrin et al. 2019; Khan et al. 2020). Salinity may also vary from year to year within a zone. Wahid et al. (2007) described that during 1999-2000 salinity decreased significantly in the fresh water zone owing to a significant increase in fresh water flow, but during 2000-2003 it increased gradually owing to reduced fresh water flow. Such salinity fluctuations within the fresh water zone may influence the leaf morphological plasticity of the studied species there. Nguyen et al. (2015) mentioned that mangroves are distributed in bands along salinity regimes and show leaf morphological plasticity among saline zones (Khan et al. 2020) in comparison with terrestrial species (Feller et al. 2010a). Our study provides clear evidence of leaf morphological plasticity in all measured parameters of X. mekongensis among salinity zones. LI reflects the plasticity of leaf morphology under a given salinity regime (Tsukaya 2002) and is considered an evolutionary novelty (Smith 2006). The LI of X. mekongensis showed a higher level of plasticity in the fresh water zone than in the moderately or strongly saline zones because salt stress is a limiting factor for this species and the fresh water zone is favorable for its growth and development (Kathiresan and Bingham 2001; Hossain 2015). In addition, the fluctuating salinity between seasons, especially between winter and the rainy season, within a saline zone leads X. mekongensis to exhibit maximum leaf morphological plasticity in the low-saline zone. Similarly, Khan et al. (2020) documented that the LI of H. fomes exhibited very high plasticity of leaf morphology in the low-saline zone; they noticed the opposite in the strongly saline zone because salt stress makes the moderately and strongly saline zones unfavorable for H. fomes (Kathiresan and Bingham 2001; Kathiresan et al. 2010; Hossain 2015).
Therefore, LI may be applicable for measuring leaf morphological plasticity with respect to environmental change (Mollick et al. 2011). A high level of SLA plasticity reflects increasing productivity (photosynthesis) and a broader ecological niche for a particular species (Ishii et al. 2018). X. mekongensis exhibited very high SLA plasticity in the low-saline zone owing to its ability to grow in full sunlight (Hossain 2015). These results are supported by the concept of mangrove trait plasticity (Pigliucci and Kolodynska 2002; Abbruzzese et al. 2009; Feller et al. 2010a; Naidoo 2016; Alam et al. 2018; Nasrin et al. 2019; Khan et al. 2020). Usually, leaf morphological plasticity increases the habitat-exploitation ability of mangrove species and thus their survival (Simpson et al. 2013). The PCA ordination plot clearly separated the salinity zones from which the sample leaves were collected. The results of our study provide clear evidence that the leaf morphological parameters efficiently and effectively develop a plastic strategy across salinity zones in the Sundarbans. Leaf trait plasticity often develops in response to environmental heterogeneity (Givnish 1987; Tsukaya 2002; Carins Murphy et al. 2012; Khan et al. 2020), but genetic factors (Mollick et al. 2011; Tsukaya 2013) may also be responsible. Conclusion The study confirmed differences in species composition and richness among saline zones, and that species density and diversity decrease with increasing salinity. The LI, PI, and SLA of X. mekongensis showed plastic behavior within and among saline zones in the Sundarbans, with the LI, PI, and SLA of most parameters being higher in the fresh water zone. The PCA ordination plot showed three clear clusters of leaf samples from the three saline zones, reflecting the shift of leaf morphological plasticity among saline zones in response to salt stress in the species’ natural habitat.
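As a rough illustration, the plasticity index PI = (1 − q/Q) of Valladares et al. (2006) used throughout this study can be sketched as follows. This is a hypothetical sketch, not the study's code; the mesohaline leaf-length value (14.0) is an assumed placeholder, since only the oligohaline and polyhaline means are quoted in the text.

```python
# Sketch of the plasticity index PI = 1 - q/Q (Valladares et al. 2006),
# where Q and q are the largest and smallest values of one parameter
# across the saline zones. PI ranges from 0 (no plasticity) to 1.

def plasticity_index(values):
    """PI for one leaf parameter, given its per-zone values."""
    return 1.0 - min(values) / max(values)

# Leaf length means: oligohaline and polyhaline values from Table 3;
# the mesohaline value (14.0) is an assumed placeholder.
pi_LL = plasticity_index([16.13, 14.0, 11.42])
print(round(pi_LL, 3))  # 0.292
```

Because PI depends only on the extreme values, a parameter that is identical in all zones scores 0, and one that collapses to near zero in some zone approaches 1.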
This study is expected to help in understanding the adaptive responses of mangrove species to salinity stress under future climate change, and it provides a baseline for the dynamics of mangroves among salinity zones in the Bangladesh Sundarbans. This investigation should also encourage further study for the management and conservation of mangrove species and the development of ecological strategies. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments The authors thank UGC Bangladesh for partial funding of this study (UGC/SciTech/Agri (Crop-47)-26/2017/4915). The authors are also thankful to JSPS for funding to Md. Salim Azad (R11810). The authors thank the field staff for their help throughout the fieldwork, the Bangladesh Forest Department for their valuable support and logistics during fieldwork, and FWT Discipline, Khulna University, Bangladesh, for laboratory support. The work was supported by the JSPS RONPAKU Program FY2018, Japan (R11810) and the Bangladesh University Grants Commission (UGC/SciTech/Agri (Crop-47)-26/2017/4915). | [
"ABBRUZZESE",
"AGASTIAN",
"AHMED",
"ALAM",
"ANDERSON",
"ARRIVABENE",
"AZAD",
"AZAD",
"AZAD",
"AZAD",
"AZAD",
"AZAD",
"AZAD",
"AZMAN",
"BALL",
"BASTIAS",
"CARINSMURPHY",
"CHAFFEY",
"CHANDA",
"CHOWDHURY",
"CLARKE",
"CLOUGH",
"COUPLAND",
"CURTIS",
"DASGUPTA",
"DUKE",
... |
dcfa085d7ba747ed8eae6cddf5ea571a_Inside Back Cover_10.1016_S2772-6568(25)00037-5.xml | Inside Back Cover | [] | null | null | [] |
a26a1a1c37404596a7934e218a20367e_Treatment of Aggressive Antineutrophil Cytoplasmic AntibodyAssociated Vasculitis With Eculizumab_10.1016_j.ekir.2019.11.021.xml | Treatment of Aggressive Antineutrophil Cytoplasmic Antibody–Associated Vasculitis With Eculizumab | [
"Huizenga, Noah",
"Zonozi, Reza",
"Rosenthal, Jillian",
"Laliberte, Karen",
"Niles, John L.",
"Cortazar, Frank B."
] | null | Introduction Antineutrophil cytoplasmic antibody (ANCA)–associated vasculitis (AAV) is a small vessel necrotizing vasculitis with a predilection for the respiratory tract and kidneys. 1 Prompt initiation of remission induction immunosuppression is paramount to prevent irreversible organ damage. 2 Standard initial therapy for severe AAV consists of cyclophosphamide or rituximab in combination with high-dose glucocorticoids. 3 , 4 In cases of severe pulmonary hemorrhage or rapidly progressive glomerulonephritis, plasma exchange is often added to facilitate rapid removal of ANCA. 5 , 6 Improved understanding of disease pathogenesis has provided the rationale for targeting the complement pathway in AAV. In particular, the anaphylatoxin complement component 5a (C5a) has been identified as a key pathogenic mediator of AAV because of its ability to prime and recruit neutrophils. 7 Inhibitors of C5a and the C5a receptor are being evaluated in randomized trials, but are not currently available for clinical use. 8 Here, we report the use of eculizumab, a monoclonal antibody against C5, in 2 cases of aggressive AAV with the intention of rapidly inducing remission by inhibiting C5a generation. In both patients, religious beliefs prohibiting the receipt of blood products precluded the use of plasma exchange and cyclophosphamide. 9 Case Presentation Case 1 A 61-year-old woman with a history of hypothyroidism presented to the hospital for evaluation of 3 weeks of progressive dyspnea. On presentation, she was tachypneic and had an oxygen saturation of 85% while breathing ambient air. Her hemoglobin concentration, which was previously normal, had fallen to 6.7 g/dl. There was no history of bleeding, and stool guaiac test results were negative. The serum creatinine (SCr) level was 1.1 mg/dl (unknown baseline), and urinalysis was significant for blood (2+) and protein (2+).
Examination of the urine sediment revealed the presence of dysmorphic red blood cells and red blood cell casts. Chest computed tomography demonstrated diffuse ground-glass and consolidative opacities in a distribution consistent with pulmonary hemorrhage. The patient’s hypoxemia rapidly worsened, requiring high-flow nasal cannula with a fraction of inspired oxygen of 70%. Pulse i.v. methylprednisolone was initiated for a suspected pulmonary-renal syndrome, and the patient was admitted to the intensive care unit. On the second hospital day, testing for myeloperoxidase ANCA returned positive at a titer of 1024 U (negative, <2.8 U) and the hemoglobin concentration fell to 5.7 g/dl. Testing for anti–glomerular basement membrane antibodies was negative. Levels of C3 and haptoglobin were normal. The lactate dehydrogenase level was mildly elevated at 246 U/l (normal range, 110–210 U/l). No schistocytes were observed on the peripheral blood smear. The patient was a practicing Jehovah’s Witness and declined all blood products including fresh frozen plasma. Severe anemia with the inability to transfuse red blood cells and ongoing pulmonary hemorrhage with the inability to administer fresh frozen plasma precluded the use of cyclophosphamide and plasma exchange, respectively. Pulse methylprednisolone was continued, and rituximab 1000 mg i.v. was administered ( Figure 1 a). However, the patient’s respiratory status remained tenuous, and invasive mechanical ventilation was considered. Eculizumab 900 mg i.v. was administered on days 3, 10, and 17 ( Figure 1 a). After the second dose, the respiratory status rapidly improved, allowing weaning of supplemental oxygen to 4 l nasal cannula and tapering of glucocorticoids. However, ∼2.5 weeks after the final eculizumab dose, the patient’s renal function started to decline and the SCr level peaked at 3.3 mg/dl ( Figure 1 a). Given improvement in the patient’s anemia with high-dose epoetin alfa, oral cyclophosphamide was initiated. 
The patient’s SCr level ultimately improved to a new baseline of 1.6 mg/dl. Case 2 An 83-year-old woman with hypothyroidism and coronary artery disease was transferred to our hospital for evaluation of fatigue, weight loss, small-volume hemoptysis, and acute kidney injury. The SCr level on presentation was 2.5 mg/dl, increased from a baseline of 0.7 mg/dl two months prior. Review of the urine sediment revealed abundant dysmorphic red blood cells, and a spot urine protein-to-creatinine ratio was elevated at 1.8 g/g. Urinalysis was significant for blood (3+) and protein (2+). Computed tomography of the chest revealed bilateral ground-glass opacities, but oxygen saturation remained normal while breathing ambient air. The patient was severely anemic on presentation (hemoglobin concentration, 6.1 g/dl), which was attributed to a combination of pulmonary hemorrhage, inflammation, and renal disease. The patient was a practicing Jehovah’s Witness and declined blood transfusion. Workup was significant for myeloperoxidase ANCA at a titer of 1515 U (normal <2.8 U). Additional laboratory tests including anti–glomerular basement membrane antibodies, C3, lactate dehydrogenase, haptoglobin, and peripheral blood smear were unremarkable. The patient was initiated with rituximab and pulse i.v. methylprednisolone followed by high-dose oral glucocorticoids for the treatment of AAV ( Figure 1 b). Despite the continuation of high-dose glucocorticoids, renal function continued to deteriorate over the ensuing weeks, reaching a peak SCr level of 4.9 mg/dl. The patient’s religious beliefs coupled with severe anemia prevented the use of plasma exchange or cyclophosphamide. Eculizumab 900 mg i.v. was administered on days 24 and 31. The patient’s renal function rapidly improved, and the SCr level reached a nadir of 1.8 mg/dl on the last check ( Figure 1 b). 
Discussion AAV typically presents with normal levels of serum complement, and minimal complement deposition is found on renal biopsy in cases of pauci-immune necrotizing glomerulonephritis. Historically, these observations have led to the belief that complement was not a significant mediator of the inflammatory cascade in AAV. However, recent investigation has identified activation of the alternative complement pathway as necessary for disease activity 1 ( Table 1 ). Here, we provide 2 cases of aggressive AAV in which blockade of C5 cleavage yielded demonstrable clinical benefit. Xiao et al. initially demonstrated that mice deficient in complement factor B, but not those deficient in C4, were protected from the induction of necrotizing and crescentic glomerulonephritis in a murine anti-myeloperoxidase IgG transfer model. This suggests that the alternative pathway, rather than the classical or lectin pathways (both of which require C4), is needed for disease activity. Additional experiments revealed that mice deficient in C6 were not protected from disease, indicating that the anaphylatoxin C5a, and not the membrane attack complex, is the key pathogenic mediator. 7 S1 Indeed, treatment with a C5a receptor antagonist significantly abrogated disease activity in a mouse model. Given its role in the terminal inflammatory cascade in AAV, C5a blockade is an attractive therapeutic option to rapidly control disease activity S1 ( Figure 2 ). Currently, eculizumab and ravulizumab are the only available drugs in clinical use to target the complement cascade. Antagonism of C5 precludes the generation of C5a, and treatment with an anti-C5 monoclonal antibody has been shown to attenuate AAV disease activity in murine models. This was the rationale for the administration of eculizumab in the 2 patients reported herein, both of whom had an aggressive and progressive disease with limited alternative therapeutic options due to religious considerations.
S2 In patient 1, pulmonary hemorrhage rapidly improved after eculizumab. Moreover, renal function rapidly deteriorated after eculizumab was discontinued. This suggests the possibility that blockade of C5 was preventing the development of necrotizing and crescentic glomerulonephritis in this patient, akin to what has been observed in animal models. Likewise, patient 2 had rapid improvement in renal function after the administration of eculizumab. It should be noted that in both cases, renal biopsy was not performed because of the presence of anemia and the inability to transfuse should a bleeding complication occur. However, a markedly positive ANCA level in the setting of pulmonary hemorrhage and a clinical syndrome of rapidly progressive glomerulonephritis has a positive predictive value that approaches 100%. Our report adds to 2 prior patients with AAV described in the literature who appear to have responded to eculizumab. S3 S4 , S5 Despite its potential efficacy in treating AAV, blockade of C5 provides more immunosuppression than is required to control disease activity. Inactivation of C5 prevents formation of the membrane attack complex, which does not appear to play a significant role in AAV ( Figure 2 ). Therefore, C5 blockade unnecessarily exposes patients with AAV to a higher risk of infection with Neisseria meningitidis even with the use of appropriate vaccination protocols and prophylactic antibiotics. Fortunately, therapeutic strategies to directly target C5a are currently being tested in clinical trials. S6 In the phase II CLEAR trial, treatment with avacopan, a small molecule inhibitor of the C5a receptor, was noninferior to that with high-dose glucocorticoids in patients with new or relapsing AAV receiving cyclophosphamide or rituximab for remission induction therapy. Patients receiving avacopan also had more rapid resolution of albuminuria. 
The majority of data implicating C5a in the pathogenesis of AAV have been derived predominantly from animal models, and a role of other complement components, including the membrane attack complex, cannot be completely excluded in human disease. However, the results of the CLEAR trial provide compelling evidence that C5a is also the key complement-derived mediator of AAV activity in humans. 9 The preliminary success of C5a blockade in the CLEAR trial provided the impetus for the ongoing ADVOCATE trial ( clinicaltrials.gov identifier: NCT02994927 ), a pivotal phase III trial testing whether avacopan can replace glucocorticoids during induction therapy. In addition, phase II studies are ongoing to test the safety and efficacy of IFX-1 ( clinicaltrials.gov identifiers: NCT03712345 and NCT03895801 ), a monoclonal antibody against the C5a molecule itself ( Figure 2 ). Blockade of C5a has the potential to significantly advance the treatment of AAV by obviating the need for glucocorticoids. Moreover, enhanced understanding of disease pathogenesis has revealed an important role of complement across a large spectrum of glomerulonephritides, including IgA nephropathy, C3 glomerulopathy, lupus nephritis, and anti–glomerular basement membrane disease. New agents with the ability to selectively manipulate the complement cascade are poised to revolutionize the treatment of glomerulonephritis by rapidly controlling disease activity while minimizing treatment-related toxicity. S7–S9 Disclosure All authors are currently working on the ADVOCATE trial with ChemoCentryx and the phase II trial of IFX-1 with InflaRx ( clinicaltrials.gov identifiers: NCT02994927 and NCT03895801 ). Our site previously participated in ChemoCentryx’s Phase II CLASSIC trial of avacopan ( clinicaltrials.gov identifier: NCT02222155 ). FBC and JLN have served as consultants for ChemoCentryx. 
Acknowledgments The authors thank the outstanding staff at the vasculitis and glomerulonephritis center for their work in patient care and data collection. Supplementary Material Supplementary File (Word) Supplementary References. | [
"ZONOZI",
"JENNETTE",
"HOFFMAN",
"STONE",
"DEGROOT",
"JAYNE",
"XIAO",
"SCHREIBER",
"JAYNE"
] |
57ba3aa42add47069e07a35a23aaf8b3_On the dielectric and magnetic properties of Al doped Pr123 for advanced devices A comparison with Z_10.1016_j.rinp.2023.106834.xml | On the dielectric and magnetic properties of Al doped Pr:123 for advanced devices: A comparison with ZnO | [
"Sedky, A.",
"Ali, Atif Mossad",
"Sayed, M.A.",
"Almohammedi, Abdullah"
] | For comparison with ZnO, we report the dielectric and magnetic properties of PrBa2Cu3-3xAl3xOy (Al:Pr123) samples with (0.00 ≤ x ≤ 1.00). The dielectric constant (ε′), ac electrical conductivity (σac) and F-factor decreased as x increased to 1.00, while the q-factor increased. Compared to ZnO, the x ≤ 0.05 samples show higher values of ε′, σac and F-factor, whereas the x ≥ 0.10 samples show a higher q-factor. In addition, the binding energy increased from 0.423 to 0.672 eV as x increased to 0.30, and above that it rose sharply, reaching 38.825 eV. The Cole-Cole plot shows a complete circle for the samples of x ≤ 0.05, and semicircular arcs for x ≥ 0.10 and ZnO. Moreover, the impedance of the grains and grain boundaries increases significantly as x increases to 1.00, but both remain lower than those of ZnO. Although ZnO exhibits poor ferromagnetic behavior, the Al:Pr123 samples show strong ferromagnetic behavior with evaluated magnetization parameters. These outcomes strongly recommend the Al:Pr123 samples, rather than ZnO, for devices such as integrated circuits, solar cells, super-capacitors and spintronics. | Introduction Despite the attractive qualities of n-type semiconductors, further improvement is required to widen their application range. Doping with transition metals is a common technique used to improve their structural, dielectric, and magnetic properties [1–3] . The insertion of metal defects may aid in the generation of oxygen vacancies, increasing the electronic trapping of charge carriers and, therefore, the nonstoichiometric conductivity [4] . These traps are mainly localized at the grain boundaries and usually absorb oxygen and capture some donor state electrons [5] . Consequently, the Schottky barrier capacitance is found to be frequency dependent due to the finite time constants of the deep trap states of the depletion layer [6] .
The presence of more than one valence state in the host lattice suggests polaronic-ionic conduction due to the hopping of electrons from lower to higher valence states [7,8] . This dielectric mechanism is mainly affected by interior defects such as space charge electrons and oxygen vacancies [9–11] . The response is fully characterized by the complex dielectric parameters over the frequency range up to 10 GHz [12–14] . It is well known that materials with high dielectric constants have received special attention in telecommunications and integrated microwave circuits [15–17] . In contrast, materials with a low dielectric constant are potential candidates for nonlinear optical and high-frequency devices [18–22] . For example, the dielectric medium can be improved by doping, owing to an increase in charge carrier concentration, to meet the required applications [23,24] . In some cases, the dielectric properties at RT show dispersion due to Maxwell-Wagner type interfacial polarization, and the dielectric parameters are decreased [25] . The Cole-Cole plots show that conduction takes place dominantly through the grain boundaries; no separate contribution from the grains is resolved, owing to the predominance of the grain boundary resistance [26,27] . According to the correlated barrier hopping (CBH) model [28] , conduction occurs via a bi-polaron hopping process wherein two electrons simultaneously hop over the potential barrier between the charged defect states, and the barrier height is correlated with the separation between them. However, carrier defects can be produced, and single polaron hopping then becomes the dominant process. Interestingly, the q-factor is an important parameter that represents the ratio of the energy stored in a capacitor to the energy dissipated in its equivalent series resistance. Since the q-factor is a measure of efficiency, a super-capacitor should have a high q-factor in order to limit the energy lost.
Spintronics concerns spin-charge coupling in metallic systems, and it differs from traditional electronics in that, in addition to the charge state, electron spins are used as a further degree of freedom, with implications for the efficiency of data storage and transfer. Ferromagnetic materials provide a surplus accumulation of different spins in a tiny region called a domain. These majority-up and majority-down domains are randomly scattered, and an externally applied magnetic field will line up the domains along the field direction, which is convenient for spintronics devices. Ferromagnetic (RTFM) behavior of transition metal (TM) doped ZnO close to or above RT is necessary for spintronic applications [29] . RTFM is typically caused by an exchange interaction between free delocalized carriers (holes or electrons) and the localized spins of TM, or by an exchange interaction between local spin-polarized electrons of TM and Zn-conduction electrons formed below the CB [30] . The ZnO can form partially filled d states or f states and partially filled spin states of unpaired electrons, which in turn supports the RTFM. Therefore, a lot of work has been done on the magnetic moment of the host lattice as well as the stable polarized state exhibited by the doping [31,32] . It has been reported that secondary phases can decrease the RTFM of ZnO when doped with magnetic ions like Co, Mn, Fe, and Ni [33,34] . Although the exact mechanism of RTFM is still being debated, it is widely accepted that the defects induced by either oxygen vacancies or doping may play an important role in the formation of RTFM [35] . The relationship between the magnetic moment and the valence state of Pr ions has been evaluated. The effective magnetic moment of the Pr ion is 2.51 μ B for PrO 2 , 2.78 μ B for Pr 6 O 11 , 2.82 μ B for PrO and 3.59 μ B for Pr 2 O 3 , which supports the magnetic behavior along with the expected valence state of the Pr ion above 3 + (Pr 3+/4+ ) [36,37] .
The lack of superconductivity in the PrBa 2 Cu 3 O 7 (Pr:123) compound has been attributed to the valence states of Pr, such as Pr +3 , Pr +4 , and Pr 3+/4+ [38,39] . In addition, some other mechanisms, such as Cooper pair breaking due to the Pr magnetic moment and strong hybridization between (Pr-4f) and (O-2p) conduction band electrons, may lead to the localization of hole carriers [40–42] . Moreover, an increase in the formation of oxygen vacancies in the Cu-O 2 planes is obtained in Pr:123 single crystals [43–47] . It was found that Pr:123, like ZnO, is a semiconductor at room temperature (300 K). Further, the dc electrical conductivity of Pr:123 against temperature is nearly similar to that of ZnO [48,49] . Interestingly, the dc conductivity of Pr:123 at 300 K is 9.74 × 10 −4 (Ω.cm) −1 , which is nearly similar to that of ZnO (12.50 × 10 −4 (Ω.cm) −1 ). Furthermore, most ZnO and related compounds suffer from some shortcomings in advanced applications, especially in spintronics devices. These findings prompted us to test Pr:123 as an alternative material that may be better qualified than ZnO for advanced applications. Therefore, a series of PrBa 2 Cu 3-3x Al 3x O y samples have been synthesised, and their dielectric and magnetic properties have been investigated. It is shown that some of the Al-doped Pr:123 samples can be used as an alternative material, better than ZnO, for devices such as integrated circuits, solar cells, supercapacitors, and spintronics, which in turn supports the comparison. To the best of our knowledge, the present work has never been done elsewhere. Experimental details A series of PrBa 2 Cu 3-3x Al 3x O y samples with various x values (0.00 ≤ x ≤ 1.00) are synthesised by the solid state reaction method.
The ingredients, Pr 6 O 11 , BaCO 3 and CuO of 4 N purity (99.99 %, Aldrich), are thoroughly mixed in the required proportions and calcined three times, with intermediate grinding at each stage, at 900 °C in air for a period of 16 h, after which the furnace is slowly cooled to room temperature. The resulting powders are ground, pressed into pellets, and sintered at 940 °C for a period of 24 h, and then cooled to RT with an intervening annealing for 24 h at 600 °C. The phase purity of the samples was tested by an x-ray diffractometer, Philips model PW 1710, with Cu-Kα radiation of wavelength 1.5418 Å at 40 kV and 30 mA settings and (20°–70°) diffraction angles with a step of 0.06°. The RT dielectric properties are measured at the National Physical Center (Cairo, Egypt) using broadband dielectric spectroscopy (BDS) utilizing a high resolution Alpha analyzer with an active sample head (Novocontrol GmbH) at different frequencies (0.10 Hz to 10 MHz). The pellets are sandwiched between two gold-plated stainless steel electrodes of 20 mm in diameter in parallel plate geometry. Fused silica fibers with a diameter of 50 μm are used as spacer material. Finally, the magnetization is measured against magnetic field at room temperature by using a vibrating sample magnetometer (VSM), Lakeshore 7400, up to an applied field of 18,000 G. Results and discussions Structural analysis Fig. 1 shows x-ray diffraction patterns of the PrBa 2 Cu 3-3x Al 3x O y samples. It is clear that the structure of the samples with x ≤ 0.30 is tetragonal, with no splitting between the (0 2 0), (0 1 5) and (1 2 3), (2 1 3) reflections. The structure changed to orthorhombic for x ≥ 0.60, as evidenced by crystallographic splitting of (0 2 0), (0 1 5), (1 2 3), and (2 1 3).
A few unidentified low intensity lines, recorded at angles of 41.1° (I = 18 au) for x = 0.60, and 31.388° (I = 9 au) and 44.954° (I = 20 au) for x = 1.00, may mark the limit of solubility of Al in place of Cu. The orthorhombic distortion (OD) is zero for the x ≤ 0.30 samples, and 0.0011 and 0.0015 for x = 0.60 and 1.00, respectively. In contrast, the wurtzite structure is confirmed for ZnO. However, the oxygen content, below or slightly close to or above 7, in the Cu-O chains is mainly responsible for the type of structure, which is orthorhombic for all RE:123 systems, but it can be changed to tetragonal if the oxygen content deviates from (7-δ). The oxygen is supposed to be transferred by Pr from the Cu-O 2 planes to the Cu-O chains, which in turn leaves some vacancies in the planes and turns the structure from orthorhombic to tetragonal. As x increases above 0.30, we expect that Al 3+ helps in decreasing the hole carriers and in promoting oxygen loss in the Cu-O chains sufficient to restore the orthorhombic structure. As x approaches 1.00, the lattice parameters and unit cell volume, listed in Table 1 , gradually decrease, which is related to the ionic radii of Al 3+ (0.55 Å) and Cu 2+/3+ (0.69 Å). The crystallite size D hkl given by Debye-Scherrer’s equation, D hkl = Kλ/(β cos θ) [50–52] , is listed in Table 1 (K = 0.91, λ = 1.5418 Ǻ and β is the half-maximum line width). The D hkl is calculated according to this equation for each XRD diffraction peak, and then the average of those values is taken and listed in Table 1 . It is clear that D hkl was slightly increased by Al, from 21.45 nm for x = 0.00 to 23.81 nm for x = 1.00, although the ionic size of Al is less than that of Cu, which is not clear to us at present. However, these values are lower than that of ZnO (26.97 nm).
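The per-peak Scherrer estimate described above can be sketched numerically; the peak positions and widths below are illustrative placeholders, not the measured XRD data:

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength=1.5418, k=0.91):
    """Crystallite size in nm from D = K*lambda / (beta*cos(theta)).

    two_theta_deg: peak position (2-theta) in degrees.
    fwhm_deg: full width at half maximum in degrees.
    wavelength: Cu-K-alpha wavelength in angstroms.
    """
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)              # FWHM must be in radians
    d_angstrom = k * wavelength / (beta * math.cos(theta))
    return d_angstrom / 10.0                   # angstrom -> nm

# Illustrative (assumed) peak list: (2-theta, FWHM) in degrees
peaks = [(32.5, 0.40), (38.4, 0.42), (46.7, 0.45)]
sizes = [scherrer_size(tt, w) for tt, w in peaks]
avg = sum(sizes) / len(sizes)                  # averaged as in the text
```

With these assumed widths the average comes out near 20 nm, i.e. the same order as the values quoted in Table 1.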
Therefore, we have used the uniform deformation model (UDM) of the Williamson-Hall (W-H) method to determine the D hkl and micro-strain ε m [53] according to; (1) β cos θ = kλ/D hkl + 4 ε m sin θ. The D hkl and ε m values are determined from the cut-off point on the y-axis and the slope of the graph between β cos θ and 4 sin θ, and are listed in Table 1 . It is clear that D hkl and ε m decreased as x increased to 1.00, which is consistent with the behavior of the ionic radii of both elements. The SEM micrographs shown in Fig. 2 confirm that the grains are randomly distributed with different and irregular shapes and sizes. The shape and size of the grains of the x ≤ 0.30 samples appear to be uniform, and the dark regions between the grains are very limited. With increasing x above 0.30, the grains become very close in size and are seen as agglomerations. In addition, the dark areas appear noticeably, and the identical shape of the grains starts to nearly disappear. The average grain size D SEM listed in Table 1 generally decreases as x increases to 1.00, ranging from 2.17 μm to 1.05 μm, which disagrees with the behavior of the crystallite size D hkl against x. The values of D hkl and D SEM for most of the Al-doped Pr:123 samples are of the same order of magnitude and compare well with those of ZnO, as listed in Table 1 . Dielectric measurements The loss factor, or tan (δ), is a measure of the tendency of dielectric materials to absorb some of the energy when the field is applied, and an ideal capacitor would have an infinite q-factor, meaning that no energy is lost and the equivalent series resistance equals zero. Therefore, most of the previous work has been directed towards a lower tan (δ) and a high q-factor.
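The W-H extraction of Eq. (1) amounts to a straight-line fit of β cos θ against 4 sin θ (intercept kλ/D, slope ε m ). A minimal sketch with assumed peak data, not the paper's measurements:

```python
import numpy as np

# Illustrative (assumed) peaks: (2-theta deg, FWHM deg)
peaks = [(23.0, 0.35), (32.5, 0.40), (38.4, 0.43), (46.7, 0.47), (58.2, 0.52)]
wavelength, k = 1.5418, 0.91                 # angstrom, shape factor

theta = np.radians(np.array([p[0] for p in peaks]) / 2.0)
beta = np.radians(np.array([p[1] for p in peaks]))

y = beta * np.cos(theta)                     # left-hand side of Eq. (1)
x = 4.0 * np.sin(theta)                      # abscissa of the W-H plot
slope, intercept = np.polyfit(x, y, 1)       # least-squares line

strain = slope                               # dimensionless micro-strain
d_nm = (k * wavelength / intercept) / 10.0   # crystallite size, nm
```

The slope gives the micro-strain directly, and the y-intercept gives the size; with these assumed widths the strain is of order 10^-3, comparable to typical UDM values.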
The real part ε′ of the dielectric constant, the loss factor (tan δ) and the q-factor are given by [54,55] ; (2) ε′ = Cd/(ε 0 S) = C/C 0 , tan δ = ε″/ε′, Q = 1/tan(δ), where C and C 0 are the capacitances of the sample and the empty cell, respectively (C 0 = ε 0 S/d), and ε″ is the imaginary part. The plot of ε′ versus f in Fig. 3 (a) shows that ε′ gradually decreased as f increased up to 10 5 Hz, after which it saturated [56,57] . The decrease of ε′ against f can be explained by the fact that at low f , charge carriers respond faster to the field, generating some space charge due to vacancies and micro-pores in the samples, resulting in higher values of the dielectric constants. But at high f , the charge carriers are unable to follow the rapid changes in the field, resulting in low values. The decrease of ε′ against f can also be explained in terms of the different types of polarization P , such as ionic P i , orientation P o , and electronic P e [58,59] . The P i occurs at (0.1–10 12 ) Hz and is due to the displacement of the positive and negative ions relative to the field. The P o occurs at kHz and MHz frequencies and is due to the orientation of the permanent dipoles of the molecules. But, at room temperature, P o is weak due to the inability of the dipoles to rotate fast enough; therefore, they oscillate behind the field. The P e is due to a relative displacement of the nucleus relative to its surrounding electrons, but it is sensitive up to 10 15 Hz (>visible light). Therefore, ε′ decreases, reaching a constant value at higher f corresponding to interfacial polarization. At high f , the space charges and dipoles cannot follow the field, and hence there is no space charge for polarization; therefore, a saturation of ε′ is obtained. Likewise, the decrease of ε′ against x may be related to the difference in electronegativity between Al (1.61) and Cu (1.90), which in turn reduces the strength of the Al-O ionic bond and decreases the dielectric polarization [60–63] .
This may also be due to a decrease in the number of electrons generated by charge transfer (Al 3+ + e – → Al 2+ ), which leads to a decrease in P o . This process continues to decay with further Al addition, and thus poor dielectric properties are obtained at x = 1.00. Moreover, the decrease of crystallite size leads to a decreasing number of dipoles, which results in a decrease in polarization and hence in the ε′ values for the doped samples. However, similar behavior was obtained for the dielectric loss (tan δ) shown in Fig. 3 (b), whereas the q-factor showed an inverse behavior, as shown in Fig. 3 (c). For further comparison, the reported data for ZnO are included [63] . Although ε′ of ZnO decreased as f increases to 10 3 Hz, the ε′ values are lower than those of the x ≤ 0.05 samples, but they are higher than those of the other samples. Above 10 3 Hz, the values of ε′ are nearly the same for the samples of x ≤ 0.05 and ZnO. For example, ε′ = 4650 at 0.1 Hz for the x = 0.05 sample, which is approximately 11 times greater than that of ZnO (425). This difference gradually decreased as f increased and became zero at about 10 4 Hz. However, an inverse behavior was obtained for the x ≥ 0.10 samples. According to the behavior of ε′ against x, the x ≥ 0.10 samples are convenient for high-frequency devices, whereas the x ≤ 0.05 samples are convenient for integrated circuits, as is ZnO. Similar behavior is obtained for the tan δ, but the difference extended to about 10 5 Hz. This behavior strongly suggests using the x ≤ 0.05 samples for dielectric devices, but only at low frequencies (10 3 –10 5 ) Hz. In addition, the q-factor gradually increased as x increases to 1.00. Interestingly, the values of the q-factor at 10 MHz for x ≥ 0.10 are 3 orders of magnitude higher than those of ZnO and the x ≤ 0.05 samples, which strongly recommends them for the design of supercapacitors.
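The relations of Eq. (2) can be evaluated directly from a parallel-plate measurement; the pellet thickness, capacitance, and ε″ below are assumed round numbers for illustration, not the measured values:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def real_permittivity(capacitance, thickness, area):
    """eps' = C*d / (eps0*S) for a parallel-plate sample."""
    return capacitance * thickness / (EPS0 * area)

def loss_and_q(eps_real, eps_imag):
    """tan(delta) = eps''/eps' and q-factor Q = 1/tan(delta)."""
    tan_delta = eps_imag / eps_real
    return tan_delta, 1.0 / tan_delta

# Illustrative pellet geometry and low-frequency capacitance (assumed)
d = 1.5e-3                        # pellet thickness, m
S = math.pi * (10e-3) ** 2        # area of a 20 mm diameter pellet, m^2
C = 1.2e-8                        # measured capacitance, F (assumed)

eps_real = real_permittivity(C, d, S)
tan_delta, q = loss_and_q(eps_real, 2000.0)   # assumed eps'' = 2000
```

With these assumptions ε′ comes out in the thousands, the same order as the low-frequency values quoted for the x ≤ 0.05 samples, and Q is simply the reciprocal of the loss factor.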
The total conductivity can be obtained by [64] ; (3) σ t (ω) = σ dc + σ ac = σ dc + Bω s , where σ dc is the dc conductivity calculated at zero f, B is a constant, and s is the frequency exponent. Fig. 4 shows the ac conductivity (σ ac ) against f. It is evident that σ ac gradually increases as f increases, but it decreases as x increases, as does ε′. As f is raised, the displacement of carriers is reduced, and at a critical frequency ω p , σ ac follows the relation (σ ac ∼ ω s ) with (0 ≤ s ≤ 1) for hopping conduction [65] . It is evident that the values of σ ac for the x ≤ 0.05 samples are higher than those of ZnO as f increases to 10 5 Hz, but above that they are nearly the same, as are ε′ and tan δ. The slope of the linear fit between ln σ ac and ln(ω) gives the values of s and B as listed in Table 2 , in which s is between 0.635 and 0.996 for all samples (electronic conduction). The hopping frequency ω h of the charge carrier is calculated by ω h = (σ dc /B) 1/s , shown in Fig. 5 (a), and listed in Table 1 . It is clear that ω h drastically decreases as x increases to 0.10, followed by an increase with increasing x to 1.00. However, the values of ω h for the x = 0.10 and 0.30 samples are comparable with that of ZnO (37.18 Hz), and between (2–3) orders of magnitude lower than those of the other samples. According to the correlated barrier hopping (CBH) model, the binding energy W m is given by [66] ; (4) s = 1 − 6K B T/[W m + K B T ln(ωτ 0 )] ≈ 1 − 6K B T/W m , where τ 0 = 10 −13 s is the relaxation time of atomic vibration. W m against x, shown in Fig. 5 (b) and listed in Table 2 , is slightly increased (0.423–0.672 eV) as x increases to 0.30, but above that it sharply increases to 3.376 and 38.825 eV for the x ≥ 0.60 samples, which is very strange and cannot be explained at present. However, the values of W m for the x ≤ 0.30 samples are comparable with that of ZnO (0.822 eV).
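The extraction of s, B, W m and ω h described above can be sketched as follows, using synthetic data generated from an assumed power law rather than the measured conductivity:

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant, eV/K

# Synthetic ac conductivity obeying sigma_ac = B * omega**s (assumed s, B)
s_true, B_true = 0.8, 1e-9
omega = np.logspace(3, 7, 20)                    # angular frequency, rad/s
sigma_ac = B_true * omega ** s_true

# Slope and intercept of ln(sigma_ac) vs ln(omega) give s and ln(B)
s_fit, lnB = np.polyfit(np.log(omega), np.log(sigma_ac), 1)
B_fit = np.exp(lnB)

# CBH approximation of Eq. (4): s = 1 - 6*kB*T/Wm  =>  Wm = 6*kB*T/(1 - s)
T = 300.0
Wm = 6 * KB * T / (1 - s_fit)                    # binding energy, eV

# Hopping frequency: omega_h = (sigma_dc / B)**(1/s), assumed sigma_dc
sigma_dc = 1e-6
omega_h = (sigma_dc / B_fit) ** (1 / s_fit)
```

For the assumed s = 0.8 at 300 K this yields W m ≈ 0.78 eV, which is the same order as the 0.423–0.822 eV values quoted for x ≤ 0.30 and ZnO.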
A higher value of the F-factor (figure of merit), given by F = σ ac /ε′, is good evidence for the use of the samples in solar cell design [67,68] . Fig. 6 shows the F-factor against f for all samples, in which the F-factor gradually increased as f increased to 10 7 Hz. In contrast, the F-factor decreased with increasing x, following the behaviors of ε′ and σ ac . It is also seen that the F-factor of the x ≤ 0.05 samples is higher than that of ZnO as f increases to about 10 6 Hz, but above that it is nearly the same. Fig. 7 (a–d) shows the Cole-Cole plots of Z″ against Z′ for all samples. It is evident that the samples of x ≥ 0.10 and ZnO show a single semicircular arc in the impedance spectrum, whereas the samples of x ≤ 0.05 show a complete circle. The semicircular arcs can be represented by an equivalent electrical circuit consisting of a combination of resistive and capacitive elements. Semi-circular arcs centred below the Z′ axis indicate non-Debye relaxation, in which the dipole moments relax through different relaxation times [69] . For the x ≥ 0.10 and ZnO samples, the frequency range up to 10 7 Hz is not sufficient to separate the conduction in the grains and their boundaries, and therefore a single arc of an identical τ = (1/ω) = RC could be obtained [70,68,71] . In contrast, it is sufficient to determine the impedance resistance of the grains Z(g) and grain boundaries Z(gb) for the x ≤ 0.05 samples. This fact suggests that the excess Al doping probably exists in the grain boundary region, possibly as a very thin secondary phase, which could benefit grain boundary transport for grain growth [72,70,74] . A single arc is mainly due to the combined contribution of the resistive core of the grain and the grain boundary [73] .
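The equivalent-circuit picture of the Cole-Cole arcs — one parallel RC element per contribution, connected in series — can be illustrated numerically; the resistances and capacitances below are assumed values for a sketch, not fitted parameters:

```python
import numpy as np

def rc_impedance(R, C, omega):
    """Complex impedance of a parallel R-C element."""
    return R / (1 + 1j * omega * R * C)

# Assumed grain / grain-boundary parameters (fast vs slow relaxation)
Rg, Cg = 1e3, 1e-9        # grain: smaller R, short time constant
Rgb, Cgb = 2e3, 1e-6      # grain boundary: larger R, long time constant

omega = np.logspace(0, 8, 400)                    # rad/s
Z = rc_impedance(Rg, Cg, omega) + rc_impedance(Rgb, Cgb, omega)

# Low-frequency intercept of Z' approaches Rg + Rgb; each semicircle's
# diameter equals the corresponding resistance, and each -Z'' peak
# occurs at omega = 1/(R*C) with height R/2.
z_low = Z.real[0]
peak = float((-Z.imag).max())
```

Two well-separated time constants give two resolvable arcs (as for x ≤ 0.05); when the relaxation times are too close, or one lies outside the measured window, only a single merged arc is seen, as described for x ≥ 0.10 and ZnO.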
Other reports [71] believed that a single arc is due to the combined contribution of the resistive core of the grain and the grain boundary. It is also found that different shapes of impedance spectra can be recorded for ZnO with different 3d-transition metal additions, such as one semicircle (as obtained here), two semicircles, one arc forming a quarter of a circle, and two arcs [73] . However, the radius/diameter of each plot was used to determine Z′(g) and Z′(gb) against x, as shown in Fig. 8 and listed in Table 2 . It is found that both of them are significantly increased as x increases to 0.10, but above that they show a slight increase. To be more specific, the [Z(gb)/Z(g)] ratio is close to 2 for all samples except for x = 0.05 and 0.10, where the ratios are 2.53 and 3.21, respectively. Furthermore, the Z′(g) and Z′(gb) of ZnO are respectively about 19,500 and 550 times more than those of the x ≤ 0.05 samples, but they are about 1.02 and 10 times lower than those of the x ≥ 0.10 samples. RTFM analysis Fig. 9 (a–b) depicts the magnetization against magnetic field (M-H) for the samples. It is apparent that ZnO exhibits a very weak hysteresis loop with a saturated magnetization (M s ) of about 0.0005 (emu/g) and can be classified as showing weak (poor) ferromagnetic behavior [75,76] . In contrast, the magnetization of Al-doped Pr:123 gradually increased as x increased to 0.10 and showed ferromagnetic behavior with [M s = 0.654 (emu/g) and dc susceptibility (X sp ) = 6.08 × 10 −6 (emu.G/g)]. However, the x = 0.30 sample shows a strong ferromagnetic behavior with the highest M s and X sp of 0.826 (emu/g) and 17.30 × 10 −6 (emu.G/g), respectively, although the field is not sufficient to reach saturation, or the Curie temperature is slightly below RT [77,78] . However, RT ferromagnetic behavior is also obtained for the x ≥ 0.60 samples, but with relatively lower M s and X sp of 0.686 and 0.580 (emu/g), and 8.67 × 10 −6 and 5.25 × 10 −6 (emu.G/g), respectively. Fig.
10 (a–b) and Table 3 show the variation of the coercive field H c , M s and retentivity M r against x. All of them increased with increasing x to 0.30, but above that they decreased. Similar results were obtained for the magnetic moment μ (WM s /5585), the magnetic anisotropy (H c M s /0.98) [79,80,33] , and the area under the M-H curve, as shown in Fig. 10 (c–e). In contrast, the squareness S q (M r /M s ) increased as x increased to 0.05, followed by a decrease with further increase of x to 1.00. Excluding the squareness, it is seen that the magnetization parameters of Al-doped Pr:123 are 1000 times higher than those of ZnO, which strongly recommends them as alternative DMS. This agrees with the behaviors of the binding energy and also of the tetragonal-orthorhombic structure. However, the magnetization M at any field H is related to M s using the Stoner–Wohlfarth (S–W) approach as follows [81,82] ; (5a) M = M s (1 − β/H 2 ), where β is a constant related to the magnetic anisotropy of the field through the sample, and it is related to the effective anisotropy constant K eff and the anisotropy field H a as [83,84] ; (5b) K eff = M s (15β/4) 1/2 and H a = 2K eff /M s . The constant β and the empirical M s can be obtained from the slope and intercept of M against (1/H 2 ), as shown in Fig. 11 (a, b), and then H a is evaluated. The K eff should change according to the variation of magnetic anisotropy sources such as shape and surface anisotropies, and the strengthening/weakening of the magnetic interaction within the samples. However, the values of M s , β, K eff and H a listed in Table 3 are consistent with the above M s evaluations. Furthermore, the values of H a and K eff are respectively 450 and 400 times higher than those obtained for H c and γ, indicating hard magnetic samples.
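The S-W extraction of M s , β, K eff and H a can be sketched with synthetic high-field data obeying Eq. (5a); the M s and β values below are assumed for illustration, not the measured ones:

```python
import numpy as np

# Synthetic high-field magnetization obeying M = Ms*(1 - beta/H**2)
Ms_true, beta_true = 0.80, 4.0e6     # emu/g and G^2 (assumed)
H = np.linspace(8000, 18000, 30)     # applied field, G
M = Ms_true * (1 - beta_true / H ** 2)

# Linear fit of M against 1/H^2: intercept = Ms, slope = -Ms*beta
slope, Ms_fit = np.polyfit(1.0 / H ** 2, M, 1)
beta_fit = -slope / Ms_fit

# Effective anisotropy constant and anisotropy field (Eq. (5b)):
# K_eff = Ms*sqrt(15*beta/4), H_a = 2*K_eff/Ms
K_eff = Ms_fit * np.sqrt(15 * beta_fit / 4)
H_a = 2 * K_eff / Ms_fit
```

Plotting M against 1/H^2, as in Fig. 11, linearizes the approach to saturation, so one polynomial fit recovers both M s (intercept) and β (from the slope), after which K eff and H a follow algebraically.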
The slight increase/decrease of the coercive and anisotropy fields of the samples may be related to the strength/weakness of pinning centers produced during the magnetization reversal process, which in turn change the surface charge density and the electron-spin mechanism [85–88] . The values of β and H_a for Al-doped Pr:123 are lower than those of ZnO, whereas the values of K_eff are higher. Remarkably, Pr:123 exhibits ferromagnetic behavior, and Al doping up to x = 0.30 enhances the ferromagnetic parameters even though Al itself is nonmagnetic. We believe that when Cu2+ is replaced by Al3+, the ferromagnetic domains of Pr:123 are supported through the interaction of Pr magnetic polarons with the extra free carriers induced by Al [84] . Besides, Al has a greater tendency than Cu to form the Pr-O-Al ferromagnetic clusters required for the double-exchange mechanism of ferromagnetic behavior [89–92] . If this happens, the magnetization will extend across the entire M-H curve and will be unable to reach saturation at x = 0.30. The further weakening of ferromagnetic behavior at higher x may be due to the solubility of Al into the Pr:123 lattice, which in turn reduces the Pr-O-Al ferromagnetic clusters necessary for the double-exchange mechanism. Conclusion In comparison with ZnO, the dielectric and magnetic properties of Al:Pr123 samples have been investigated. Compared with ZnO, the Al:Pr123 samples with x ≤ 0.05 show higher values of ε′, σ_ac and F-factor. Interestingly, the q-factor at 10 MHz for the x ≥ 0.10 samples could be increased to about 1000 times that of ZnO. In addition, the binding energy was enhanced to 38.825 eV for x = 1.00, which is 47.23 times higher than that of ZnO (0.822 eV). The Z′(g) and Z′(gb) generally increase as x increases to 1.00, but they are lower than those of ZnO.
Although ZnO exhibits only poor ferromagnetic behavior, strong ferromagnetic behavior with well-evaluated magnetization parameters could be obtained for the Al:Pr123 samples, which behave as hard magnets. The Al:Pr123 samples are therefore recommended, in preference to ZnO, for devices such as integrated circuits, solar cells, super-capacitors and spintronics. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through large group Research Project under grant number RGP2/527/44. Ethical approval The present manuscript is original work and has never been under evaluation for any other journal. Authors contributions The present work was done entirely by the authors, and all of them participated in it. | [
"BHAKTA",
"SAMANTA",
"SINGH",
"NING",
"MOHAMED",
"MOHAMED",
"MANSOURMOHAMED",
"SAHAY",
"SEDKY",
"MOHAMED",
"SAADI",
"ANSARI",
"SINGH",
"ZAMIRI",
"IRSHAD",
"OMRI",
"JOSHI",
"SINGH",
"ACHARYA",
"AZAM",
"PARVEEN",
"AHMED",
"MUY",
"GUPTA",
"BENTAHER",
"WOJNAROWICZ",
"... |
6bcb731cfa3b4fd7a38bd3175a419b3d_Cable Graft Simple Superior Capsule Reconstruction Technique for Irreparable Rotator Cuff Tear Using_10.1016_j.eats.2020.01.009.xml | Cable Graft: Simple Superior Capsule Reconstruction Technique for Irreparable Rotator Cuff Tear Using a Teflon Patch | [
"Okamura, Kenji",
"Makihara, Takeshi"
] | We describe a simple superior capsule reconstruction technique for irreparable rotator cuff tear using a Teflon patch. In this technique, a triple-folded Teflon patch, suture tape, and a strong suture penetrating through the graft are fixed to the glenoid and greater tuberosity using a suture anchor. This allows for reconstruction of the superior capsule while simultaneously playing a role as a spacer. This procedure’s greatest advantage is its simplicity; it is easy to perform, has a short operative time, and avoids the need to collect autologous tissue. More time is saved, as suturing and tying are not required. We believe our study could aid orthopaedic surgeons in clinical decision-making when encountering irreparable rotator cuff tears. | Introduction (With Video Illustration) The treatment for irreparable rotator cuff tear without osteoarthritis is controversial. Different surgical modalities, such as subacromial decompression, partial rotator cuff repair, tendon transfer, and shoulder replacement with a reverse prosthesis, are available to treat patients with rotator cuff tear. The concept of superior capsule reconstruction with autologous fascia (fascia lata) has been introduced and has shown superior results. 1-4 The use of a subacromial balloon spacer has recently been reported, 5 but is not widely practiced by surgeons. We describe a simple superior capsule reconstruction technique for irreparable rotator cuff tear using a Teflon felt patch 6 ( Video 1 , Table 1 ). Preoperative Preparation Preoperatively, we prepare various patch sizes to minimize the intraoperative time. The patch is prepared from Teflon (2.9 mm thick; C. R. Bard, Inc., Murray Hill, NJ) by tri-folding (to approximately 9 mm thickness) and suturing the layers at 5-mm intervals. Five stitches passing through all layers are made per side, except on the side of the crease ( Fig 1 ). Patches are prepared to 35 mm, 40 mm, or 45 mm in length by 30 mm or 35 mm in width for surgery.
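Intraoperatively, the choice among these prepared sizes reduces to picking the smallest stocked patch whose dimensions cover the two measurements described in the surgical technique. The helper below is a hypothetical illustration of that selection logic, not part of the published technique; the stocked sizes are the ones just listed.

```python
# Stocked patch sizes (length x width, mm) from Preoperative Preparation.
STOCK_SIZES = [(length, width) for length in (35, 40, 45) for width in (30, 35)]

def choose_patch(glenoid_to_tuberosity_mm, ap_defect_mm):
    """Return the smallest stocked patch (by area) whose length covers the
    glenoid-to-greater-tuberosity distance and whose width covers the
    anteroposterior defect diameter, or None if no stocked size is large enough."""
    fits = [(l, w) for l, w in STOCK_SIZES
            if l >= glenoid_to_tuberosity_mm and w >= ap_defect_mm]
    return min(fits, key=lambda size: size[0] * size[1]) if fits else None
```

For example, a 38-mm glenoid-to-tuberosity distance with a 32-mm defect diameter would select the 40 × 35 mm patch.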
Surgical Technique The surgical procedure is shown in Video 1 and Table 1 . Pearls and pitfalls are shown in Table 2 . The procedure is performed with the patient in a lateral decubitus position with the shoulder at 45° abduction using 4-kg arm traction. Four portals are used (posterior, anterior, posterolateral, and anterolateral). After diagnostic arthroscopy, the long head of the biceps tendon is conserved if noted to be normal; however, if it is torn or absent, debridement of the stump is performed. Dislocated or partially torn tendons are resected. Subacromial decompression ( Fig 2 A) and debridement for the scapular neck ( Fig 2 B) are performed followed by stump debridement for the supraspinatus tendon. The subscapularis tendon and infraspinatus muscle, if reparable, are repaired with a double row or single row to the lesser tubercle and greater tubercle, respectively. Patch size is determined based on the distance between the glenoid fossa and greater tuberosity and the anteroposterior diameter of the rotator cuff defect site, both of which are measured intraoperatively in each case ( Fig 3 A). Two anchors (SwiveLock C 5.5 mm; Arthrex, Naples, FL) are inserted at the superior rim of the glenoid ( Fig 3 B). The posterior anchor is inserted from the posterior portal ( Fig 3 C) to the 11-o’clock (1 o’clock in left shoulder) position. The anterior anchor is inserted from the anterior portal ( Fig 3 D) to the 1-o’clock (11 o’clock in left shoulder) position in the right shoulder. One suture used for the anchor is of the strong suture type (FiberWire No. 2; Arthrex Inc.), whereas the other is suture tape (ULTRATAPE; Smith & Nephew, Andover, MA). The anterolateral portal is extended to 2 cm with a 15-mm-diameter cannula (Thoracoport; Medtronic Co., Minneapolis, MN) ( Fig 4 A). The sutures and tapes are passed through the Teflon patch on the outside of the body ( Fig 4 B and C). 
Then, the Teflon patch is gripped with forceps ( Fig 4 D) and inserted at the anterolateral portal. The Teflon patch is advanced inward until it comes in contact with the glenoid fossa ( Fig 4 E and F). A 10-mm-diameter cannula is inserted into the Thoracoport to prevent excessive drainage leading to an insufficient visual field and to prevent thread tangling ( Fig 5 A). Sutures and tapes are fixed to the greater tuberosity with a suture anchor ( Fig 5 B and C). Because tying is not performed at either the glenoid or greater tuberosity sides, the patch is not fixed to bone directly by suturing. This procedure is shown in Figure 6 . Postoperative Rehabilitation Immediately after surgery, the shoulder is immobilized using a shoulder abduction brace (Global Sling; Cosmos, Japan). On postoperative day 1, patients begin passive flexion movement in a supine position guided by physiotherapists. On postoperative day 7, they begin passive external rotation movement. Active elevation exercise training is performed 3 weeks after surgery, and the abduction pillow is removed. One month after surgery, the brace is completely removed, allowing shoulder movement and activities of daily living. Activities involving light work are allowed 2 months after the operation. Discussion In treating irreparable rotator cuff tears, preventing the upward migration of the humeral head and stabilizing its rotational center are necessary to allow for optimal outcomes in active shoulder elevation postoperatively. This is achieved by replacing the glenoid with a spherical structure in the reverse shoulder prosthesis and, in superior capsule reconstruction with autologous fascia, by fixing the scapula and humeral head at an appropriate tension using an elastic femoral fascia patch. The balloon spacer aims to occupy the subacromial space without gaps. 1-6 In our technique, the scapula and humeral head are secured by sutures and a Teflon patch.
This allows for functional reconstruction of the superior capsule, as can be seen from the arthroscopic observation that the graft suppresses the upward migration of the humeral head when it is pushed up after fixation of the graft. On radiography, upward migration of the humeral head is improved immediately after surgery ( Fig 7 ). Conversely, since the suture and the Teflon patch have low elasticity, the tension must be minimized to ensure shoulder adduction movement. The superior translation force at shoulder abduction seems to vary with individual differences (e.g., the person's physique, muscular strength, balance, etc.) and is unknown; therefore, there is a possibility that the tension and the suppression of humeral head upward migration are insufficient. However, the Teflon patch has a thickness of about 9 mm and can occupy the subacromial space with virtually no gap. In this case, the graft can be expected to play a role as a spacer. The fundamental concept of our technique is a combination of superior capsule reconstruction and the employment of a subacromial spacer. Advantages and limitations are shown in Table 3 . The patch is held by penetrating threads; thus, there is no concern of dislodgement through forward or backward movement, and we have adopted a method that does not fix the patch and bone directly by suturing. The greatest advantage of this procedure is the simplification of the surgery. The operative time is reduced, as there is no need to collect autologous tissue because artificial materials are used. In addition, no suturing and tying is necessary in the vicinity of the suture anchor insertion site, where the visual field is limited. Although anchor failure and suture tears may lead to patch instability and poor results, this method avoids overstressing the suture site because suturing is not performed, and it helps prevent suture tears caused by subacromial impingement, as the suture is not exposed to the subacromial space.
Since the patch material can be confirmed by radiography, it is easy to assess the patch condition radiographically after surgery. A limitation of this surgery is related to the material used. The Teflon patch is not bioabsorbable or replaceable and cannot be biologically fused to bone or replaced with physiological tissue. Foreign body reactions occur at a certain rate; thus, careful observation is required after surgery. Further research is needed to develop a more biocompatible material. Another limitation is that biomechanical testing for durability is necessary. Durability in this procedure may depend on anchor and bone fixation, and on thread durability. However, even if one fixation site fails, the volume fraction of the patch in the subacromial space is so large that significant instability seems unlikely to occur. In that case, we believe that the patch should play a role as a spacer, as mentioned previously. The last limitation is that the risk of infection may be greater than with autologous tissue. 7 Supplementary Data Video 1 Typical case of a large retracted rotator cuff tear treated by the cable graft technique and repair of the infraspinatus tendon. The video illustrates steps 1 to 12 as specified in Table 1 . Suture bridge repair of the infraspinatus tendon, followed by Teflon patch insertion and fixation, is demonstrated. Arthroscopic intraarticular and posterolateral subacromial views, and overall view of the right shoulder of the patient in a lateral decubitus position. ICMJE author disclosure forms
"MIHATA",
"MIHATA",
"MIHATA",
"MIHATA",
"DERANLOT",
"STEWART",
"KIMURA"
] |
17d07023cfef4650b47cf2cf0fd45b51_Histopathological and molecular study of Neospora caninum infection in bovine aborted fetuses_10.12980_APJTB.4.201414B378.xml | Histopathological and molecular study of Neospora caninum infection in bovine aborted fetuses | [
"Kamali, Amir",
"Seifi, Hesam Adin",
"Movassaghi, Ahmad Reza",
"Razmi, Gholam Reza",
"Naseri, Zahra"
] | Objective
To estimate the extent to which abortion in dairy cows was associated with Neospora caninum (N. caninum) and to determine the risk factors of neosporosis in dairy farms from 9 provinces of Iran.
Methods
A polymerase chain reaction (PCR) test was used to detect Neospora infection in the brains of 395 bovine aborted fetuses from 9 provinces of Iran. In addition, the brains of aborted fetuses were taken for histopathological examination. To identify the risk factors associated with neosporosis, data analysis was performed with SAS.
Results
N. caninum was detected in 179 (45%) of 395 fetal brain samples of bovine aborted fetuses using PCR. Among the PCR-positive brain samples, only 56 were suitable for histopathological examination. The characteristic lesions of Neospora infection, including non-suppurative encephalitis, were found in 16 (28%) of these PCR-positive samples. The risk factors, including season, parity of dam, history of bovine virus diarrhea and infectious bovine rhinotracheitis infection in the herd, cow’s milk production, herd size and fetal appearance, did not show an association with the infection. This study showed that Neospora-caused abortion was significantly more frequent in the second trimester of pregnancy than in other periods. In addition, a significant association was observed between Neospora infection and stillbirth.
Conclusions
The results showed that N. caninum infection was detected in a high percentage of aborted fetuses, and that at least one fourth of the abortions were caused by Neospora infection. These results indicate that the number of abortions associated with this protozoan is higher than previously reported in Iran. | 1 Introduction Neospora caninum ( N. caninum ), an apicomplexan protozoan, is an important cause of fetal abortion in cattle and causes economic losses in dairy herds worldwide[ 1 ]. This parasite was first described in 1984 in puppies with encephalomyelitis and myositis[ 2 ]. Historically, in ruminants, the first N. caninum infection was diagnosed in a congenitally infected lamb in England[ 3 ]. The main route of transmission in cattle is vertical (80%-100%), although cows can also be infected by ingestion of oocysts excreted by dogs, and abortion may be observed at any stage of pregnancy[ 4,5 ]. Dogs are both definitive and intermediate hosts for N. caninum [ 6 ]. It seems that the abortion rate in infected animals is three to seven times higher than in uninfected ones. The main sign of Neospora infection in adult cows is abortion, which occurs most often at 5-6 months of gestation. N. caninum can also cause repeated abortion in cows[ 1 ]. Clinical signs have been reported in calves younger than 2 months, including neurological signs, weight loss and ataxia. On examination of the nervous system, a decreased knee reflex and decreased peripheral sensation are evident. Calves may have protruding or asymmetric eyes. N. caninum occasionally causes birth defects such as hydrocephalus, spinal canal narrowing and scoliosis[ 1 ]. N. caninum DNA has been detected by polymerase chain reaction (PCR) and immunohistochemistry methods in the brains of aborted fetuses in Iran[ 4,7 ].
Seroepidemiological studies have shown that the prevalence of Neospora infection is relatively high in dairy cattle[ 8–10 ] and dogs in Iran[ 11,12 ]. The objectives of the present study were to estimate the extent to which abortion in dairy cows was associated with N. caninum and to determine the risk factors of neosporosis in dairy farms in Iran. The present study is the first comprehensive assessment of abortion associated with N. caninum in dairy herds in Iran. 2 Materials and methods The study was performed in 9 provinces (Khorasan Razavi, North Khorasan, South Khorasan, Golestan, Mazandaran, Isfahan, Chaharmahale Bakhtiari, Kermanshah and West Azerbaijan) ( Figure 1 ). The climate of these provinces varies from cold to warm, and they have almost 280 000 cattle in dairy herds. The herd size varied from farm to farm, with a range of 20 to 9 000 cattle. Holstein/Friesian was the most common breed of cattle. This study was performed over a 4-year period (2009-2013) in 45 dairy herds. 2.1 Sample collection During 2009 to 2013, 395 aborted bovine fetuses at different stages of gestation were referred to the Center of Excellence in Ruminant Abortion and Neonatal Mortality. First, the aborted fetuses were necropsied and samples of brain were collected under aseptic conditions. One half of the brain was taken for PCR and the other half for histopathological examination. Collected fetal samples were centrifuged at 2 000 × g for 10 min and stored at −20 °C until use. 2.2 DNA extraction and PCR One half of the brain was homogenized with a stirrer, and DNA was extracted from a 1 g homogenate sample using a commercial kit (Cinnagen Inc., Iran) according to the manufacturer’s instructions. PCR was performed as described by Müller et al .[ 13 ]. The Np6/Np21 primer pair (5’ GGG TGT GCG TCC AAT CCT GTA AC 3’ - 5’ CTC GCC AGT CAA CCT ACG TCT TCT 3’) was used to amplify the 337 bp DNA fragment.
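As a quick sanity check on the primer pair quoted above, basic properties (length, GC content, and a rough Wallace-rule melting estimate, Tm ≈ 2(A+T) + 4(G+C)) can be computed directly from the published sequences; this is illustrative bookkeeping, not part of the study's protocol.

```python
# Np6/Np21 sequences as given in the text, with spaces removed.
NP6 = "GGGTGTGCGTCCAATCCTGTAAC"
NP21 = "CTCGCCAGTCAACCTACGTCTTCT"

def primer_stats(seq):
    """Length, GC count/percentage, and Wallace-rule Tm = 2*(A+T) + 4*(G+C)."""
    gc = sum(1 for base in seq if base in "GC")
    at = len(seq) - gc
    return {
        "length": len(seq),
        "gc": gc,
        "gc_percent": round(100.0 * gc / len(seq), 1),
        "tm_wallace": 2 * at + 4 * gc,
    }

stats_np6 = primer_stats(NP6)
stats_np21 = primer_stats(NP21)
```

Both primers come out near 55% GC with similar Wallace-rule Tm values (72 and 74), consistent with a well-matched pair.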
2.3 Histopathological examination For the histopathological study, brain tissues of aborted fetuses were fixed in 10% neutral buffered formalin. They were then dehydrated through graded alcohols and embedded in paraffin wax. Sections were cut at 5 μm, deparaffinized, rehydrated and stained with hematoxylin and eosin (H & E). The stained sections were then assessed histopathologically by light microscopy. Brains were examined to identify lesions and to evaluate the parasite distribution. The sections were carefully examined at ×100, ×200, ×400 and ×1 000 magnifications. 2.4 Risk factors and statistical analysis To identify the risk factors and clarify other factors associated with Neospora -induced abortion in dairy cattle, the risk factors including season, parity of dam, cow’s milk production, history of bovine viral diarrhea (BVD) and infectious bovine rhinotracheitis (IBR) infection in the herd, herd size, aborted fetal age, occurrence of stillbirth and fetal appearance were statistically analyzed. Statistical analysis was performed with SAS (Version 9.2). Chi-square analysis and a logistic regression model were used to identify risk factors associated with N. caninum -infected abortion. 3 Results Brain samples of a total of 395 aborted fetuses were examined by PCR for detection of N. caninum . Of the 395 brain samples, 179 (45%) were positive ( Figure 2 ). Only 56 of the PCR-positive fetal brains were suitable for histopathological examination. Of these 56 samples, 16 (28%) showed the characteristic lesions of Neospora infection, such as non-suppurative encephalitis. Other patterns of brain damage observed in this study were non-suppurative meningitis (5%), gliosis (focal and diffuse) (93%), satellitosis (2%), severe hyperemia (100%), hemorrhage (focal and diffuse) (51%), perivascular cuffing (63%), ischemic cell change (25%) and edema (100%). In one case, a Neospora -like cyst with dimensions of 35 μm×47.5 μm was observed ( Figure 3 ).
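The 2×2 association testing described in section 2.4 (chi-square, plus an odds ratio with a Wald 95% CI of the kind reported for stillbirth) can be sketched as follows. The counts in the example are hypothetical, not the study's data.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def odds_ratio_wald(a, b, c, d):
    """Odds ratio and Wald 95% CI for [[a, b], [c, d]] =
    [[exposed cases, exposed controls], [unexposed cases, unexposed controls]]."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts for illustration only.
chi2 = chi_square_2x2(20, 10, 10, 20)
or_, (lo, hi) = odds_ratio_wald(20, 10, 10, 20)
```

An association is reported as significant when the CI excludes 1 (equivalently, when the chi-square statistic exceeds its critical value).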
Statistical analysis indicated that the risk factor variables season, parity of dam, history of BVD and IBR occurrence in the herd, cow’s milk production, herd size and fetal appearance did not show a significant association with Neospora -caused abortion based on the PCR test ( P> 0.05), but two variables, fetal age and stillbirth, showed a significant association with the PCR results ( P< 0.001) ( Table 1 ). The stillbirth incidence was significantly lower in Neospora -infected fetuses than in non-infected ones. In addition, this study showed that Neospora -caused abortion was significantly more frequent in the second trimester of pregnancy than in other periods ( P< 0.001). All variables were entered into the logistic regression model, which indicated that fetal age was the only variable significantly associated with N. caninum -related abortion. The odds of stillbirth in non- Neospora -infected fetuses were two times greater than in Neospora -infected ones (OR: 2.043, 95% CI : 1.156-3.609, P= 0.04). 4 Discussion In the present study, we used PCR and histopathology to detect N. caninum in aborted bovine fetuses. Among the diagnostic procedures, PCR is more sensitive and specific than other tests such as histopathology and immunohistochemistry[ 14 ] and is less affected by autolysis and postmortem changes[ 15 ]. So far, this method has been used for determining the presence of N. caninum DNA in different embryonic tissues[ 16 ], fetal fluids[ 17,18 ] and even oocysts in the feces of final hosts[ 19 ]. PCR results, along with histopathological findings in aborted fetal tissues, can confirm the occurrence of abortion by N. caninum . In European countries and the USA, some studies indicated that up to 42% of aborted fetuses from dairy cattle were infected with N. caninum [ 16 ].
In some areas of Iran, studies have been conducted to investigate the contamination rate of the parasite in dairy herds, and infection rates of 18.4%[ 8 ], 11.9%[ 9 ], and 12.6%[ 10 ] were reported. In the present study, N. caninum DNA was detected in 45% (179) of brain samples from dairy farms in different provinces of Iran. This rate of infection is higher than the frequencies previously reported in Iran. The main characteristic lesion of Neospora infection, non-suppurative encephalitis[ 20–22 ], was observed in 16 (28%) of the 56 PCR-positive brain samples. Other lesions in the brain were often focal to diffuse non-suppurative meningitis, mononuclear cell infiltration around a central area of necrosis, glial proliferation and calcification[ 16,23 ]. In the present study, other histologic lesions, such as non-suppurative meningitis, gliosis, satellitosis, severe hyperemia, hemorrhage, perivascular cuffing, ischemic cell change, and edema, were observed in the brains of aborted fetuses. In our study, a Neospora -like cyst with dimensions of 35 μm×47.5 μm was observed. In bovine tissues, N. caninum cysts of 5–50 μm in diameter, with cyst wall thickness varying from <1 to 2.5 μm, have been reported[ 24 ]. The observed histopathological findings provide strong evidence for N. caninum -induced abortion in dairy herds of Iran. Many studies have been conducted to assess the risk factors associated with neosporosis in dairy cattle[ 25–29 ]. We investigated the associations of risk factors including parity, milk production, history of BVD and IBR infection in the herd, season, herd size, aborted fetal age and fetal appearance with abortion due to N. caninum in dairy cows. The results showed a significant association of fetal age and stillbirth with N. caninum infection. In previous studies, there was also a strong relationship of fetal age and stillbirth with Neospora seroprevalence in dairy herds[ 30,31 ]. According to our results, N.
caninum appears to be the most commonly detected and attributable cause of bovine abortion in dairy herds in Iran. Economic losses associated with neosporosis include abortion, neonatal mortality and early fetal death; these, together with the fact that the disease has proved challenging to control, underline the importance of neosporosis as a major cause of abortion on dairy farms. Conflict of interest statement We declare that we have no conflict of interest. Acknowledgements The study was supported by the research fund of Ferdowsi University of Mashhad, Mashhad, Iran ( Grant No. 3/25975 ). All the sampling and molecular diagnoses were performed in the Center of Excellence in Ruminant Abortion and Neonatal Mortality. We are very grateful to Mr. Mohhammdnejad for his technical assistance. | [
"DUBEY",
"SALEHI",
"KING",
"RAZMI",
"ALMERIA",
"MONNEY",
"HABIBI",
"NEMATOLLAHI",
"RAZMI",
"NOUROLLAHIFARD",
"HADDADZADEH",
"HOSSEININEJAD",
"MULLER",
"STUART",
"SUTEU",
"DUBEY",
"DOOSTI",
"BUXTON",
"RAZMI",
"SUTEU",
"BASSO",
"SANCHEZ",
"NISHIMURA",
"DUBEY",
"VANLEEUW... |
69fef80b30144744a32506c5f755fb80_Copyrights Page_10.1016_S2213-5979(14)00028-7.xml | Copyrights Page | [] | null | null | [] |
504f43118ade4c3493ad584638532027_A case of biopsy-proven chronic kidney disease on progression from acute phosphate nephropathy_10.1016_j.krcp.2012.04.320.xml | A case of biopsy-proven chronic kidney disease on progression from acute phosphate nephropathy | [
"Joo, Woo Chul",
"Lee, Seoung Woo",
"Yang, Dong Hyuk",
"Han, Jee Young",
"Kim, Moon-Jae"
] | Acute phosphate nephropathy (APhN) following oral sodium phosphate solution (OSP) ingestion as a bowel purgative has been frequently reported. It was recently suggested that APhN could progress to chronic kidney disease (CKD) and a history of APhN might be considered as one of the causes of CKD. However, there are few reports proving APhN as a cause of CKD. Here, we report a case of APhN that progressed to CKD, as proven by renal biopsy. | Introduction Colonoscopy is a widespread diagnostic tool for the evaluation of colonic diseases. Bowel preparation is an important factor in obtaining an accurate diagnosis. Patients use a purgative the day before the procedure to improve the accuracy of the diagnosis. Oral sodium phosphate (OSP) and polyethylene glycol (PEG) solutions are commonly used for bowel preparation because of their ease of use and their safety profile [1] . OSP has an advantage over PEG in terms of a lower fluid volume to be ingested (two bottles of 45 mL each). Initially, the adverse effects of this agent were unremarkable, especially in patients with normal kidney function. However, because of cases of OSP-induced acute renal failure, the safety of OSP has been questioned [2] . Furthermore, it was noted that acute renal failure can progress to chronic kidney disease (CKD) [3] . Here, we report a case of CKD caused by acute phosphate nephropathy (APhN) following use of OSP as a bowel purgative. Case report A 51-year-old woman was admitted to Inha University Hospital for evaluation of renal failure that was recently found at a local clinic. She had been diagnosed with essential hypertension 3 years previously, which was managed with hydrochlorothiazide 12.5 mg and candesartan 16 mg once a day. Her other previous medical history was unremarkable. She took ibuprofen 400 mg on the initial day of menstruation on a monthly basis for 30 years to relieve abdominal pain. Seven months previously, she underwent a health screening examination. 
Bowel preparation involved ingestion of OSP (Phospho Soda, Fleet, USA). After 8 hours of NPO, she visited the health promotion center at 09:00 hours. Blood and urine were sampled for laboratory examination. Then, esophagogastroduodenoscopy (EGD) and colonoscopy were performed under anesthesia using intravenous midazolam 5 mg. Serum blood urea nitrogen (BUN, 8.8 mg/dL), creatinine (0.86 mg/dL), calcium (9.0 mg/dL), phosphate (4.7 mg/dL), and albumin (4.3 g/dL) were all normal at that time. EGD and colonoscopy findings were normal. However, the patient suffered from persistent nausea and anorexia after the screening examination. Her physical examination showed no abnormalities. Blood pressure was 130/70 mmHg, pulse rate was 72/minute, and body temperature was 37 °C. Blood tests yielded the following results: hemoglobin 8.3 g/dL, platelets 350,000/μL, white blood cells 8170/μL, BUN 42.5 mg/dL, creatinine 2.95 mg/dL, calcium 8.5 mg/dL, phosphate 3.7 mg/dL, total protein 7.9 g/dL, albumin 4.0 g/dL, glucose 102 mg/dL, sodium 140 mEq/L, potassium 4.6 mEq/L, chloride 103 mEq/L, and total CO 2 21.4 mEq/L. C3 was 134 mg/dL, C4 was 34 mg/dL, and antinuclear antibody, anti-neutrophil cytoplasm antibody, cryoglobulin, RA factor, anti-ds DNA antibody, and anti-GBM antibody tests were all negative. Serum iron was 40 μg/dL, total iron-binding capacity was 303 μg/dL, ferritin was 28.7 μg/mL, and stool occult blood was negative. Serum and urine electrophoresis analyses were normal. Urine specific gravity was 1.009, and protein and blood were negative. The 24-hour urine volume was 3700 mL, with protein of 595 mg and creatinine of 1.0 g. A bone marrow biopsy showed no evidence of multiple myeloma or other diseases. Abdominal ultrasonography showed an increased renal cortical echo-texture. However, renal contour and size were normal (right 11.9 cm, left 11.3 cm, long axis; Fig. 1 ) and there was no evidence of a renal stone or hydronephrosis.
The patient was hydrated with 1–2 L of normal saline for 10 days. However, her serum creatinine level remained elevated (2.74 mg/dL). To find the cause of the renal failure, a kidney biopsy was performed on Day 10. Pathologic analysis of the kidney biopsy showed tubular atrophy or dilation ( Fig. 2 A). Multifocal scattered calcifications were present within the tubules, mainly in the distal tubules ( Fig. 2 B). The tubular epithelial cells were simplified and showed degenerative and regenerative changes. The interstitium showed a moderate degree of fibrosis and inflammatory cell infiltration, mainly of lymphocytes and some neutrophils ( Fig. 3 ). The tubular calcifications did not polarize and were positive on von Kossa stain ( Fig. 3 ), which confirmed their composition as calcium phosphate. The glomeruli appeared mildly enlarged and there were no capillary wall changes. Immunohistochemistry was negative for immunoglobulins and for complement factors. After the kidney biopsy, a further examination was performed to rule out other causes of nephrocalcinosis. Serum intact parathyroid hormone (iPTH) decreased from an initial level of 186 pg/mL to 54.6 pg/mL after 2 months. Results for 24-hour urinary analytes were as follows: sodium, 98 mEq/day (normal range 40–220 mEq/day); uric acid, 425 mg/day (250–750 mg/day); citric acid, 16 mg/day (320–1240 mg/day); oxalate, 3.36 mg/day (16.20–53.30 mg/day); calcium, 35.0 mg/day (70.0–180.0 mg/day); and creatinine, 0.9 g/day (1.0–2.0 g/day). Supportive treatment was given and the patient's kidney function remained unchanged 3 months later, with a creatinine level of 2.21 mg/dL. Discussion This case shows that exposure to OSP as a bowel purgative is associated with the development of APhN in patients with risk factors and can progress to CKD. Furthermore, in CKD patients with unknown cause, clinicians should consider that use of an OSP bowel purgative may be one of the possible causes of the development of CKD.
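For orientation, the follow-up creatinine can be translated into an estimated GFR to stage the residual impairment. The sketch below uses the CKD-EPI 2009 creatinine equation; the report itself does not state an eGFR, so this is an illustration rather than the authors' calculation.

```python
def ckd_epi_2009(scr_mg_dl, age_years, female):
    """CKD-EPI 2009 creatinine equation (mL/min/1.73 m^2), omitting the race
    coefficient of the original publication:
    eGFR = 141 * min(Scr/k, 1)^alpha * max(Scr/k, 1)^-1.209
           * 0.993^age * (1.018 if female)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    return egfr * 1.018 if female else egfr

# Follow-up creatinine of 2.21 mg/dL in the 51-year-old woman described above.
egfr = ckd_epi_2009(2.21, 51, female=True)
```

This works out to roughly 25 mL/min/1.73 m², i.e. CKD stage 4, consistent with the persistent renal impairment described, while the pre-colonoscopy creatinine of 0.86 mg/dL corresponds to an eGFR well above 60.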
OSP preparations are osmotic purgatives that obligate water excretion into the intestinal lumen to maintain its isotonicity with plasma [2] . Retention of water in the bowel lumen results in peristalsis and colonic evacuation. The high osmolarity of OSP is due to its high sodium and phosphate content [2] . Each 45-mL dose of Phospho Soda contains 5 g of sodium and 17.8 g of phosphate, yielding a solution with osmolarity of 16,622 mOsm/L [2] . Various complications, including hyperphosphatemia, hypocalcemia, hypernatremia, hypokalemia, metabolic acidosis, and acute kidney injury (AKI), have been reported in association with OSP ingestion [4] . Hyperphosphatemia is considered to be a major cause of OSP-induced AKI. Therefore, Desmeules et al. used the term APhN for AKI following OSP use as a bowel purgative [5] . The incidence has been reported as 1.29–5.0% [6,7] . Calcium phosphate deposition in renal tubules and interstitium appears to be a mechanism underlying APhN and has been demonstrated histopathologically using von Kossa stain [7] . Nephrocalcinosis occurs when calcium precipitates in conjunction with either oxalate or phosphate [2] . The phosphates in calcium phosphate deposits are detected in paraffin sections using von Kossa stain [2] , which involves a precipitation reaction in which silver ions react with phosphate to yield a black product. This stain does not detect pure calcium oxalate [2] . In the case of hyperparathyroidism, calcium salts are typically found along medullary tubular basement membranes, as concretions within tubules, and in the interstitium [4] . By contrast, the calcium phosphate deposits seen in APhN are found primarily in the tubular lumen and the cytoplasm of tubular epithelial cells, with rare interstitial deposits as well [8] . Hypercalciuria and hyperoxaluria are also risk factors for nephrocalcinosis [2] . 
In our case, the elevated serum iPTH level seemed to be secondary to CKD, and the biopsy showed calcium phosphate deposits primarily in the tubular lumen, the pattern described for APhN rather than for hyperparathyroidism. Our case showed neither hypercalciuria nor hyperoxaluria. Among patients with CKD of unknown cause, Chinese herb nephropathy has been reported as an etiologic agent, characterized by progressive tubulointerstitial nephritis and development of end-stage renal disease within 1–2 years [9,10] ; however, our patient had no history of herb ingestion. Following intake of a large amount of phosphate, proximal tubular phosphate reabsorption is decreased, leading to a rapid increase in delivery to the distal tubule [11] . Diarrhea-induced hypovolemia leads to avid salt and water reabsorption in both the proximal tubule and the descending limb of Henle's loop, which is relatively impermeable to calcium and phosphate [12] . The net effect is a decrease in phosphate reabsorption in the proximal tubule, leading to a marked increase in calcium–phosphorus products within the lumen of the distal tubule [8] . Hypovolemia-associated tubular injury may also precondition the distal tubular epithelium, leading to surface expression of hyaluronan and osteopontin, which in turn creates a suitable environment for adherence of calcium phosphate crystals [12] . A direct toxic effect of hyperphosphatemia on renal tubules is proposed as one of the mechanisms of renal failure after OSP use. In a rat model of APhN involving intravenous administration of phosphorus, Zager et al. observed vacuolization of tubular cells without calcium phosphate deposition in renal biopsies [16] . Old age, female gender, liver disease, renal disease, decreased bowel motility, hypertension, diabetes, and medications contributing to renal hypoperfusion, such as diuretics, angiotensin-converting enzyme inhibitors and angiotensin (AT)-II receptor blockers, could predispose to this disorder [11] .
In our case, the patient had the risk factors of female gender, hypertension, and AT-II receptor blocker use. The development of CKD in APhN following the use of OSP has been reported [12] . The pathophysiology has not yet been elucidated, but it appears to be associated with activation of the innate immune system in mediating the pro-inflammatory effects of tissue crystal deposition [2] . Pattern recognition receptors, such as the Toll-like receptor (TLR), mediate monosodium urate monohydrate-dependent nitric oxide and IL-1β release from chondrocytes and macrophages, respectively [2] . Given the ability of specific TLRs to recognize calcium crystals and the expression of a variety of TLRs in the adult kidney, it is possible that intraluminal calcium phosphate crystal formation after OSP is recognized by specific epithelial TLRs, with subsequent activation of the innate immune response [13] . Persistence of these crystals could result in chronic inflammation with extracellular matrix deposition and interstitial fibrosis [14] . We speculate that the iron deficiency anemia in our case was caused by decreased iron intake due to persistent anorexia. To date, there have been only a few reports in Korea of CKD after APhN following OSP ingestion as a bowel purgative. Kim et al. reported a case of APhN following OSP use as a bowel purgative in a 66-year-old female [15] . We report a biopsy-proven case of APhN that progressed to CKD in a female patient who had normal renal function prior to colonoscopy. Conflict of interest None declared.
"VANNER",
"HEHER",
"MARKOWITZ",
"JENNETTE",
"DESMEULES",
"HURST",
"RUSSMANN",
"MARKOWITZ",
"DEBELLE",
"NAM",
"MARKOWITZ",
"VERHULST",
"LIUBRYAN",
"WYNN",
"KIM",
"ZAGER"
] |
dd571ac4add44984b835599b3b4b5686_B-FABP-Expressing Radial Glial Cells The Malignant Glioma Cell of Origin_10.1593_neo.07439.xml | B-FABP-Expressing Radial Glial Cells: The Malignant Glioma Cell of Origin? | [
"Mita, Raja",
"Coles, Jeffrey E.",
"Glubrecht, Darryl D.",
"Sung, Rohyun",
"Sun, Xuejun",
"Godbout, Roseline"
] | Brain fatty acid-binding protein (B-FABP) is normally expressed in radial glial cells, where it plays a role in the establishment of the radial glial fiber network required for neuronal migration. B-FABP is also expressed in astrocytoma tumors and in some malignant glioma cell lines. To address the role of B-FABP in malignant glioma, we have studied the growth properties of clonal populations of malignant glioma cells modified for B-FABP expression. Here, we demonstrate that expression of B-FABP in B-FABP-negative malignant glioma cells is accompanied by the appearance of radial glial-like properties, such as increased migration and extended bipolar cell processes, as well as reduced transformation. Conversely, B-FABP depletion in B-FABP-positive malignant glioma cells results in decreased migration, a reduction in cell processes, and a more transformed phenotype. Moreover, expression of B-FABP in astrocytomas is associated with regions of tumor infiltration and recurrence. Rather than being a direct manifestation of the tumorigenic process, we propose that the ability of high-grade astrocytoma cells to migrate long distances from the primary tumor reflects properties associated with their cell of origin. Thus, targeting B-FABP-expressing cells may make a significant impact on the treatment of these tumors. | null | [
"OHGAKI",
"PASQUIER",
"COLLIGNON",
"SANAI",
"GODBOUT",
"FENG",
"KURTZ",
"SCHMECHEL",
"GOLDMAN",
"GLATZ",
"LAWRIE",
"DAVIDSON",
"CELIS",
"ADAMSON",
"LIANGYDIEHN",
"SCHMID",
"KONDRAGANTI",
"BIGNER",
"LAUFFENBURGER",
"FUKATA",
"LI",
"WATANABE",
"YANG",
"KALOSHI",
"LIANGY... |
c958d32df3d84fe19cc3b5a4b70232e2_A chemically defined substrate for the expansion and neuronal differentiation of human pluripotent s_10.1016_j.scr.2015.05.002.xml | A chemically defined substrate for the expansion and neuronal differentiation of human pluripotent stem cell-derived neural progenitor cells | [
"Tsai, Yihuan",
"Cutts, Josh",
"Kimura, Azuma",
"Varun, Divya",
"Brafman, David A."
] | Due to the limitation of current pharmacological therapeutic strategies, stem cell therapies have emerged as a viable option for treating many incurable neurological disorders. Specifically, human pluripotent stem cell (hPSC)-derived neural progenitor cells (hNPCs), a multipotent cell population that is capable of near indefinite expansion and subsequent differentiation into the various cell types that comprise the central nervous system (CNS), could provide an unlimited source of cells for such cell-based therapies. However, the clinical application of these cells will require (i) defined, xeno-free conditions for their expansion and neuronal differentiation and (ii) scalable culture systems that enable their expansion and neuronal differentiation in numbers sufficient for regenerative medicine and drug screening purposes. Current extracellular matrix protein (ECMP)-based substrates for the culture of hNPCs are expensive, difficult to isolate, subject to batch-to-batch variations, and, therefore, unsuitable for clinical application of hNPCs. Using a high-throughput array-based screening approach, we identified a synthetic polymer, poly(4-vinyl phenol) (P4VP), that supported the long-term proliferation and self-renewal of hNPCs. The hNPCs cultured on P4VP maintained their characteristic morphology, expressed high levels of markers of multipotency, and retained their ability to differentiate into neurons. Such chemically defined substrates will eliminate critical roadblocks for the utilization of hNPCs for human neural regenerative repair, disease modeling, and drug discovery. | Introduction Several neurodegenerative diseases and neural-related disorders are characterized by damage to cells in the central nervous system (CNS).
Human neural progenitor cells (hNPCs) derived from human pluripotent stem cells (hPSCs, including human embryonic stem cells [hESCs] and human induced pluripotent stem cells [hiPSCs]) can proliferate extensively and differentiate into all the neural lineages and supporting cells (i.e. neurons, astrocytes, and oligodendrocytes) that comprise the central nervous system ( Chambers et al., 2009; Elkabetz et al., 2008; Shin et al., 2006 ). As such, there is great interest in the use of hNPCs for a variety of applications. First, hNPCs provide a unique opportunity to explore complex neural development processes in a simplified and accessible system. For example, hNPCs can provide an unlimited source of neurons that can be used for a multitude of research studies ranging from cellular electrophysiology to protein biochemistry. Second, hNPCs and their derivatives generated from patients with genetic neural diseases can be used to provide important insights into disease pathology, progression, and mechanism ( Marchetto et al., 2010; Imaizumi and Okano, 2013 ). Third, the ability to generate large quantities of human neural cells will enable the development of compounds and the screening of potential drugs for neurotoxicity ( Betts, 2010; Bosnjak, 2012; Wilson et al., 2014 ). Lastly, because of the limited regenerative potential of the CNS, there is great promise for the use of hNPCs in cell replacement therapies ( Yuan et al., 2011; Kakinohana et al.; Lu et al.; Hefferan et al., 2012 ). However, the use of hNPCs for such applications requires the development of efficacious and cost-effective defined culture systems for their large-scale expansion and differentiation. The growth and differentiation of hNPCs depend on their microenvironment, including the chemical and physical properties of the extracellular matrix (ECM).
However, current substrates used for hNPC expansion and differentiation, such as laminin (LN), are expensive, difficult to isolate, vary between lots, and contain xenogenic components which limit their use for clinical applications. Moreover, it has been reported that the heterogeneous composition of currently available matrices can lead to variable hNPC expansion rates, non-homogenous hNPC expansion, and inability of hNPCs to respond to differentiation signals ( Bouhon et al., 2006; Li et al. ). These limitations are a significant bottleneck in the clinical application of these cells where large quantities of homogenous hNPC and neuronal populations are required. In contrast, synthetic, polymer-based substrates that are inexpensive and easily fabricated represent a reliable alternative for the expansion and differentiation of hNPCs. Polymeric biomaterials have been utilized as substrates for the growth of a variety of adult stem cell types such as hematopoietic ( Bagley et al., 1999; Banu et al., 2001; Berrios et al., 2001; Ehring et al., 2003 ) and mesenchymal stem cells ( Curran et al., 2006; Kotobuki et al., 2008; Zhao et al., 2006; Fan et al., 2006; Richardson et al., 2008 ). More recently, we and others have developed polymeric materials that support the in vitro expansion of hPSCs in defined conditions ( Villa-Diaz et al., 2010; Brafman et al., 2010; Mei et al., 2010; Zhang et al., 2013 ). However, polymeric materials as artificial matrices to support the growth and differentiation of hNPCs have not been developed. In this study, we used a high-throughput screening approach to systematically screen a diverse library of synthetic polymers for their ability to support hNPC growth. Using this approach, we identified one polymer, poly(4-vinyl phenol) (P4VP), that supported the long-term growth and multipotency of hNPCs. In addition, P4VP was able to support the directed neuronal differentiation of hNPCs. 
This is the first example of long-term culture and neuronal differentiation of hNPCs on a chemically defined substrate free from exogenous extracellular matrix proteins (ECMPs). Materials and methods hNPC generation and culture hNPCs were derived from HUES9, H9, HES3 and RiPSC ( Warren et al., 2010 ) hPSCs as previously described ( Brafman, 2014 ). Briefly, to initiate neural differentiation, hPSCs were cultured on Matrigel (BD Biosciences) in TeSR TM 2 defined medium (Stem Cell Technologies). Cells were then detached by treatment with Accutase (Millipore) for 5 min and resuspended in neural induction media [(1% N2/1% B27 without vitamin A/DMEM:F12), 50 ng/ml recombinant mouse Noggin (R&D Systems), 0.5 μM Dorsomorphin (Tocris Bioscience)] and 5 μM Y-27632 (Stemgent). Next, 7.5 × 10 5 cells were pipetted into each well of a 6-well ultra-low attachment plate (Corning). The plates were then placed on an orbital shaker set at 95 rpm in a 37 °C/5% CO 2 tissue culture incubator. The next day, the cells formed spherical clusters (embryoid bodies [EBs]) and the media was changed to neural induction media. The media was subsequently changed every other day. After 5 days in suspension culture, the EBs were then transferred to a 10 cm dish (3 × 6 wells per 10 cm dish) coated with growth factor reduced Matrigel (1:25 in KnockOut DMEM; BD Biosciences) for attachment. The plated EBs were cultured in neural induction media for an additional 7 days. Neural rosettes were cut out by dissection under an EVOS (Life Technologies) microscope. Dissected rosettes were incubated in Accutase for 5 min and then triturated to single cells with a 1 ml pipet. Rosettes were then plated onto poly-L-ornithine (PLO; 10 μg/ml; Sigma) and mouse laminin (LN; 5 μg/ml; Sigma) coated dishes at a density of 12,500 cells/cm 2 in neural expansion media [(1% N2/1% B27 without vitamin A/DMEM:F12), 10 ng/ml mouse FGF2 (R&D Systems), and 10 ng/ml mouse EGF (R&D Systems)].
For routine maintenance, hNPCs were passaged onto PLO/LN-coated plates at a density of 10,000 cells/cm 2 in neural expansion media. Polymer array fabrication Arrays of polymers were fabricated as previously described ( Brafman et al., 2010; Brafman et al., 2012 ). Briefly, glass slides were cleaned, silanized, and then functionalized with a polyacrylamide gel layer. Polymers were dissolved in the appropriate solvents (PBS, DMSO, DMF, or toluene) at a final concentration of 40 mg/ml. A list of polymers screened is provided in Supplementary Table 1 . Polymers 1–10 were synthesized by free radical polymerization, polymers 11–80 were purchased from Sigma-Aldrich, and polymers 81–89 were purchased from PolySciences. A contact arrayer (SpotBot® 2, Arrayit®) was used to print the polymers. The printing conditions were a 1000 ms inking time and a 250 ms stamping time. Each spot had a diameter of 150–200 μm and neighboring spots were separated by a center-to-center distance of 450 μm. Slides were inspected manually under a light microscope for consistent and uniform polymer deposition. LN (250 μg/ml) was spotted as a control and served as a reference to compare experiments from non-identical arrays. Prior to their use, array slides were soaked in PBS while being exposed to UVC germicidal radiation in a sterile flow hood for 10 min. Culturing of hNPCs on polymer arrays hNPCs were passaged (1.0 × 10 6 cells per slide) directly onto the slides and allowed to settle on the spots for 24 h prior to rinsing with neural expansion media 3 times to remove residual cells and debris. hNPCs were cultured on the arrays for 7 days in neural expansion media. Culture media and growth factors were replenished daily. Array slide staining, imaging, and quantification After 7 days of culture, the arrays were fixed for 15 min at room temperature (RT) with fresh paraformaldehyde (4% (w/v)). The arrays were washed twice with PBS and permeabilized with 0.2% (v/v) Triton-X-100 in PBS for 20 min at 4 °C. Cultures were then washed twice with PBS. Primary antibodies were incubated overnight at 4 °C and then washed twice with PBS at RT. Secondary antibodies were incubated at RT for 1 h. Antibodies used are listed in Supplementary Table S5 . Nucleic acids were stained for DNA with Hoechst 33342 (2 μg/ml; Life Technologies) for 5 min at room temperature. Arrays were imaged using a CellInsight TM CX5 (Thermo Scientific) automated high-content screening platform. The system was programmed to visit each spot on the array, perform autofocus, and acquire Hoechst, FITC (SOX1), and Cy5 (NESTIN) images. Cell counts and stain intensities were measured using Thermo Scientific TM HCS Studio TM 2.0 Software using the built-in object identification and cell intensity algorithms.
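The printing geometry described above (150–200 μm spots at a 450 μm center-to-center pitch) implies a simple rectangular grid of spot centers. A hypothetical Python sketch of how such coordinates could be generated; the function name and grid origin are illustrative assumptions, not part of the published protocol:

```python
def spot_centers(rows, cols, pitch_um=450.0, origin=(0.0, 0.0)):
    """Grid of spot-center coordinates (in micrometers) for a contact arrayer,
    using the 450 um center-to-center distance described in the text."""
    x0, y0 = origin
    # Row-major order: all columns of row 0, then row 1, etc.
    return [(x0 + c * pitch_um, y0 + r * pitch_um)
            for r in range(rows) for c in range(cols)]

# A 2 x 3 block of spots starting at the slide origin
grid = spot_centers(2, 3)
```

With a 150–200 μm spot diameter, the 450 μm pitch leaves at least 250 μm of bare acrylamide between neighboring spots, consistent with treating each spot as an isolated culture site.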
hNPC expansion and neuronal differentiation on polymer-coated slides In order to coat the polyacrylamide-coated glass slides with the 'hit' polymers, 120 μl of each polymer (40 mg/ml) was placed dropwise into the center of the glass slide. A coverslip was placed on top of the glass slide to allow for uniform coating of the polymer. The polymer-coated slide was incubated for 1 h at 37 °C. The coverslip was then removed and the slide was washed 3 times with PBS prior to use. The presence of coated polymer was verified by FTIR-ATR ( Supplementary Fig. 1 ). FTIR spectra were acquired on a Nicolet 6700 with Smart-iTR using a N 2 purged sample chamber. The acquisition parameters were 128 scans at 4 cm −1 spectral resolution. For hNPC expansion studies on polymer-coated slides, hNPCs were passaged (1.2 × 10 6 cells per slide) directly from PLO/LN plates onto the polymer-coated slides. hNPCs were maintained in neural expansion media. Culture media and growth factors were replenished daily. hNPCs were routinely passaged onto new polymer-coated slides every 5 days. For hNPC neuronal differentiation studies on polymer-coated slides, hNPCs were cultured in neuronal differentiation media [(0.5% N2/0.5% B27 without vitamin A/DMEM:F12), 20 ng/ml BDNF (R&D Systems), 20 ng/ml GDNF (R&D Systems), 1 μM DAPT (Tocris Bioscience), and 0.5 mM dibutyryl-cAMP (db-cAMP; Sigma)] for 4 weeks. Quantitative PCR (QPCR) RNA was isolated from cells using the NucleoSpin® RNA Kit (Clontech). Reverse transcription was performed with qScript cDNA Supermix (Quanta Biosciences). Quantitative PCR was carried out using TaqMan® Assays or SYBR® green dye on a BioRad CFX96 Touch TM Real-Time PCR Detection System. For the QPCR experiments run with TaqMan® Assays, a 10 min gradient to 95 °C followed by 40 cycles at 95 °C for 5 s and 60 °C for 30 s was used. For QPCR experiments run with SYBR® green dye, a 2 min gradient to 95 °C followed by 40 cycles at 95 °C for 15 s and 60 °C for 1 min was used. The list of TaqMan® assays and primer sequences used is provided in Supplementary Table S4 . Gene expression was normalized to 18S rRNA levels. Delta Ct values were calculated as ΔCt = Ct(target) − Ct(18S).
All experiments were performed with three technical replicates. Relative fold changes in gene expression were calculated using the 2 −ΔΔCt method ( VanGuilder et al., 2008 ). Data are presented as the average of the biological replicates ± standard error of the mean (SEM). Immunofluorescence Cultures were gently washed twice with staining buffer (PBS w/1% (w/v) BSA) prior to fixation. Cultures were then fixed for 15 min at room temperature (RT) with fresh paraformaldehyde (4% (w/v)). The cultures were washed twice with staining buffer and permeabilized with 0.2% (v/v) Triton-X-100 in stain buffer for 20 min at 4 °C. Cultures were then washed twice with staining buffer. Primary antibodies were incubated overnight at 4 °C and then washed twice with stain buffer at RT. Secondary antibodies were incubated at RT for 1 h. Antibodies used are listed in Supplementary Table S5 .
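The relative-expression calculation described in the QPCR methods (ΔCt = Ct(target) − Ct(18S) for each condition, then fold change = 2^−ΔΔCt per VanGuilder et al., 2008) can be sketched directly in Python. The Ct values in the example are hypothetical:

```python
def ddct_fold_change(ct_target_sample, ct_18s_sample,
                     ct_target_control, ct_18s_control):
    """Relative expression by the 2^-ddCt method, as in the text:
    dCt = Ct(target) - Ct(18S) per condition; fold = 2^-(dCt_sample - dCt_control)."""
    dct_sample = ct_target_sample - ct_18s_sample
    dct_control = ct_target_control - ct_18s_control
    return 2 ** -(dct_sample - dct_control)

# Hypothetical Ct values: the target amplifies one cycle earlier in the sample
# than in the control (same 18S Ct), i.e. ~2-fold higher expression.
fold = ddct_fold_change(24.0, 10.0, 25.0, 10.0)
```

Because each PCR cycle ideally doubles the product, a one-cycle shift in ΔCt corresponds to a two-fold change in expression.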
Nucleic acids were stained for DNA with Hoechst 33342 (2 μg/ml; Life Technologies) for 5 min at room temperature. Imaging was performed using an automated confocal microscope (Olympus Fluoview 1000 with motorized stage). Quantification of images was performed by counting a minimum of 9 fields at 20 × magnification. Image quantification of the data is presented as the average of these fields ± standard deviation (SD). Cell length and cell area measurements were conducted on 45 cells from 3 fields at 20 × magnification at each time point using Image J. Population doubling time Population doubling time of hNPCs cultured on LN and P4VP substrates was calculated using the following equation: PDT ( h ) = (T2 − T1) / (3.32 * [log(N2) − log(N1)]).
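The doubling-time formula above can be implemented directly; note that 3.32 ≈ 1/log10(2), so the denominator is simply the number of population doublings between the two counts. A sketch with hypothetical cell counts:

```python
import math

def population_doubling_time(t1_h, t2_h, n1, n2):
    """Population doubling time in hours, using the formula from the text:
    PDT = (T2 - T1) / (3.32 * [log10(N2) - log10(N1)]).
    Since 3.32 ~ 1/log10(2), this is elapsed time divided by doublings."""
    return (t2_h - t1_h) / (3.32 * (math.log10(n2) - math.log10(n1)))

# Hypothetical counts: 1e5 cells grow to 8e5 cells in 72 h (three doublings),
# so the doubling time should be ~24 h.
pdt = population_doubling_time(0.0, 72.0, 1e5, 8e5)
```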
Flow cytometry Cells were dissociated with Accutase for 5 min at 37 °C, triturated, and passed through a 40 μm cell strainer. Cells were then washed twice with FACS buffer (PBS, 10 mM EDTA, and 2% FBS) and resuspended at a maximum concentration of 5 × 10 6 cells per 100 μl. One test volume of antibody was added for each 100 μl cell suspension ( Supplementary Table S5 ). Cells were stained for 30 min on ice, washed, and resuspended in stain buffer. Cells were analyzed and sorted with a FACSCanto (BD Biosciences). Flow cytometry data was analyzed with FACSDiva software (BD Biosciences). Isotype negative controls are listed in Supplementary Table S5 . Results Polymer microarray screen with hNPCs We have previously described the development of a high-throughput microarray technology for the systematic investigation of the effects of polymeric biomaterials on stem cell fate ( Brafman et al., 2010 ). Briefly, we used a contact DNA microarray spotting instrument to deposit nanoliter amounts of polymer solutions onto glass microscope slides coated with a layer of acrylamide gel (~ 10 μm thick). The spotted polymer chains are mechanically interlocked with the underlying acrylamide gel and anchored in place after solvent evaporation. Each polymer spot is 150–200 μm in diameter with a center-to-center distance of 450 μm.
Cells are globally seeded onto the arrays and, due to the non-fouling nature of the acrylamide hydrogel, cells only adhere to sites of polymer deposition. As we have previously shown ( Brafman et al., 2010; Brafman et al., 2012; Brafman et al., 2009a; Brafman et al., 2013; Brafman et al., 2009b ), paracrine signaling between neighboring spots is limited and each spot can be treated as an independent 'well'. Combined with high-content automated microscopy, which allows for quantitative single-cell analysis, this technology can be used to screen the effect of thousands of unique polymer compositions on the fate of any cell type of interest on a single microscope slide. We utilized this array technology to identify polymers that support hNPC adhesion, growth, and multipotency ( Fig. 1 A ). hNPCs were derived from hPSCs and expanded on laminin (LN)-coated substrates in the presence of bFGF and EGF as previously described ( Brafman, 2014 ). Prior to seeding onto the array, expanded hNPCs were routinely assessed for their characteristic morphology and maintenance of markers of hNPC multipotency such as SOX1 and NESTIN ( Fig. 1 A). Biomaterials often mediate cell adhesion through integrin-mediated interactions between the cells and extracellular matrix proteins (ECMPs) that have been adsorbed from the surrounding media ( Shin et al., 2003 ). To eliminate the undefined adsorption of these soluble ECMPs, we performed all of our screens in the presence of a medium (see Materials and methods ) which contained no soluble ECMPs. As such, any observed interaction between a specific polymer and hNPCs would be independent of exogenously adsorbed soluble ECMPs. To seed the polymer arrays, hNPCs were enzymatically dissociated into single cells, and cell suspensions were allowed to settle onto the polymer spots for 24 h. Subsequently, the medium was replaced to remove non-adhering cells and debris.
Seeding the slide with 1.0 × 10 6 hNPCs allowed for the attachment of 5–10 cells per spot and provided sufficient area for growth. hNPCs were allowed to grow for 7 days in the presence of FGF2 and EGF, which maintain hNPCs in a proliferative, multipotent state. After 7 days, the cells were stained for the hNPC multipotency markers SOX1 and NESTIN and high-content imaging was used to count cells and measure SOX1 and NESTIN intensities at a single-cell level ( Fig. 1 A). Using the polymer array technology, we screened a library of diverse polymers with varying functional groups, charge density, hydrophobicity, and molecular weight ( Supplemental Table 1 ). For instance, since heparin modulates the activity of bFGF and EGF ( Burgess and Maciag, 1989; Bellosta et al., 2001 ), growth factors that play crucial roles in self-renewal of NPCs, this library contains several heparin-mimicking polymers with sulfate and carboxyl functional groups. Our approach consisted of two rounds of screening, resulting in the identification of several ‘hits’, which we subsequently scaled-up and assessed for their ability to support long-term hNPC expansion ( Fig. 1 B). In the first round of screening, each polymer was tested individually at a single concentration (40 mg/ml). As a control we used LN, which supports the adhesion and growth of hNPCs. After 7 days, cells were stained with a DNA stain (Hoechst) and the number of cells per spot was counted by automated high-content imaging ( Fig. 2 A and Supplemental Table 2 ). For the first round of screening, a ‘hit’ was defined as a polymer supporting the average cell number (across 5 replicate spots) ≥ 75% of the average cell number on the control LN spots. By this criterion, we identified 17 polymer ‘hits’ in our first round of screening ( Fig. 2 B). 
In the second round of screening, these top 'hits' were rescreened and assessed for their ability to support the short-term growth (determined by cell number) and multipotency (determined by SOX1 and NESTIN expression) of hNPCs ( Fig. 2 C and Supplemental Table 3 ). For the second round of screening, a 'hit' was defined as a polymer supporting the average cell number and SOX1/NESTIN expression (across 8 replicate spots) ≥ 90% of hNPCs grown on control LN spots ( Fig. 2 D). From this second screen, 4 polymers demonstrated the ability to support growth and maintenance of multipotency at similar levels as LN ( Fig. 2 E).
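The hit-calling rule used in both screening rounds (mean cell count across replicate spots ≥ 75% of the LN control in round 1, ≥ 90% plus marker expression in round 2) reduces to a threshold comparison against the control mean. A Python sketch with invented spot counts; only the thresholds come from the text:

```python
def call_hits(spot_counts, ln_counts, threshold):
    """Round-1-style hit call: a polymer is a 'hit' when its mean cell count
    across replicate spots is >= threshold * mean count on the LN control spots.
    spot_counts: {polymer: [count per replicate spot]}; ln_counts: LN replicates."""
    ln_mean = sum(ln_counts) / len(ln_counts)
    hits = []
    for polymer, counts in spot_counts.items():
        if sum(counts) / len(counts) >= threshold * ln_mean:
            hits.append(polymer)
    return hits

# Hypothetical counts across 5 replicate spots (round 1 uses threshold 0.75)
screen = {"P4VP": [40, 44, 38, 42, 41], "polymer_X": [10, 12, 9, 11, 8]}
hits = call_hits(screen, ln_counts=[45, 50, 48, 47, 46], threshold=0.75)
```

The same function applied with `threshold=0.90` (and an analogous comparison for SOX1/NESTIN intensity) would reproduce the round-2 criterion.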
Long-term culture of hNPCs on defined polymer substrate In order to investigate the scalability of the 4 ‘hit’ polymers’ physicochemical properties and their ability to support long-term hNPC growth, acrylamide gel-coated slides were coated with the ‘hit’ polymers by thermal evaporation (see Materials and methods ). The presence of the polymer coating was verified by FTIR-ATR ( Supplemental Fig. 1 ). hNPCs were enzymatically detached from LN-coated substrates and passaged onto the polymer-coated slides. Detachment or spontaneous differentiation, as indicated by changes in morphology, was observed on all the ‘hit’ polymers with the exception of poly(4-vinyl phenol) (P4VP; Fig. 3 A and B). 
Next, we tested the ability of P4VP to support hNPC expansion over 10 passages (~ 50 days). hNPCs cultured on P4VP substrates maintained their characteristic morphology over the 10 passages ( Fig. 4 A ). Additionally, hNPCs cultured on P4VP exhibited similar growth dynamics ( Fig. 4 B) and doubling time ( Fig. 4 C) to cells cultured on LN substrates. Moreover, the rate of cell growth on P4VP substrates remained stable during the 10 passages ( Fig. 4 D). In fact, cell counts taken at each passage revealed that 1 × 10⁶ hNPCs could theoretically be expanded to 1 × 10⁹ hNPCs over 10 passages ( Fig. 4 D). Maintenance of a hNPC phenotype was assessed by quantitative RT-PCR for the hNPC markers SOX1 , SOX2 , and NESTIN ( Fig. 4 E). hNPCs grown on P4VP maintained expression of these markers at levels similar to that of cells grown on LN substrates. Along similar lines, flow cytometry ( Fig. 4 F) and immunostaining ( Fig. 4 G) revealed that hNPCs cultured on P4VP-coated substrates for 10 passages continued to express high levels of the hNPC markers SOX1, SOX2, and NESTIN. Together, these results demonstrate the ability of P4VP to support the long-term culture of hNPCs. Because P4VP could support the long-term growth of hNPCs, we wanted to determine if P4VP could be used as a matrix for the derivation of hNPCs. To that end, we plated EBs directly onto LN- and P4VP-coated substrates. In comparison to EBs plated onto Matrigel-coated plates, EBs plated directly onto LN or P4VP matrices failed to form neuroepithelial-like rosettes ( Supplemental Fig. 2 A). Similarly, while dissected rosettes plated onto LN substrates resulted in the formation of hNPCs, rosettes replated on P4VP-coated surfaces did not result in the generation of hNPCs. However, dissociated rosettes initially replated onto LN substrates and then subsequently transferred to P4VP substrates resulted in the formation of cells representative of hNPCs ( Supplemental Fig. 2 B). 
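The theoretical expansion figure quoted above (1 × 10⁶ to roughly 1 × 10⁹ hNPCs over 10 passages, ~50 days) is plain exponential-growth arithmetic. A short Python sketch with illustrative numbers; a uniform 2-fold expansion per 5-day passage is an assumption made here for illustration, not the measured growth data:

```python
import math

# Generic growth arithmetic behind the expansion estimate in the text:
# cumulative yield after N passages is seed * (fold per passage)^N, and the
# doubling time follows from the total fold expansion over the culture period.
def cumulative_yield(seed_cells, fold_per_passage, n_passages):
    return seed_cells * fold_per_passage ** n_passages

def doubling_time_days(total_fold_expansion, culture_days):
    """Doubling time implied by a total fold expansion over a culture period."""
    doublings = math.log2(total_fold_expansion)
    return culture_days / doublings

# A 2-fold expansion per passage gives ~10^3-fold growth in 10 passages,
# i.e. 1e6 seed cells -> ~1e9 cells (illustrative numbers only).
final = cumulative_yield(1e6, 2.0, 10)    # 1.024e9 cells
td = doubling_time_days(2.0 ** 10, 50)    # 5.0 days per doubling
print(f"{final:.3g} cells, doubling time {td:.1f} d")
```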
In fact, hNPCs continually cultured on PLO/LN or transitioned to P4VP substrates displayed similar morphology ( Supplemental Fig. 2 B), growth rate ( Supplemental Fig. 2 C and D), and expression of hNPC markers SOX2 and NESTIN ( Supplemental Fig. 2 E). Neuronal differentiation of hNPCs on defined polymer substrate Because of the dynamic nature of cell–substrate interactions during stem cell differentiation ( Brafman et al., 2013; Marthiens et al., 2010; Dalby et al., 2014; Wojcik-Stanaszek et al., 2011 ), the same synthetic substrate may not be suitable for both hNPC expansion and neuronal differentiation. To that end, we assessed the ability of P4VP to support the directed neuronal differentiation of hNPCs. Specifically, hNPCs were seeded onto P4VP substrates and differentiated to neurons through the withdrawal of bFGF and EGF, and the addition of the neuronal-inducing factors brain-derived neurotrophic factor (BDNF), GDNF, db-cAMP, and the Notch inhibitor DAPT. 
After 4 weeks in the presence of neuronal induction factors, cells cultured on P4VP substrates acquired a neuronal morphology ( Fig. 5 A ). Immunofluorescence revealed that a high percentage of the cells stained positive for the pan-neuronal markers NeuN ( Fig. 5 B), neurofilament-68 (NF-L; Fig. 5 C), microtubule-associated protein 2 (MAP2; Fig. 5 D) and β-Tubulin-III (β3T; Fig. 5 E). In fact, quantitative RT-PCR revealed that cells differentiated on P4VP expressed higher levels of MAP2 and β3T than those differentiated on LN substrates ( Fig. 5 F). Along similar lines, neuronal differentiation of NPCs on P4VP resulted in a higher number of MAP2 and β3T positive neurons than NPCs differentiated on LN substrates ( Fig. 5 G). Collectively, these data indicate that, in addition to supporting the long-term expansion of hNPCs, P4VP can also serve as an effective neuronal differentiation matrix. Mechanism of hNPC attachment and growth on P4VP substrates To elucidate the mechanism of hNPC attachment and growth on P4VP substrates, we performed time-lapse imaging of initial hNPC attachment on P4VP and LN-coated substrates ( Fig. 6 A ). Quantitative analysis of the dynamics of cell length ( Fig. 6 B) and cell area ( Fig. 6 C) revealed that initial hNPC attachment and spreading significantly differed on P4VP and LN substrates. While hNPCs immediately adhered and spread on LN substrates, cells on P4VP remained rounded for several hours prior to achieving a similar level of adhesion and spreading as hNPCs cultured on LN substrates. To determine if these differences in cell spreading rates could be explained by differences in endogenous ECMP expression, we analyzed the kinetics of the expression levels of genes encoding several ECMPs that have previously been shown to influence hNPC fate ( Li et al., 2012; Ma et al., 2008; Gil et al., 2009; Li et al., 2014 ) including collagen I ( COL1 ), collagen IV ( COL4 ), fibronectin ( FN ), laminin ( LN ), and vitronectin ( VTN ) ( Fig. 
6 D). The expression levels of endogenous ECMPs of hNPCs cultured on LN substrates peaked upon initial adhesion (1 h) but quickly decreased upon subsequent culture. On the other hand, hNPCs cultured on P4VP showed prolonged expression of endogenous ECMPs with expression decreasing only after 12 h of culture. This time point coincided with the time point at which cells cultured on P4VP showed a similar level of spreading as hNPCs cultured on LN substrates. We also measured the expression levels of the integrin cell surface proteins known to mediate cell adhesion to these ECMPs ( Humphries et al., 2006 ) ( Fig. 6 E). Similar to the dynamics of the endogenous ECMP expression, the expression levels of α-integrins 1–3, 5 and ν, and β integrins 1, 3, and 5 of hNPCs cultured on LN peaked upon initial adhesion (1 h) but rapidly declined upon subsequent culture. In contrast, peak expression of these integrins in hNPCs cultured on P4VP substrates was delayed and did not significantly decrease until after 12 h in culture. Together, these differences in cell spreading and attachment as well as endogenous ECMP and integrin expression dynamics suggest a possible mechanism by which initial hNPC attachment on P4VP is supported through an integrin-ECMP independent interaction. In turn, this temporary interaction allows the cells to secrete their own ECMPs and assemble a microenvironment to support further spreading and adhesion. Discussion The use of hNPCs for disease modeling, drug screening, or cell therapies requires the development of efficacious and cost-effective defined substrates for their expansion and neuronal differentiation. In this study, we employed an unbiased, high-throughput screening approach to systematically screen a diverse library of chemically defined polymers for the in vitro expansion and neuronal differentiation of hNPCs. 
Although most of the polymers screened were unable to support the long-term expansion of hNPCs, we identified one polymer, P4VP, which was able to support the long-term culture of hNPCs over multiple passages. In fact, hNPCs cultured on these synthetic substrates maintained their characteristic morphology while growing at a similar rate and expressing similar levels of multipotent NPC markers as cells cultured on LN substrates. Additionally, in the presence of neuronal inducing factors, hNPCs efficiently differentiated to cells representative of neurons. The mechanism by which P4VP supports hNPC expansion is not entirely clear. Live cell imaging revealed that hNPCs cultured on P4VP had delayed cell attachment and spreading kinetics. Additionally, we found that hNPCs cultured on P4VP showed increased and prolonged expression of several endogenous ECMPs and the integrins known to mediate cell attachment to these specific ECMPs. Because the media used for hNPC culture was free of exogenous ECMPs, we speculate that P4VP fosters initial hNPC adhesion in an ECMP-independent manner, possibly through electrostatic interactions between the cells and the substrate ( Smetana et al., 1992; Kirby et al., 2003; Lampin et al., 1997; Lofti et al., 2013 ). Subsequently, the cells secrete their own ECMPs, which delays attachment and spreading, but ultimately provides a suitable microenvironment for cell attachment and continued growth. Although P4VP could support the expansion and differentiation of hNPCs, it did not serve as a robust substrate for the derivation of hNPCs. Specifically, unlike EBs plated onto Matrigel-coated plates which spread out and formed neuroepithelial-like rosettes, EBs plated directly onto P4VP- or LN-coated substrates failed to form rosette-like structures. 
Along similar lines, while dissected rosettes generated on Matrigel-coated plates led to the formation of hNPCs upon replating on LN substrates, rosettes plated on P4VP-coated surfaces did not result in the generation of cells representative of hNPCs. However, dissociated rosettes that were initially replated onto LN substrates and then subsequently transferred to P4VP substrates led to the generation of cells representative of hNPCs. This suggests that hNPCs may require a brief transition period in which dissected rosettes need to be initially cultured on LN substrates after which cells can be transitioned to P4VP substrates. In the future, polymer array screens can be performed to identify such synthetic substrates that directly support the derivation of hNPCs. Interestingly, we found that neuronal differentiation of hNPCs was significantly enhanced on P4VP compared to cells differentiated on LN-coated culture surfaces. During in vivo neural development, the ECM undergoes dynamic changes in response to soluble signaling molecules to regulate cell proliferation, migration, and differentiation ( Pavlov et al., 2004; Barros et al., 2010; Wade et al., 2014 ). Along similar lines, several in vitro studies have shown that ESC-derived NPCs have specific substrate requirements to promote self-renewal versus those needed to instruct neuronal specification ( Ma et al., 2008; Goetz et al., 2006 ). As such, ECMPs, such as LN, have distinct and temporally specific effects during ESC-derived NPC expansion and neuronal differentiation ( Ma et al., 2008; Goetz et al., 2006 ). Because cell–substrate interactions change as stem cells shift from a state of self-renewal to one of differentiation, we speculate that P4VP provides a more permissive environment in which hNPCs can respond to the exogenous soluble signaling microenvironment and synthesize the necessary ECMP environment needed for self-renewal or neuronal differentiation. 
Arrays of polymers have been used by several groups as a high-throughput means to develop polymer-based matrices that support the self-renewal, proliferation, and differentiation of a variety of stem cell types ( Brafman et al., 2010; Anderson et al., 2004; Hansen et al., 2014 ). Moreover, such technologies have been successfully used to identify specific biomaterial properties that precisely direct stem cell fate ( Mei et al., 2010 ). In one such study, it was demonstrated that specific chemical functional groups could be used to direct the differentiation of human mesenchymal stem cells into either osteoblasts or adipocytes ( Benoit et al., 2008 ). The future clinical application of hNPCs for the study and treatment of neurodegenerative disorders will require a similar ability to direct their differentiation into certain fates such as regionally specific neuronal subtypes ( Liu and Zhang, 2011 ). For example, the generation of basal forebrain cholinergic neurons from hNPCs would aid in understanding the mechanisms of Alzheimer's disease ( Duan et al., 2014 ), while hNPC-derived spinal cord motor neurons have the potential to treat spinal cord injuries and motor neuron diseases such as amyotrophic lateral sclerosis ( Qu et al., 2014 ). In the future, unbiased, high-throughput screening approaches, such as the one described in this study, combined with high-throughput surface characterization and regression models ( Algahtani et al., 2014 ), will be useful to engineer synthetic substrates with precise physicochemical properties to precisely direct hNPCs towards these neuronal subtypes. Several groups have reported the culture and differentiation of fetal or adult neural stem cells (NSCs) on polymer-based substrates ( Park et al., 2014; Little et al., 2008; Bhang et al., 2007; Lundin et al., 2011; Saha et al., 2007 ). 
It is important to point out that the NSCs used in these studies, which can be isolated from numerous species and various regions in the fetal and adult nervous system, are biologically and developmentally distinct from the hNPCs used in this study ( Kornblum, 2007; Temple, 2001 ). Specifically, the differentiation potential and, thereby, the therapeutic application of NSCs is much more limited than that of hNPCs, which can differentiate into all the neural lineages (i.e. neurons, astrocytes, and oligodendrocytes) that comprise the central nervous system (CNS) ( Chambers et al., 2009; Elkabetz et al., 2008; Shin et al., 2006 ). Because of these inherent biological differences between NSCs and hNPCs, it is unclear if the substrates that have been previously used for the growth and differentiation of NSCs would have similar efficacy if used with hNPCs. To our knowledge, the synthetic matrix developed in this study is the first demonstration of the use of a defined polymer-based substrate for the culture and differentiation of hNPCs. In order to have a sufficient number of hNPCs for regenerative medicine and drug screening, efficient methods for their large-scale expansion and differentiation need to be developed. In fact, cell doses for stem cell-based neural therapies have been reported in excess of 6 billion cells per patient ( Bretzner et al., 2011; Schwartz et al., 2012; Chen et al., 2013 ). Although in this study theoretical calculations suggested that 1 × 10⁶ hNPCs cultured on P4VP could be expanded up to 1 × 10⁹ cells in 10 passages (~ 50 days), practical expansion and differentiation of hNPCs to these numbers are not feasible with current 2D culture systems. Alternatively, microcarriers (MCs), which provide a high surface-area-to-volume ratio, enable high-density cell expansion and scale-up in stirred bioreactors ( Reuveny, 1990; Sart et al., 2013 ). 
Recently, the use of protein-coated MCs in stirred suspension bioreactors has been reported for the expansion of several hPSC lines and their derivatives ( Sart et al., 2013; Fan et al., 2013; Ting et al., 2014 ). In the future, the culture of hNPCs on polymer-coated MCs in suspension bioreactors may enable their expansion and differentiation to the numbers required for regenerative medicine purposes. Conclusions In this study, we used a high-throughput screening process to identify a synthetic polymer, P4VP, which can support the long-term self-renewal and proliferation of hNPCs at a similar level to cells cultured on purified ECMPs. Moreover, neuronal differentiation of hNPCs was more efficient on P4VP substrates than on these traditional ECMP-based substrates. P4VP is chemically defined and available off-the-shelf, thus overcoming the limitations associated with culture on purified ECMPs. Overall, the polymeric biomaterial developed in this study offers a cost-effective, scalable, and robust platform to support the in vitro expansion and neuronal differentiation of hNPCs to the quantities needed for disease modeling, drug screening, and cell-based therapies. The following are the supplementary data related to this article. Supplemental Fig. 1 Characterization of polymer-coated slides. (A) FTIR spectrum of polyacrylamide [PAAM] slides coated with poly(4-vinyl phenol) [P4VP], poly(azelaic anhydride) [PAzA], poly(styrene-co-allyl alcohol) [PS-co-AA], and poly(styrene-maleic acid) [PSMA]. (B) Summary of peaks from the FTIR spectrum. Supplemental Fig. 2 Derivation of hNPCs on LN and P4VP-coated substrates. (A) Phase contrast images of EBs plated onto Matrigel, LN, and P4VP substrates. Rosettes derived on Matrigel-coated plates were dissociated and replated on LN-coated substrates. The resulting hNPCs were continuously cultured on LN or transitioned to P4VP substrates. 
Both hNPC populations were indistinguishable in terms of (B) morphology, (C) growth rate (mean ± SEM), (D) doubling time (mean ± SEM), and (E) gene expression of hNPC markers SOX2 and NESTIN (mean ± SEM). Populations were compared using Student's t -test. N.S. = not statistically significant. Supplemental Table 1 List of polymers used for microarray synthesis. Supplemental Table 2 Raw data of heat map generated in Fig. 2 A. Supplemental Table 3 Raw data of heat map generated in Fig. 2 C. Supplemental Table S4 TaqMan® gene expression assays and other primers used in this study. Supplemental Table S5 Antibodies used in this study. Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.scr.2015.05.002 . Acknowledgments We thank Mrityunjoy Kar and Shyni Varghese (Department of Bioengineering, University of California-San Diego) for assistance with FTIR-ATR. This research was supported in part by a gift from Michael and Nancy Kaehr, California Institute of Regenerative Medicine (RT2-01889), and funds from the Arizona State University School of Biological and Health Systems Engineering. | [
"ALGAHTANI",
"ANDERSON",
"BAGLEY",
"BANU",
"BARROS",
"BELLOSTA",
"BENOIT",
"BERRIOS",
"BETTS",
"BHANG",
"BOSNJAK",
"BOUHON",
"BRAFMAN",
"BRAFMAN",
"BRAFMAN",
"BRAFMAN",
"BRAFMAN",
"BRAFMAN",
"BRETZNER",
"BURGESS",
"CHAMBERS",
"CHEN",
"CURRAN",
"DALBY",
"DUAN",
"EHRI... |
18dfe5c7c10a4a859422133e2031c212_INDEPENDÊNCIA FUNCIONAL DE ADOLESCENTES COM HEMOFILIA ATENDIDOS NO HOSPITAL DO HEMOPE SEGUNDO A GRAV_10.1016_j.htct.2021.10.785.xml | INDEPENDÊNCIA FUNCIONAL DE ADOLESCENTES COM HEMOFILIA ATENDIDOS NO HOSPITAL DO HEMOPE, SEGUNDO A GRAVIDADE DA DOENÇA | [
"Moraes, LBL",
"Guimaraes, TMR",
"Costa, NCM",
"Botelho, RS",
"Amaral, CLBD",
"Costa, IM"
] | Introduction
Hemophilia is a rare, genetic bleeding disorder with X-linked recessive inheritance. It is characterized by deficient or abnormal coagulant activity of factor VIII (hemophilia A) or factor IX (hemophilia B). According to the level of the deficient factor (<1%, 1% to 5%, or >5% to 40%), it is classified as severe, moderate, or mild, respectively. The clinical presentation is similar in both types and is characterized by intra-articular bleeding (hemarthrosis) and hemorrhage into muscle, other tissues, or cavities. Hemarthroses account for 80% of bleeding episodes and, when repeated in a target joint, cause progressive joint degeneration and physical deformity, known as hemophilic arthropathy, leading to varying degrees of physical disability. The Functional Independence Score for patients with Haemophilia (FISH) is an instrument that measures functional independence and assesses the joint status of people with hemophilia, based on observation of the performance of activities of daily living. It can detect the types of damage caused by recurrent hemarthrosis, reveals the impact of the disease on the patient's quality of life, and can be administered by a trained nurse. The patient is assessed in 3 categories: A. Self-care (eating and grooming, bathing, and dressing); B. Transfers (sitting down and standing up; squatting); and C. Locomotion (walking, climbing and descending stairs, running). Each activity is graded from 1 to 4 according to the amount of assistance required: 1. Cannot perform the activity or needs complete assistance; 2. Needs partial assistance or a modified environment; 3. Performs the activity without assistance, but with slight discomfort; and 4. Performs the activity without any difficulty. The total score ranges from 8 to 32.
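The FISH scoring scheme described above (eight activities of daily living in three categories, each graded 1 to 4, for a total of 8 to 32) can be written as a small scoring routine. This is a hypothetical encoding for illustration, not the published instrument:

```python
# Hypothetical encoding of the FISH scoring scheme described in the text:
# eight activities, each graded 1 (complete assistance) to 4 (no difficulty);
# the total score therefore ranges from 8 to 32.
FISH_ACTIVITIES = [
    "eating_grooming", "bathing", "dressing",  # A. Self-care
    "chair_transfer", "squatting",             # B. Transfers
    "walking", "stairs", "running",            # C. Locomotion
]

def fish_total(grades):
    """grades: dict mapping each FISH activity to an integer grade 1-4."""
    missing = [a for a in FISH_ACTIVITIES if a not in grades]
    if missing:
        raise ValueError(f"missing activities: {missing}")
    if any(not 1 <= g <= 4 for g in grades.values()):
        raise ValueError("each grade must be between 1 and 4")
    return sum(grades[a] for a in FISH_ACTIVITIES)

# A fully independent patient scores the maximum of 32.
print(fish_total({a: 4 for a in FISH_ACTIVITIES}))  # 32
```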
Objective
To assess the degree of functional independence of adolescents with hemophilia treated at the Hemope hospital, according to disease severity.
Methods
A cross-sectional study with an analytical approach. The study was carried out by collecting secondary FISH data on adolescents with hemophilia seen in the nursing consultation of the coagulopathy outpatient clinic during 2019. The study population comprised 92 adolescents aged 12 to 19 years. Data collection took place from January to February 2021. Data analysis used the two-tailed independent-sample t-test to compare the means of continuous variables between the groups analyzed, and p values < 0.05 were considered significant.
Results
1. Sociodemographic and clinical variables: 74 (80%) of the adolescents were seen and had their functional assessment updated by the nurses. All were male, with a mean age of 15.5 ± 2.4 years; the 12-15 and 16-19 year age groups were equally represented, at 50% each. Most had hemophilia A (79.7%), of the moderate (47.3%) or severe (40.5%) type. 2. FISH score: The mean FISH score was 30.4 ± 2.6 (range 17-32), considered high, and more than half of the adolescents were rated fully independent in all activities. According to disease severity, higher scores were observed in mild cases (32 ± 0) vs. severe (29.9 ± 2.5; p = 0.0001) and moderate cases (30.4 ± 2.9; p = 0.004), demonstrating a significant decrease in functional capacity with increasing disease severity, in line with the literature. The most compromised activities were 'squatting' and 'bathing'.
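The group comparison reported above uses a two-tailed independent-sample t-test on mean FISH scores. The statistic behind such a comparison (here in its Welch, unequal-variance form) can be computed directly; the scores below are illustrative, not the study data:

```python
import math

# Welch's two-sample t statistic (unequal variances), of the kind used to
# compare mean FISH scores between severity groups. Illustrative data only.
def welch_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)              # SE of the mean difference
    return (ma - mb) / se

mild   = [32, 31, 32, 32, 31, 32]  # hypothetical FISH scores, mild group
severe = [29, 30, 28, 31, 29, 30]  # hypothetical FISH scores, severe group
print(round(welch_t(mild, severe), 2))  # positive t: mild group scores higher
```

In practice a library routine such as `scipy.stats.ttest_ind` would also return the p-value; the sketch above only shows the arithmetic of the statistic.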
Final considerations
The adolescents showed a high FISH score compared with international studies, and in particular a higher score in mild cases, demonstrating a significant decrease in functional capacity with increasing hemophilia severity. | null | []
35aaced1f95c4abeaf1a6877c2ee5590_Establishment of nasal breathing should be the ultimate goal to secure adequate craniofacial and air_10.1016_j.jpedp.2017.09.017.xml | Establishment of nasal breathing should be the ultimate goal to secure adequate craniofacial and airway development in children | [
"Torre, Carlos",
"Guilleminault, Christian"
] | null | As shown many years ago and studied again by Chambi-Rocha et al. in this issue of the journal, chronic mouth breathing during a child's active craniofacial development can result in anatomical changes that directly affect the airway. 1 These changes can result in greater airway instability and collapse, possibly leading to other problems, such as sleep-disordered breathing. 2 Previous investigations in children with mouth breathing have shown a correlation with abnormal orofacial growth. 3 There is also a continuous interaction between nasal breathing and adequate sucking, swallowing, and chewing that enhances orofacial growth. 4 This is especially important in children, whose nasomaxillary complex grows continuously from infancy through the prepubertal period until the end of puberty. In fact, maximal orofacial growth occurs during the first two years of life, and by approximately six years of age about 60% of the adult face has already developed. Therefore, establishing adequate nasal breathing early in life is essential to maximize growth of the skeletal complex and the upper airway. 5 6,7 The continuous interaction between the nasomaxillary complex and the mandible during nasal breathing is also important for directing the growth of the entire facial-skeletal complex in a forward, horizontal direction. This interaction reduces the angulation of the occlusal plane, which shortens airway length, creates intraoral space to accommodate the tongue, leads to a shorter soft palate, and possibly improves the function of the airway dilator muscles, helping to keep the airway open. It is therefore reasonable to assume that the ultimate goal for maximizing the potential for adequate craniofacial and airway development should be the establishment of continuous nasal breathing. 
This, together with other oral functions such as sucking, swallowing, and chewing, is an essential function that continuously stimulates the intermaxillary cartilage from birth until 13-15 years of age. While active, this synchondrosis allows facial growth through a mechanism of osteochondral ossification. 8,9 7 Fitzpatrick et al. were the first to describe how continuous mouth breathing leads to a significant increase in upper airway resistance. Then, in the 1980s, several groundbreaking experiments shed light on these observations, when a group of newborn Rhesus monkeys had their nasal passages blocked during the first six months of life. 3 At the end of this period, the monkeys were found to have narrowing of the dental arches, reduced maxillary arch length, anterior crossbite, 10 maxillary overjet, and increased anterior facial height. Electromyographic recordings of different orofacial and neck muscles also revealed an abrupt induction of rhythmic discharge patterns, very different from the nearly continuous low-amplitude, desynchronized discharges seen in most subjects at rest. Interestingly, at the end of the six-month period, the Rhesus monkeys were able to breathe normally through their noses, which led to normalization of muscle discharge and restoration of adequate orofacial growth. 11,12 In humans, restoring normal nasal breathing remains a challenge, even after correction of the anatomical problems that contribute to nasal resistance. There are several theories to explain why it is so challenging to retrain a person to breathe through the nose after years of mouth breathing. When there is "nasal disuse," there is a loss of proprioception and a functional "deafferentation" that prevents a return to normal nasal airflow even after correction of the anatomical factors contributing to nasal obstruction. 
Chronic mouth breathing also results in "hypoventilation" of the nose, which may lead to accumulation of inflammatory cells in the nasal mucosa, causing nasal resistance. 13 Finally, the same anatomical changes in orofacial growth that result from chronic mouth breathing, in particular narrowing of the dental arches, can limit intranasal space and can also cause deviation of the nasal septum secondary to its compression by the high arched palate in a cephalocaudal orientation. 14 15 Comparing humans with Rhesus monkeys may help us understand why there is little room for error in our species and why it is important to address abnormal nasal breathing and poor oral functions early in life to maximize the growth potential of the craniofacial skeleton. The development of speech in Homo sapiens , as well as the shift to bipedalism, led to elongation of the airway with the development of a poorly supported 2-4 mm oropharynx, which lacks an epiglottic lock against the palate. In addition, to facilitate speech, there was an anterior migration of the foramen magnum and a regression of the maxillomandibular complex to establish the 1:1 ratio of the supralaryngeal vocal tract necessary for adequate speech production. 16 The regression of the skeletal complex came at the expense of the teeth. Compared with humans, who have 32 teeth, other primate species, such as the chimpanzee, may have up to 44. The result of this compromised skeletal complex was shortening of the tongue, which caused it to become an obstructive element by becoming part of the upper airway. 17 In monkeys and most other species, the tongue is confined to the oral cavity and does not block the airway. 
18 Nasal breathing during sleep is essential to stimulate adequate ventilation, activate the reflexes that help maintain the tone of the muscles that stabilize the upper airway, and avoid the airway instability that results from mouth breathing. Addressing mouth breathing during sleep is essential considering that, at birth, a child sleeps almost 80% of the time, and even at six years of age will continue to have a prolonged sleep period, spending up to 25% of the time asleep. Studies monitoring nasal and oral breathing during sleep have shown that normal individuals breathe through the nose 96% of the time. 19 This observation has been confirmed by other studies showing that normal children between four and six years of age breathe through the mouth between 0 and 10% of the time during sleep, with an average of 4%. 20 Considering all of this, it is therefore essential to address any problems, such as chronic mouth breathing, that contribute to poor skeletal and airway development in the child. Under these circumstances, patients may not have enough space to accommodate the tongue or other structures, such as the palatine and lingual tonsils, which can become obstructive elements during sleep. The limited airway space resulting from poor skeletal development may also prevent patients from maintaining adequate airway patency as they progress through the natural stages of sleep and their muscles relax. The combination of all of this may ultimately result in airflow limitation during sleep, leading to frequent arousals and drops in blood oxygen saturation levels, which defines what we know as obstructive sleep apnea. Conflicts of interest The authors declare no conflicts of interest. | [
"CHAMBIROCHA",
"MCNAMARA",
"FITZPATRICK",
"LINDERARONSON",
"VARGERVIK",
"GUILLEMINAULT",
"SOUKI",
"LIU",
"CAMACHO",
"HARVOLD",
"VARGERVIK",
"MILLER",
"LEE",
"GELARDI",
"AKBAY",
"DEDHIA",
"MENDES",
"BEHLFELT",
"MICHELS",
"FITZPATRICK"
] |
e3ba714d7a00497fb957624a8e4b3678_Corrosion behavior of a high-strength steel E690 in aqueous electrolytes with different chloride con_10.1016_j.jmrt.2022.11.146.xml | Corrosion behavior of a high-strength steel E690 in aqueous electrolytes with different chloride concentrations | [
"Li, Zhaoliang",
"Song, Jialiang",
"Chen, Junhang",
"Yu, Qiang",
"Xiao, Kui"
] | In this paper, the corrosion behavior of E690 steel at different chloride concentrations was studied. Combining corrosion weight loss, corrosion morphology, corrosion products, electrochemical measurements, and the transformation process of the corrosion products, the influence of different Cl− concentrations on the corrosion characteristics of E690 steel was examined. The results showed that the corrosion rate of E690 steel increased with increasing Cl− concentration, reached its maximum at a Cl− concentration of 3.5%, and then decreased. The corrosion morphology and electrochemical properties of E690 steel were consistent with the corrosion rate. At the initial stage of corrosion, the Cl− concentration affected the corrosion resistance of E690 steel by affecting the anodic and cathodic reaction processes. When the Cl− concentration was below 3.5%, Cl− had an anodic depolarization effect on E690 steel and accelerated anodic dissolution. When the Cl− concentration reached 3.5%, the diffusion of oxygen was hindered, and the Fe(OH)2 film also reduced the dissolved oxygen in the film environment on the metal surface, leading to a weakening of corrosion. In addition, Cl− affected the corrosion resistance of E690 steel by affecting the transformation of the corrosion products. When the Cl− concentration in solution reached 3.5%, β-FeOOH appeared in the corrosion products of E690 steel. Compared with γ-FeOOH, β-FeOOH is more easily reduced and can be rapidly converted to Fe3O4, thus accelerating the corrosion reaction. | 1 Introduction Marine corrosion and protection of steel materials in high-humidity, high-salt-fog environments have received increasing attention [ 1 , 2 ]. As a new type of high-strength low-alloy steel, E690 is of great research value for studying corrosion behavior in the marine atmospheric environment. To study the mechanism of marine atmospheric corrosion of carbon steel, Yang et al.
[ 3 ] discussed the corrosion behavior of carbon steel under 3.5% NaCl thin liquid films of different thicknesses. It was found that the corrosion process under a thin liquid film was mainly controlled by concentration polarization and activation polarization: the corrosion rate of the metal was controlled by concentration polarization under thin liquid films, and by the activation polarization reaction under thick liquid films. There has also been research on the corrosion behavior of weathering steels in simulated atmospheric environments. Xue et al. [ 4 ] studied the corrosion behavior of low carbon steel in 0.1 M NaHCO 3 + 0.1 M NaCl solution under different dissolved oxygen concentrations. The results showed that the corrosion loss of the steel increased after adding chloride ions. In the initial stage, the dissolution of the oxide film by chloride ions kept the corrosion potential at a low value, and the carbon steel tended to dissolve actively. With the accumulation of corrosion products, the corrosion potential of the steel rose. Due to the degradation caused by chloride ions, the rust layer was loose and porous, which reduced the resistance to oxygen reduction. Huang et al. [ 5 ] studied the corrosion behavior of low carbon steel containing NiCu and found that the NiCu steel formed a thick and uniform inner rust layer after immersion in NaCl solution for 7 days, with Cr-rich areas observed in the inner rust layer. Yang et al. [ 6 ] studied the corrosion behavior of the weathering steels Q400NQR1 and 09CuPCrNi in 2.0% NaCl neutral solution. It was found that Q400NQR1 generated a strongly protective inner rust layer in the middle stage of corrosion, whereas the inner rust layer of 09CuPCrNi was difficult to stabilize and the steel exhibited uniform corrosion. For E690 steel, Zhang et al.
[ 7 ] studied the corrosion behavior of E690 offshore platform steel in a simulated marine atmospheric environment and found that the rust layer of the experimental steel was composed of four substances: Fe 3 O 4 , α-FeOOH, β-FeOOH and γ-FeOOH. The chemical composition and the density of the rust layer played a decisive role in the later corrosion behavior of the material. Wu et al. [ 8 ] studied the corrosion performance of E690 bainitic steel in a 0.5% NaCl environment. The results showed that smaller grain sizes corroded more readily, and faster film formation was beneficial to the formation of the rust layer. Cu precipitation accelerated the initial corrosion of the steel, and this effect became smaller after a dense rust layer formed at the later corrosion stage. Xing et al. [ 9 ] studied the oxygen concentration cell corrosion behavior of E690 steel in 3.5% NaCl solution and found that the anode-to-cathode area ratio and the dissolved oxygen in the cathode region were the main factors affecting oxygen concentration cell corrosion. The influence of the rust layer on oxygen concentration cell corrosion depended on the environment: under oxygen-deficient conditions it protected the metal beneath the rust layer, while under oxygen-rich conditions the rust layer participated in the cathodic reaction and accelerated metal corrosion. Tian et al. [ 10 ] studied the electrochemical corrosion and stress corrosion cracking (SCC) behavior of E690 steel in artificial seawater. The SCC process at low thiosulfate concentration was mainly caused by hydrogen embrittlement resulting from thiosulfate-promoted hydrogen permeation. Lu et al. [ 11 ] studied the corrosion evolution behavior of E690 under potentiostatic anodic polarization.
It was found that a positive shift of the anode potential accelerated two dissolution stages: electrochemical oxidation from Fe to Fe 2+ and electrochemical oxidation from Fe 2+ to Fe 3+ . The shape and structure of the rust layer changed the composition of the corrosion products. Hao et al. [ 12 ] studied the stress corrosion of E690 at different Cl − concentrations in a simulated alternating wet-dry marine environment. It was found that with increasing Cl − concentration, the SCC sensitivity of E690 high-strength steel first increased, then decreased, and then increased again; at a NaCl concentration of 3.5%, the SCC sensitivity of E690 high-strength steel was highest. Yang et al. [ 13 ] studied the corrosion behavior of E690 steel in a simulated industrial atmospheric environment and attributed the improvement in the corrosion resistance of the steel to the structure of the rust layer and the distribution of alloying elements. The rust layer of E690 was divided into inner and outer layers; Cr and Cu were enriched in the inner rust layer, which improved the corrosion resistance of the steel. However, a comprehensive consideration of the electrochemical characteristics of corrosion together with product transformation was lacking. In this paper, NaCl solution was used to simulate the marine atmospheric environment, and a marine atmospheric acceleration model was used to simulate the corrosion behavior of E690 in the outdoor atmosphere. The corrosion behavior of E690 steel at different Cl − concentrations was studied from two aspects: the controlling steps of the corrosion reaction and the transformation rules of the corrosion products. The corrosion characteristics of E690 at different corrosion stages under different Cl − concentrations were investigated. 2 Experimental methods The material was E690 high-strength steel, and its chemical composition is shown in Table 1 .
A metallographic microscope was used to observe the microstructure of the E690. As shown in Fig. 1 , E690 is bainite with a uniform structure. The weekly immersion samples were polished successively with 150#, 240#, 400# and 800# sandpaper, and the initial weight was measured with a balance with an accuracy of 0.1 mg. The samples were washed with deionized water, dehydrated with ethanol solution, and then stored in a dryer. In this experiment, the weekly immersion test was used as an accelerated means of simulating corrosion behavior in the marine atmosphere. The cycle of the weekly immersion test was 60 min, and the test durations were 24, 48, 96, 192, 360 and 720 h. Cl − concentrations of 1, 2, 3.5, 5 and 7 wt% were set to simulate corrosion behavior under different marine atmospheric environments. The open-circuit potential, potentiodynamic polarization curves and electrochemical impedance spectra were measured with an Autolab PGSTAT302N electrochemical tester manufactured by Metrohm. The electrochemical samples were welded and encapsulated with epoxy resin, with an exposed area of 1 cm 2 . A three-electrode system was used, with the electrochemical sample as the working electrode, a saturated calomel electrode (SCE) as the reference electrode and a platinum electrode as the auxiliary electrode. The temperature was 25 °C, and the electrolyte was the same simulated marine atmosphere solution as in the weekly immersion test. The scanning speed of the polarization curve was 0.5 mV/s. The frequency range of the electrochemical impedance measurement was 10^5 Hz–10 mHz, and the excitation signal was a 5 mV amplitude sine wave. The surface morphology of the specimens was observed with a metallographic (MET) microscope and an FEI Quanta250 scanning electron microscope (SEM). The phase structure of the corrosion products was analyzed with a Dmax-RC rotating-anode X-ray diffractometer (XRD).
A Cu target (K α1 ) was selected as the radiation source, the tube voltage was 40 kV, the current was 150 mA, the scanning range was 10–100°, the step width was 0.02°, and the scanning rate was 10°/min. 3 Results 3.1 Weight loss The corrosion weight loss of E690 at different Cl − concentrations was fitted and analyzed according to Formula (1) , and the fitting results are shown in Table 2 . W is the corrosion weight loss of the metal per unit area, g/cm 2 ; t is the exposure time of the metal; A is a material constant representing the magnitude of the initial corrosion rate, which is related to the environment; and n is a material constant representing the development trend of corrosion, which varies with the environment. (1) W = A·t^n The fitted weight-loss curves of E690 after the weekly immersion test at different Cl − concentrations are shown in Fig. 2 , and the fitting parameters in Table 2 , where R represents the fitting correlation coefficient. The fitting calculation showed that the corrosion weight loss of E690 steel exhibited an accelerating growth trend, indicating that no stable, dense rust layer formed during the corrosion of E690 and that the rust layer did not provide significant protection for the matrix [ 2 , 14 ]. This was related to the alloying element content of E690 steel and the Cl − corrosion environment. The Cr, Ni and Cu in E690 steel played an important role in the transformation of the corrosion products. Over a long corrosion period, they were beneficial to stabilizing the transformation to the corrosion product α-FeOOH, but the degradation caused by Cl − destroyed the structure of the rust layer, affected the transformation process of the corrosion products, and led to the formation of the amorphous corrosion product β-FeOOH, resulting in a loose and porous rust layer and a gradually increasing corrosion rate. The influence of Cl − concentration on the corrosion rate differed with concentration.
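The power-law fit of Formula (1) can be reproduced by ordinary least squares in log–log coordinates, since log W = log A + n·log t. A minimal sketch in Python (the weight-loss numbers below are illustrative placeholders, not the paper's measured data):

```python
import math

def fit_power_law(t_hours, w_loss):
    """Fit W = A * t**n by least squares on log W = log A + n * log t."""
    xs = [math.log(t) for t in t_hours]
    ys = [math.log(w) for w in w_loss]
    x_bar = sum(xs) / len(xs)
    y_bar = sum(ys) / len(ys)
    # Slope of the log-log regression is the exponent n
    n = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
    # Intercept gives log A
    A = math.exp(y_bar - n * x_bar)
    return A, n

# Illustrative data: exposure times (h) and weight loss per area (g/cm^2)
t = [24, 48, 96, 192, 360, 720]
w = [0.0008, 0.0018, 0.0040, 0.0090, 0.0185, 0.0420]

A, n = fit_power_law(t, w)
print(f"A = {A:.2e}, n = {n:.2f}")  # n > 1 indicates accelerating corrosion
```

An exponent n above 1 corresponds to the accelerating weight-loss trend the paper reports, i.e. a rust layer that does not protect the substrate.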
With increasing Cl − concentration, the corrosion rate of E690 steel increased, reached its maximum in 3.5% NaCl, and then decreased in 5 and 7% NaCl. This trend of the corrosion rate first increasing and then decreasing was due to the rapid formation of numerous closed micro-cells on the surface of the material. The increase of Cl − concentration enhanced the conductivity of the liquid film, and corrosion occurred in the center of the droplet to form corrosion products. However, a further increase of Cl − concentration reduced the dissolved oxygen in the thin liquid film on the metal surface, especially the dissolved oxygen in the solution, so corrosion was weakened at high concentrations. 3.2 Corrosion products The XRD patterns of the E690 corrosion products after weekly immersion at different Cl − concentrations for 360 h are shown in Fig. 3 . They show that the corrosion products were composed of Fe 3 O 4 , α-FeOOH, γ-FeOOH and β-FeOOH. When the Cl − concentration increased to 3.5%, a small amount of β-FeOOH appeared. Small amounts of Cr and Ni are added to E690 steel. The increase in Cr content was beneficial to refining α-FeOOH and inhibited the effect of corrosive anions, especially the ingress of Cl − [ 15–17 ]. As a relatively stable element, the addition of Ni shifted the corrosion potential of the steel in a positive direction, increased the stability of the steel, effectively inhibited the ingress of Cl − , and promoted the formation of a protective rust layer [ 18 ]. In solutions with more Cl − , the inhibiting effect of the corrosion-resistant alloying elements on Cl − was weakened, and Cl − caused the formation of β-FeOOH. With the increase of Cl − concentration, the corrosion rate of the material increased. Fig. 4 shows the XRD patterns of the corrosion products after 96 and 360 h of weekly immersion in 3.5% NaCl solution.
After the 96 h weekly immersion test, the corrosion products on the surface of the material were composed of Fe 3 O 4 , α-FeOOH, γ-FeOOH and β-FeOOH. With longer test times, the composition of the corrosion products did not change, but the contents of γ-FeOOH and β-FeOOH decreased significantly (β-FeOOH more markedly), and the content of Fe 3 O 4 increased. At the metal/rust-layer interface, γ-FeOOH and β-FeOOH were reduced to Fe 3 O 4 , which accelerated the corrosion reaction. Moreover, β-FeOOH had higher reactivity and a higher reduction reaction priority than γ-FeOOH [ 19 ]. Although the initially generated unstable β-FeOOH and metastable γ-FeOOH could be transformed into α-FeOOH with a relatively stable structure, thus reducing the atmospheric corrosion rate of the steel [ 20 , 21 ] and providing a certain protective effect for the matrix [ 22 ], part of the β-FeOOH was instead converted to Fe 3 O 4 . The presence of Fe 3 O 4 led to an uneven distribution and pores, which could not effectively prevent the entry of corrosive media, resulting in an increasing corrosion rate of the steel in the later period. 3.3 Corrosion morphology Fig. 5 shows the microscope morphology of the corrosion products after 360 h of testing at different Cl − concentrations. The corrosion products were black and yellow-brown at all concentrations. Combined with the XRD results, the main components were the same, but the product states differed with Cl − concentration. At 1% NaCl, the corrosion products were yellow-brown, loose and porous, which was related to the abundance of Fe 3 O 4 . At 7% NaCl, yellow-brown and black products alternated, and the products were granular, indicating the transformation stage of the corrosion products β-FeOOH and γ-FeOOH to α-FeOOH and Fe 3 O 4 [ 23–26 ].
However, the corrosion products at 3.5% NaCl showed the initial transition states of β-FeOOH and γ-FeOOH, and the product boundaries were clearly defined. Fig. 6 shows the SEM corrosion morphology after immersion at different Cl − concentrations for different times. There were differences in the relative content of cluster-like and lamellar corrosion products, which was related to the transformation morphology of the corrosion products at different Cl − concentrations. With increasing immersion time, lamellar corrosion products changed to cluster-like corrosion products, which was closely related to β-FeOOH. At the later stage of the immersion test, the cluster products were partially connected to form a tight, network-like structure with small pores. In the 3.5% NaCl environment, the rust-layer products were looser, and the rust layer offered poor protection. Although the network structure formed by α-FeOOH reduced the corrosion of the substrate by the medium, Fe 3 O 4 led to pores and holes in the rust layer, so the corrosion process could not be effectively prevented [ 27 , 28 ]. 3.4 Electrochemical behavior The polarization curves of E690 in NaCl solutions with different Cl − concentrations are shown in Fig. 7 . The polarization curves were fitted, and the fitting results are shown in Table 3 . E690 steel exhibited similar electrochemical corrosion characteristics in the different Cl − solutions: the cathode exhibited an oxygen reduction reaction controlled by the diffusion of dissolved oxygen, while the anode exhibited a dissolution reaction controlled by the charge-transfer step. The corrosion potential and current density results were consistent with the corrosion weight-loss results. With increasing Cl − concentration, the corrosion potential decreased and then increased, and the corrosion current density increased and then decreased.
This indicated that the presence of Cl − accelerated the anodic dissolution process. However, with a continued increase of Cl − concentration, the cathodic reaction slowed down, so the corrosion tendency of E690 decreased. Fig. 8 shows the electrochemical impedance diagrams of E690 at different Cl − concentrations. The Nyquist diagrams of E690 at different Cl − concentrations each showed a single large, depressed capacitive arc. The capacitive arc in the impedance diagram reflects the resistance of the electrochemical reaction at the metal interface and the capacitance of the electrical double layer at the metal interface. The radius of the impedance arc decreased and then increased, consistent with the corrosion current density trend of the polarization curves. Through fitting analysis of the impedance diagrams, the equivalent circuit R e (Q 1 R r ) shown in Fig. 9 was used for simulation, and good matching results were obtained, where R e represents the solution resistance, Q 1 the capacitance of the electrochemical reaction, and R r the charge transfer resistance. The equivalent-circuit fitting results are shown in Table 4 . The polarization impedance of the materials can be characterized by R r . E690 steel showed the poorest corrosion resistance at 3.5% NaCl. With the increase of Cl − concentration during the corrosion process, the dissolved oxygen in the thin liquid film on the metal surface was reduced, and corrosion was weakened. 4 Discussion The corrosion rate of E690 steel in NaCl solution changed as the corrosion period lengthened, which was related to the effect of Cl − on the surface corrosion products. Based on the corrosion characteristics of E690, the reaction mechanism is shown in Fig. 10 .
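The impedance of the equivalent circuit R e (Q 1 R r ) used in the fitting above can be evaluated directly if Q 1 is treated as a constant-phase element with impedance Z_CPE = 1/(Q·(jω)^n), so that Z(ω) = R e + R r ·Z_CPE/(R r + Z_CPE). A minimal sketch (the parameter values are illustrative, not the fitted values from Table 4):

```python
import cmath
import math

def z_circuit(freq_hz, r_e, r_r, q, n):
    """Impedance of Re(Q1 Rr): solution resistance Re in series with a
    constant-phase element Q1 in parallel with charge-transfer resistance Rr."""
    jw = 1j * 2 * math.pi * freq_hz
    z_cpe = 1 / (q * jw ** n)              # constant-phase element impedance
    return r_e + (z_cpe * r_r) / (z_cpe + r_r)

# Illustrative parameters: Re = 5 ohm, Rr = 800 ohm, Q = 1e-3 S*s^n, n = 0.85
for f in (1e5, 1e2, 1e-2):
    z = z_circuit(f, 5.0, 800.0, 1e-3, 0.85)
    print(f"{f:8.0e} Hz: |Z| = {abs(z):8.1f} ohm, "
          f"phase = {math.degrees(cmath.phase(z)):6.1f} deg")
```

At high frequency the CPE shorts out R r and Z approaches R e ; at low frequency Z approaches R e + R r , which is why a larger fitted R r (larger arc radius) indicates better corrosion resistance.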
At the initial stage of corrosion, E690 steel dissolved as the anode and the anodic reaction occurred: (2) Fe = Fe2+ + 2e− At the initial stage of the cathodic reaction, oxygen easily reached the surface and the cathodic reaction occurred: (3) O2 + 2H2O + 4e− = 4OH− As corrosion progressed, Na + and Fe 2+ moved to the cathode region, Cl − and OH − moved to the anode region, and Fe(OH) 2 formed in the middle region. This process is shown in Fig. 10 (a). Although Cl − did not participate in the reaction [ 29 ], it led to local damage of the protective oxide and hydroxide film on the steel surface [ 30 ]. During this process, the Fe(OH) 2 film became unstable and was oxidized to FeOOH: (4) Fe(OH)2 + OH− = γ-FeOOH + H2O + e− In this process, part of the Fe(OH) 2 was oxidized to Fe 3 O 4 : (5) 3Fe(OH)2 + 1/2O2 = Fe3O4 + 3H2O During the corrosion process, Cl − caused the partial dissolution of Fe(OH) 2 , which promoted the continuous development of corrosion products, thus accelerating the corrosion of the matrix. However, with the increase of Cl − concentration, the diffusion of oxygen was hindered, thereby inhibiting the depolarization process; more Cl − in the corrosive medium reduced the rate of the cathodic reduction reaction [ 31 ]. Before the NaCl concentration reached 3.5%, Cl − had an anodic depolarization effect on the corrosion of the carbon steel, accelerating anodic dissolution and hence the corrosion process. When the NaCl concentration reached 3.5%, further increases of Cl − concentration hindered oxygen diffusion and reduced the cathodic reaction rate. In addition, the presence of the Fe(OH) 2 film reduced the dissolved oxygen in the film environment on the metal surface, leading to a weakening of corrosion. When the concentration of Cl − in solution was greater than 3.5%, β-FeOOH appeared in the corrosion products of E690 steel.
The formation of β-FeOOH was related to the presence and concentration of Cl − : (6) Fe(OH)2 + OH− + Cl− = β-FeOOH + H2O + Cl− + e− β-FeOOH is easily reduced, which promoted the corrosion process [ 19 ]. This process is shown in Fig. 10 (b). As corrosion progressed, insoluble solid corrosion products gradually formed and accumulated on the surface of the matrix, forming a protective rust layer. The rust layer generated on the surface of low-alloy steel not only provides physical protection but can also affect and participate in the cathodic reduction [ 32 ]: (7) 3γ-FeOOH + e− = Fe3O4 + H2O + OH− Fe 2+ in the rust layer also reacted with γ-FeOOH to form Fe 3 O 4 : (8) Fe2+ + 2γ-FeOOH + 2OH− = Fe3O4 + 2H2O At the interface between the metal and the rust layer, γ-FeOOH and β-FeOOH were reduced to Fe 3 O 4 , which accelerated the corrosion reaction. This process is shown in Fig. 10 (c). When γ-FeOOH and β-FeOOH coexisted in the same corrosion system in the rust layer, β-FeOOH had higher reactivity and a higher reduction reaction priority than γ-FeOOH. β-FeOOH and γ-FeOOH could also be further converted to α-FeOOH [ 33 , 34 ]. 5 Conclusions In this paper, the corrosion behavior of E690 steel at different Cl − concentrations was studied. Combining corrosion weight loss, corrosion morphology, corrosion products, and electrochemical behavior, the transformation process of the corrosion products at different Cl − concentrations was studied, and its influence on the corrosion characteristics of E690 steel was obtained. The following conclusions were drawn: (1) The corrosion morphology and electrochemical properties of E690 steel were consistent with the corrosion rate at the different Cl − concentrations. With the increase of Cl − concentration, the corrosion rate of E690 steel increased and then decreased. The corrosion resistance of E690 steel was lowest in 3.5% NaCl.
(2) At the initial stage of corrosion, the Cl − concentration affected the corrosion resistance of E690 steel by affecting the anodic and cathodic reaction processes. Although the corrosion rate of E690 at the different Cl − concentrations was cathodically controlled (oxygen diffusion control), before the NaCl concentration reached 3.5%, Cl − had an anodic depolarization effect on the corrosion of the carbon steel, accelerating anodic dissolution and hence the corrosion process. When the NaCl concentration reached 3.5%, further increases of Cl − concentration hindered oxygen diffusion, inhibited the cathodic reaction generating OH − , and reduced the cathodic reaction rate. In addition, the presence of the Fe(OH) 2 film also reduced the dissolved oxygen in the film environment on the metal surface, leading to a weakening of corrosion. (3) During the transformation of the corrosion products, Cl − affected the corrosion resistance of E690 steel by affecting the transformation pathway. When the concentration of Cl − was greater than 3.5%, β-FeOOH appeared in the corrosion products of E690 steel. Compared with γ-FeOOH, β-FeOOH was more easily reduced and could be rapidly reduced to Fe 3 O 4 , thus accelerating the corrosion reaction. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements This work was supported by the National Key Research and Development Program of China (Grant No. 2017YFB0304602 ) and the State Key Laboratory of Metal Material for Marine Equipment and Application ( SKLMEA-K201908 ). | [
"LIN",
"WEN",
"YANG",
"XUE",
"HUANG",
"YANG",
"ZHANG",
"WU",
"XING",
"TIAN",
"LU",
"HAO",
"YANG",
"WANG",
"SUN",
"KAMIMURA",
"YANG",
"DIAZ",
"NISHIMURA",
"LIU",
"WANG",
"LIU",
"ZHANG",
"EVANS",
"MENZIES",
"MISAWA",
"LI",
"ZHANG",
"AVCI",
"KRUGER",
"SUN",
... |
a6b78b4a53e94e99bfee40749f589e8b_Clinical features and outcomes of patients in different age groups with non-valvular atrial fibrilla_10.1016_j.ijcha.2022.101009.xml | Clinical features and outcomes of patients in different age groups with non-valvular atrial fibrillation receiving oral anticoagulants | [
"O, U Fan",
"Chong, Tou Kun",
"Wei, Yulin",
"Paudel, Bishow",
"Giudici, Michael C.",
"Wong, Chi Wa",
"Lei, Wai Kit",
"Chen, Jian",
"Wu, Wei",
"Liu, Kan"
] | Background
Patients with non-valvular atrial fibrillation (NVAF) need prophylactic antithrombotic therapies to reduce the risk of stroke. We hypothesized that the prognostic benefits of prophylactic antithrombotic therapies outweighed the bleeding risk among very elderly (≥85 years old) patients.
Methods
We analyzed clinical characteristics and outcomes of patients with NVAF in different age groups who had received different prophylactic antithrombotic therapies. We enrolled 3895 consecutive NVAF patients in the Macau Special Administrative Region (Macau SAR) of China from January 1, 2010, to December 31, 2018. Among 3524 patients [including 1252 (35.53%) very elderly patients] who completed the entire study, 2897 (82.21%) patients had a CHA2DS2-VASc score ≥ 2, 2274 (64.53%) had HAS-BLED score < 3, and 1659 (47.08%) had both of the above. The follow-up time was 3.80 (median, interquartile range 1.89–6.56) years. The primary outcome was the first occurrence of ischemic stroke, major bleeding, clinically relevant non-major gastrointestinal bleeding (CRNM-GIB), and all-cause mortality.
Results
A total of 2012 patients (57.09%) received no antithrombotic (NAT), 665 (18.87%) received antiplatelet (AP) agents, 371 (10.53%) received vitamin K antagonist (VKA), and 476 (13.51%) received non-vitamin K antagonist oral anticoagulants (NOACs). Eventually, 610 (17.31%) patients experienced thromboembolic events, with 167 (4.74%) strokes and 483 (13.71%) transient ischemia attack (TIA)/strokes. Bleeding events occurred in 614 (17.42%) patients, with 131 (3.72%) major bleeding, 381 (10.81%) CRNM-GIB and 102 (2.89%) minor bleeding events. All-cause deaths occurred in 483 (13.71%) patients. Compared with patients receiving NAT, patients receiving NOACs and VKA had fewer strokes (hazard ratio [HR]: 0.038; 95 %CI 0.004–0.401;
p = 0.006 and HR: 0.544; 95 %CI 0.307–0.965; p = 0.037, respectively), and lower all-cause mortality (HR: 0.270; 95 %CI 0.170–0.429; p < 0.001 and HR: 0.531; 95 %CI 0.373–0.756; p < 0.001, respectively). Of note, very elderly patients with NVAF receiving NOACs had fewer strokes (adjusted hazard ratio [adjHR]: 0.042; 95 %CI 0.002–1.003; p = 0.050) and lower all-cause mortality (adjHR: 0.308; 95 %CI 0.158–0.601; p = 0.001). Meanwhile, despite higher CRNM-GIB events (adjHR: 1.736; 95 %CI 1.042–2.892; p = 0.034), major bleeding events (adjHR: 1.045; 95 %CI 0.366–2.979; p = 0.935) did not significantly increase. VKA neither reduced strokes (adjHR: 1.015; 95 %CI 0.529–1.948; p = 0.963), nor improved all-cause mortality (adjHR: 0.995; 95 %CI 0.641–1.542; p = 0.981) in very elderly patients with NVAF.
Conclusions
Antithrombotic treatment (VKA and NOACs) reduces stroke and improves prognosis in patients in different age groups with NVAF. The prognostic benefits of NOACs outweigh their bleeding risks in very elderly patients with NVAF. | 1 Introduction The prevalence of atrial fibrillation (AF) increases with age [1] . Although non-valvular atrial fibrillation (NVAF) is associated with increased mortality and morbidity, essentially from stroke and systemic thromboembolism, very elderly (≥85 years old) patients with NVAF often hesitate to take oral anticoagulants (OACs) due to the overriding concern of OACs associated bleeding risk. While very elderly patients represent an essential portion of the population that needs to be studied for clinical anticoagulation decisions, they have been paradoxically underrepresented in available randomized clinical trials [2–5] . Whether the benefits of prophylactic antithrombotic therapies outweigh the risks among very elderly patients remains inconclusive. The world's older population continues to grow at an unprecedented rate, and it becomes more compelling than ever to examine the “real-world” benefits and risks of very elderly NVAF patients receiving prophylactic antithrombotic therapies. From January 1, 2010, to December 31, 2018, we enrolled and treated a total of 3524 NVAF patients in an anticoagulation cardiology specialty clinic in the Macau Special Administrative Region (Macau SAR) of China, including 1252 (more than 35%) patients older than 85 years, to determine their clinical outcomes from different antithrombotic treatments. 2 Methods 2.1 Definitions of clinical endpoint and risk assessment tools [6–9] Clinical outcome : Defined as ischemic stroke (ICD-10: I63.0-I63.9), major bleeding, clinically relevant non-major gastrointestinal bleeding, and all-cause deaths (ICD-10: R96, R98, R99, and I46.1). 
Major bleeding (MB): Defined as fatal bleeding, or symptomatic bleeding in a critical area or organ, such as intracranial (intracerebral hemorrhage, ICD-10: I60.x, I61.x), intraspinal, intraocular resulting in vision changes, retroperitoneal, intraarticular, pericardial, or intramuscular with compartment syndrome [ICD-10: I85.0, I98.3, D62.9, K62.5, K92.2, K25-28 (subcodes 0–2 and 4–6)]. Clinically relevant non-major gastrointestinal bleeding (CRNM-GIB): Defined as overt gastrointestinal bleeding not meeting criteria for MB but requiring medical intervention, hospitalization, temporary interruption or delayed anticoagulation dosing, pain, or impairment of daily activities. Bleeding risk scoring (HAS-BLED score): Classified into three risk levels according to the HAS-BLED score: low risk = HAS-BLED score of 0; intermediate risk = HAS-BLED score of 1 or 2; high risk = HAS-BLED score of 3 or more. Stroke risk scoring (CHA2DS2-VASc score): Classified according to the CHA2DS2-VASc score as follows: CHA2DS2-VASc score of 0 for men or 1 for women: no antithrombotic therapy recommended; CHA2DS2-VASc score of 1 for men or 2 for women: antithrombotic therapy with oral anticoagulation or antiplatelet treatment recommended, but preferably oral anticoagulation; CHA2DS2-VASc score of ≥ 2 for men or ≥ 3 for women: oral anticoagulation recommended. In clinical practice, both the CHADS2 score and the CHA2DS2-VASc score were used for stroke risk assessment. However, to facilitate unified analysis, all patients were analyzed using the CHA2DS2-VASc score as the stroke risk scoring tool in this study. 2.2 Patients and study design This was a retrospective observational study conducted at Kiang Wu Hospital in Macau SAR. We utilized an electronic healthcare information system to gather all medical information of patients who received medical care in either in-patient or out-patient settings.
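The HAS-BLED risk levels and CHA2DS2-VASc treatment recommendations defined above can be expressed as two small helper functions; a minimal sketch (the function names are ours, and the scores are assumed to have been computed already):

```python
def bleeding_risk(has_bled: int) -> str:
    """Map a HAS-BLED score to the three risk levels defined above."""
    if has_bled == 0:
        return "low"
    if has_bled <= 2:
        return "intermediate"
    return "high"

def stroke_recommendation(cha2ds2_vasc: int, male: bool) -> str:
    """Map a CHA2DS2-VASc score to the treatment recommendation above.
    Women's thresholds are one point higher (the female sex-category point)."""
    effective = cha2ds2_vasc if male else cha2ds2_vasc - 1
    if effective <= 0:
        return "no antithrombotic therapy"
    if effective == 1:
        return "OAC or antiplatelet (OAC preferred)"
    return "oral anticoagulation"

print(bleeding_risk(2))                      # intermediate
print(stroke_recommendation(3, male=False))  # oral anticoagulation
```

Subtracting one point for women folds the paired thresholds (0/1, 1/2, ≥2/≥3) into a single comparison.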
All eligible patients were ≥ 18 years old, of Chinese nationality, diagnosed with NVAF (ICD-10: I48.0–I48.9) via either a 12-lead electrocardiogram (ECG) or a 24-hour ECG monitor (Holter). Consecutive patients diagnosed with NVAF admitted between January 1, 2010, and December 31, 2018, were followed through December 31, 2019. All enrolled patients survived more than 6 months after AF diagnosis. Patients with the following were excluded: (1) valvular AF, such as post mechanical valve replacement or moderate to severe rheumatic mitral valve stenosis; (2) AF caused by reversible factors, including acute myocardial infarction, acute myocarditis, pericarditis, pulmonary embolism, electrocution, or binge drinking; (3) any primary coagulation disorders. Of 3895 screened patients, 3524 were eventually enrolled in this study. Patients were categorized into four groups based on age and on stroke prevention strategy, respectively. Eligible patients were classified into four age groups: < 64 years old, 65–74 years old, 75–84 years old, and ≥ 85 years old. Patients aged 75–84 years old were defined as the “elderly” group, and those ≥ 85 years old as the “very elderly” group. Eligible patients were also classified into four antithrombotic strategy groups: no antithrombotic (NAT), antiplatelet (AP), vitamin K antagonist (VKA), and non-vitamin K antagonist oral anticoagulants (NOACs) ( Fig. 1 ). 2.3 Follow-up and data collection We collected patients' demographic data and medical history, including hypertension, coronary artery disease, vascular diseases, diabetes mellitus, heart failure, renal function, previous stroke/transient ischemic attack (TIA), peripheral thromboembolism, and bleeding events. We documented all dosages and durations of anticoagulant medication, comorbidities, laboratory data, ECG, and X-ray reports.
Each patient in this research was tracked via the electronic healthcare information system and followed up at an anticoagulation cardiology specialty clinic until the patient developed a thromboembolism, bleeding, or death event. In addition, all information related to anticoagulant treatment and clinical outcomes was collected. To ensure sufficient time to collect data, each indexed case was followed up for at least half a year or until a death endpoint event occurred. The study protocol was approved by the Scientific Ethics Committee of Kiang Wu Hospital of Macau, SAR (file no. 2017–001). 2.4 Statistical analysis Continuous variables are described as mean values and standard deviation (SD). Discontinuous variables are described using medians and interquartile ranges (IQR: 25th, 75th percentile). Statistical analysis for continuous variables was made using the Student's t-test or analysis of variance (ANOVA). Categorical variables are expressed as percentages. Baseline categorical variables were compared using a Chi-square (χ²) test when appropriate; otherwise, a Fisher exact test was used. In addition, multiple comparisons between different groups were tested for statistical significance using Fisher's least significant difference t-test (LSD t-test) or the Bonferroni method's t-test, based on distribution. The risk of antithrombotic strategy-associated adverse events for patients with AF was assessed using Cox regression analysis. In the very elderly subgroup analysis, the Cox regression model was adjusted for age, CHA2DS2-VASc score, HAS-BLED score, weight, and renal function. A p-value < 0.05 was defined as significant in statistical inference tests. The statistical analyses were performed with Microsoft Excel and IBM SPSS Statistics. 3 Results 3.1 Demographic characteristics and comorbidities Demographic and clinical characteristics are summarized in Table 1 (based on age group) and Table 2 (based on antithrombotic agents' group).
More comorbidities occurred as patients aged. The most prevalent comorbidity was hypertension (2281/3524, 64.73%), followed by diabetes mellitus, heart failure, chronic kidney disease, and coronary artery disease. The median (25th, 75th percentile) comorbidity burden was 2 (1, 3) in this study. There was a noticeably high proportion of NVAF patients with a moderate morbidity burden (3–5 comorbidities) in the very elderly group (49.22%, p = 0.001). A similarly high proportion of NVAF patients with a high morbidity burden (≥6 comorbidities) was observed in the very elderly group (63.16%, p < 0.001). 3.2 Stroke/bleeding risk scores and antithrombotic therapy Overall, 82.21% of the patients with NVAF scored a CHA2DS2-VASc score of 2 or more, 30.47% scored a CHA2DS2-VASc score of 5 or more ( Fig. 2 A), and 35.47% scored a HAS-BLED score of 3 or more ( Fig. 2 B). Proportionally more patients in the very elderly group scored higher in each category: all were at the high stroke risk score and more than half at the high bleeding risk score. In terms of therapy, 24.04% (847/3524) of all enrolled patients with NVAF were prescribed OACs. Across all age groups, the mean proportion of patients on NAT was 56.29% (49.60–60.94%), and 14.78% (185/1252) of very elderly patients were on anticoagulation ( Fig. 2 C). In particular, oral anticoagulation prescription rates were lower among very elderly NVAF patients than in other age groups throughout the study years (p < 0.05). Furthermore, the anticoagulation prescription rate increased progressively from 2010 to 2018, and each age group showed an upward trend by year. Among all enrolled patients with NVAF, the proportion of OAC use increased from 12.31% to 40.24%, while in patients with a CHA2DS2-VASc score ≥ 2 (2897/3524, 82.21%), the proportion of OAC use increased from 12.73% to 43.62%.
3.3 Endpoints 3.3.1 Stroke events A total of 610 (17.31%) patients experienced thromboembolic events. Among them, 483 patients (13.71%) experienced TIA/stroke events, and 167 (4.74%) patients had an ischemic stroke. The majority of the stroke patients were over 85 years old (80.24%, 134/167, p < 0.001), and they also scored significantly higher on the stroke risk scale (4.66 ± 1.44, p < 0.001) ( Table 3 ). 3.3.2 Bleeding events A total of 614 (17.42%) patients experienced bleeding events: 131 patients (3.72%) experienced MB events, 381 patients (10.81%) had CRNM-GIB, and 102 patients (2.89%) had minor bleeding events during the follow-up. Most patients with MB [41.98% (55/131)] and CRNM-GIB [51.71% (197/381)] were over 85 years old ( Table 3 ). 3.3.3 All-cause mortality A total of 483 patients died during the follow-up, for whom the CHA2DS2-VASc score was 4.35 ± 1.71 and the HAS-BLED score was 2.77 ± 1.11. The death rate increased with age (r = 0.25, p < 0.001), with CHA2DS2-VASc score (r = 0.13, p < 0.001), and with HAS-BLED score (r = 0.23, p < 0.001). Very elderly patients with NVAF accounted for 66.87% (323/483) of all-cause deaths. Except for the group under 64 years old, the prognosis of patients in the other age groups taking OACs was better than that of those on NAT or AP (p < 0.05) ( Table 4 ). 3.4 Outcome and survival analysis The Kaplan-Meier curves for all-cause mortality according to antithrombotic therapy in the whole study group are shown in Fig. 3 A. Patients treated with VKA or NOACs had similar cumulative survival rates (p = 0.072), which were higher than those of the other groups (AP or NAT) (χ² = 41.41, p < 0.001). There was no significant difference between NAT and AP in the survival analysis (χ² = 0.70, p = 0.405) ( Table 5 ).
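The cumulative survival comparisons above are based on Kaplan-Meier estimation. As a minimal illustration of the product-limit estimator (a sketch, not the study's actual SPSS analysis):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate S(t) at each observed event time.

    times  : follow-up time for each patient
    events : 1 if the endpoint (e.g. all-cause death) occurred, 0 if censored
    Returns a list of (time, survival_probability) pairs.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)  # number of patients still under observation
    survival, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = n_at_t = 0
        # Group all patients sharing the same follow-up time t.
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_at_t += 1
            i += 1
        if deaths:
            # Product-limit step: S(t) = S(t-) * (1 - d/n).
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= n_at_t  # drop both events and censored cases past t
    return curve
```

With times `[2, 3, 3, 5, 8]` and events `[1, 1, 0, 1, 0]`, survival steps down to 0.8, then 0.6, then 0.3 at the three event times, with the censored patients contributing to the risk sets but not the steps.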
In comparison to NAT, VKA and NOACs resulted in significant reductions in all-cause mortality (HR: 0.531; 95% CI 0.373–0.756; p < 0.001 for VKA and HR: 0.270; 95% CI 0.170–0.429; p < 0.001 for NOACs) and stroke (HR: 0.544; 95% CI 0.307–0.965; p = 0.037 for VKA and HR: 0.038; 95% CI 0.004–0.401; p = 0.006 for NOACs). Furthermore, AP, VKA, and NOACs did not increase MB compared with NAT, while AP and NOACs increased CRNM-GIB (HR: 1.809; 95% CI 1.421–2.304; p < 0.001 for AP and HR: 2.123; 95% CI 1.569–2.872; p < 0.001 for NOACs) ( Fig. 4 ). The correlations between clinical outcomes and antithrombotic therapy in the whole study group are shown in Table 6 . Compared with NAT or AP, NVAF patients treated with VKA or NOACs had a better prognosis, with lower all-cause death (χ² = 58.05, p < 0.001), lower stroke incidence (χ² = 31.50, p < 0.001), lower CRNM-GIB incidence (χ² = 25.71, p < 0.001), and no increase in MB events (χ² = 2.91, p = 0.406). For very elderly patients with NVAF, lower dosages of NOACs were used or a lower INR target for VKA was set (e.g., dabigatran 110 mg twice daily, rivaroxaban 15 mg once daily, edoxaban 30 mg once daily, apixaban 2.5 mg twice daily, warfarin INR target 1.6–2.6) [10–14] . Decisions to prescribe reduced-dose NOACs or a low INR target for VKA were based on specific considerations of age, weight, renal function, and use of specific concomitant medications. The Kaplan-Meier curves for all-cause mortality according to antithrombotic therapy in the very elderly subgroup are shown in Fig. 3 B. The subgroup of very elderly patients with NVAF treated with NOACs had the highest cumulative survival rate (χ² = 9.31, p = 0.002). In contrast, patients treated with NAT, AP, or VKA had similar cumulative survival rates (p > 0.05) ( Table 5 ). In comparison to NAT, only NOACs resulted in a significant reduction in all-cause mortality (adjusted HR: 0.308; 95% CI 0.158–0.601; p < 0.001).
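The hazard ratios and 95% confidence intervals reported here are back-transformed from Cox model coefficients. A generic sketch of that relationship, assuming the usual normal approximation on the log scale (the standard-error value below is illustrative, not from the study):

```python
import math


def hr_with_ci(beta, se, z=1.96):
    """Hazard ratio and approximate 95% CI from a Cox regression
    coefficient (log hazard ratio) and its standard error."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))


# An illustrative beta = log(0.531) with se = 0.18 gives an HR of 0.531
# and a CI of roughly (0.37, 0.76), the same shape as the VKA
# mortality estimate reported in the text.
```

The asymmetry of reported intervals such as 0.373–0.756 around 0.531 reflects this exponentiation of a symmetric interval on the log scale.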
NOACs decreased stroke compared with NAT (adjusted HR: 0.042; 95% CI 0.002–1.003; p = 0.050) without increasing MB. Both AP and NOACs increased CRNM-GIB (adjusted HR: 1.478; 95% CI 1.081–2.020; p = 0.014 for AP and adjusted HR: 1.736; 95% CI 1.042–2.892; p = 0.034 for NOACs) ( Fig. 5 ). The correlation between clinical outcomes and antithrombotic therapy in the very elderly subgroup is shown in Table 6 . VKA neither reduced the risk of stroke in very elderly patients with NVAF nor improved all-cause mortality, likely due to a low TTR (23.20 ± 22.94%) and multiple comorbidities, especially renal insufficiency (most of these patients had stage 5 renal function, for whom VKA may be the only anticoagulation therapy available to consider). 4 Discussion The present study showed that the prognostic benefits of NOACs outweighed their bleeding risks in very elderly patients with NVAF. Compared to NAT and AP, NOACs reduced stroke and improved the prognosis of very elderly patients with NVAF. Among all studied NVAF patients, the most prevalent comorbidity was hypertension, followed by diabetes mellitus, heart failure, chronic kidney disease, and coronary artery disease. Aging is associated with increased comorbidities, more strokes, and higher incidences of bleeding in NVAF patients ( Table 1 , Table 3 ). Although interest in anticoagulants among NVAF patients has grown in recent years, the use of OACs in elderly and very elderly NVAF patients is still far less than expected [15] . Indeed, despite a CHA2DS2-VASc score ≥ 2, more than 40% of very elderly NVAF patients had not received OACs. Clinical risk stratification (with CHA2DS2-VASc and other scoring criteria) itself does not guarantee practical medication utilization or compliance [16] .
In clinical practice, we found that, apart from dementia, poor quality of life, and poor prognosis from other primary diseases, the overriding concern about OAC-associated bleeding risk and uncertainty about OAC-related clinical benefits were the common reasons very elderly patients lacked OACs, or were factors affecting the decision-making on anticoagulation treatment after doctor-patient communication. Therefore, the patients at the highest risk of stroke paradoxically tend not to be treated with anticoagulants. Although many previous randomized trials have demonstrated the efficacy, safety, and benefit of OACs for stroke prevention in NVAF patients, to date very elderly patients have been underrepresented in those studies [17–20] . Also, whether there are different risks and benefits between NOACs and VKA in the very elderly patient population remains largely unclear [21] . As the world's older population grows rapidly, it becomes even more important to determine the optimal anticoagulant choices for these very elderly patients, who have increased stroke and bleeding risks. Our study was conducted in a relatively older NVAF patient population, enrolling more than 35% very elderly patients with NVAF from Macau SAR, an area well-known for longevity. Meanwhile, very elderly patients in Macau SAR face less economic pressure in choosing anticoagulants, since all the medications are fully covered by commercial medical insurance and social security [22] . Therefore, the choice of antithrombotic therapy in our study was mainly based on clinical considerations, which provides an exceptional opportunity to exclude the financial confounding factors present in previous investigations [20,23] . As a result, this community-based study with a sizable population of very elderly patients with NVAF demonstrates that NOACs effectively prevent stroke without significantly increasing the incidence of MB.
Despite higher CRNM-GIB events in patients (compared with NAT), the prognostic benefits of NOACs outweigh their side effects ( Fig. 4 , Fig. 5 ), supporting the results of some preliminary observations and meta-analyses [24–26] . We performed the initial AF management consultation and continuous clinic follow-up (once a month) of all patients in an anticoagulation cardiology specialty clinic, with a team including two non-invasive cardiologists (UO and JC), two nurse practitioners, and one outpatient pharmacist. The first visit included a comprehensive consultation on the willingness of the patient or family members, self-management ability, previous bleeding or embolism events, and abnormal liver and renal function. We usually reached a consensus on a long-term anticoagulation plan for very elderly patients with NVAF. Each patient receiving OACs was followed up at least once a month to monitor potentially dynamic changes in hemoglobin, liver/kidney function, and fecal occult blood, and to adjust the antithrombotic plan as needed. For patients with special needs, such as limited mobility, we arranged both outreach medical services and video call consultation services. Personalized care appears to play an essential role in maintaining medication compliance: more than 90% of patients in our study continued to sustain their antithrombotic plan during the entire follow-up duration. Another personalized care strategy in the present study is the dedicated dosage adjustment of NOACs. Some previous studies [27–30] suggest that inappropriate dose reduction has been associated with a higher risk of embolism in patients with NVAF.
Therefore, we conditionally adjusted the dosage of NOACs strictly following the recommendations from updated guidelines and consensus (age, estimated glomerular filtration rate [eGFR], weight, history of bleeding, or the need to combine with a strong P-glycoprotein inhibitor or antiplatelet medicine) in all study subjects [13,14,31] . The individual lower-dosage NOAC medication plans in our very elderly patients with NVAF included dabigatran 110 mg twice daily, rivaroxaban 15 mg once daily, edoxaban 30 mg once daily, and apixaban 2.5 mg twice daily. From the results of our study, adjusted dosages of NOACs still effectively reduced stroke events and improved prognosis. Despite more comorbidities and higher clinical complexity, very elderly NVAF patients receiving NOACs experienced few strokes and bleeding events during the entire follow-up. Over the past decades, novel medications and therapies have been administered to elderly patients with NVAF. Research efforts [32–36] have increased recently to minimize OAC dosage or dosing frequency. In the ELDERCARE-AF trial [36] , very elderly NVAF patients were randomly assigned to receive edoxaban 15 mg or a placebo daily. The results suggested that lower-than-recommended-dosage NOACs might be a reasonable choice for very elderly NVAF patients at high risk of bleeding. Other study results [37,38] also support the view that a lower dosage of anticoagulants in very elderly patients with NVAF could become a therapeutic option for stroke prevention, especially for those at high risk of ischemic events. The results from our study also suggest that an appropriate dosage reduction of NOACs based on individualized risk assessment appears to be a promising approach for very elderly NVAF patients with high risks of both bleeding and stroke.
In the present study, different from previous publications [39,40] , VKA did not show clear benefits in stroke prevention or all-cause mortality in very elderly patients with NVAF ( Table 6 ). One of the most important reasons VKA was originally chosen is co-existing renal insufficiency (eGFR < 15 ml/min/1.73 m 2 ) in very elderly patients. Not surprisingly, those patients often had more comorbidities and a long home medication list. Medication noncompliance and a labile INR were more frequently found during their clinical follow-up. Medication interactions, labile INR, and no-shows at scheduled clinic visits worsened this condition. Our results suggest that well-controlled clinical trial results may not guarantee the practical medication utilization and compliance needed to achieve sufficient risk-reduction goals in real-world practice. Compared to VKA, NOACs have fewer adverse medication interactions and no need for INR monitoring, and have therefore become a more attractive medication choice for these very elderly patients with NVAF. Developing novel NOACs whose dosing can be readily adjusted for renal dysfunction may become a vital research focus in the future. 5 Study limitations The main limitations of this study are related to its retrospective nature and possible selection bias. We began to include patients in 2010. The CHADS2 score was used as the anticoagulation therapy indication until 2014; from 2014, we used updated guidelines with the CHA2DS2-VASc score. However, all patients in this study were evaluated for stroke risk using the CHA2DS2-VASc score, which may underestimate the anticoagulation rate in this study. To ensure the integrity of the risk factor assessment and other clinical data, we included only patients from a southern Chinese population in the Macau Special Administrative Region (Macau SAR) of China, whose genetic backgrounds may be homogeneous.
Meanwhile, due to the relatively low incidence of stroke events and major bleeding events, the results of our study warrant further validation by multi-center, prospective trials to further define the roles of OACs in a larger population of very elderly patients with diversity in ethnicity, gender, and age. 6 Conclusion Antithrombotic treatment (VKA and NOACs) reduces stroke and improves prognosis in patients in different age groups with NVAF. The prognostic benefits of NOACs outweigh their bleeding risks in very elderly patients with NVAF. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgment This study was funded by Grant (File no. 087/2015/A3) from The Science and Technology Development Fund, Macau SAR government (to Tou Kun Chong, MD). All authors have reported that they have no relationships relevant to the contents of this paper to disclose. Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.ijcha.2022.101009 . The following are the Supplementary data to this article: Supplementary data 1 Supplementary data 2 | [
"KOTALCZYK",
"PATTI",
"PATTI",
"EHRLINDER",
"CHENG",
"KAATZ",
"JANUARY",
"SAGRIS",
"DAYI",
"KANG",
"CONNOLLY",
"CHAO",
"SALIH",
"HIASA",
"FOODY",
"GIUGLIANO",
"PATEL",
"GRANGER",
"RUFF",
"CHANG",
"MARIACHIARA",
"LIP",
"DALGAARD",
"MELKONIAN",
"SANTOS",
"SHEN",
"KA... |
7446f00cf6274145a0d74e8acbe15391_Multifaceted chemical and bioactive features of AgTiO2 and AgSeO2 coreshell nanoparticles biosynthes_10.1016_j.heliyon.2024.e28359.xml | Multifaceted chemical and bioactive features of Ag@TiO2 and Ag@SeO2 core/shell nanoparticles biosynthesized using Beta vulgaris L. extract | [
"Elattar, Khaled M.",
"Al-Otibi, Fatimah O.",
"El-Hersh, Mohammed S.",
"Attia, Attia A.",
"Eldadamony, Noha M.",
"Elsayed, Ashraf",
"Menaa, Farid",
"Saber, WesamEldin I.A."
] | Due to increasing concerns about environmental impact and toxicity, developing green and sustainable methods for nanoparticle synthesis is attracting significant interest. This work reports the successful green synthesis of silver (Ag), silver-titanium dioxide (Ag@TiO2), and silver-selenium dioxide (Ag@SeO2) nanoparticles (NPs) using Beta vulgaris L. extract. Characterization by XRD, SEM, TEM, and EDX confirmed the successful formation of uniformly distributed spherical NPs with controlled size (25 ± 4.9 nm) and desired elemental composition. All synthesized NPs and the B. vulgaris extract exhibited potent free radical scavenging activity, indicating significant antioxidant potential. However, Ag@SeO2 displayed lower hemocompatibility compared to other NPs, while Ag@SeO2 and the extract demonstrated reduced inflammation in a carrageenan-induced paw edema animal model. Interestingly, Ag@TiO2 and Ag@SeO2 exhibited strong antifungal activity against Rhizoctonia solani and Sclerotia sclerotium, as evidenced by TEM and FTIR analyses. Generally, the findings suggest that B. vulgaris-derived NPs possess diverse biological activities with potential applications in various fields such as medicine and agriculture. Ag@TiO2 and Ag@SeO2, in particular, warrant further investigation for their potential as novel bioactive agents. | 1 Introduction The burgeoning field of nanotechnology, manipulating matter at the atomic and nanoscale, offers a revolutionary toolkit for precision medicine and diverse applications [ 1 , 2 ]. In healthcare, the prospect of targeted drug delivery using nanocarriers directly to afflicted cells ignites hope for enhanced efficacy and minimized side effects [ 3–6 ]. Nanobiosensors, with their exquisite sensitivity, hold promise for early-stage disease detection at the cellular level, paving the way for timely intervention [ 7 , 8 ]. 
Furthermore, the field paves the way for tissue regeneration, potentially enabling the repair and restoration of damaged structures [ 9 , 10 ]. However, the impact of nanotechnology transcends healthcare. In the realm of environmental remediation, engineered nanomaterials act as potent catalysts for pollutant degradation, offering solutions for a cleaner future [ 11–15 ]. The quest for sustainable energy finds allies in nanotechnology, with innovations such as efficient solar cells and novel energy storage solutions on the horizon [ 16 , 17 ]. Even water purification and disease diagnostics benefit from this technology, with nanofilters and biosensors poised to provide cleaner water and more accurate disease detection methods [ 18 , 19 ]. Moreover, nanotechnology revolutionizes materials science, offering lighter, stronger, and more durable options, while simultaneously contributing to the sustainable production of biofuels [ 20 , 21 ]. Recognizing the potential concerns surrounding the nascent technology, researchers are actively pursuing its responsible development, ensuring the "microscopic revolution" unfolds with safety and ethical considerations at its core [ 22 ]. Beta vulgaris L., also known as beetroot, table beet, garden beet, or simply beet, is a widely cultivated biennial plant, typically red (although available in yellow, white, or striped forms), belonging to the Amaranthaceae family [ 23 ]. Beyond its various health benefits [ 24 ], B. vulgaris offers a wealth of nutrients, including potassium, vitamin C, and folates, alongside non-nutritive components like dietary fibers and polyphenols. Remarkably, beetroot ranks among the ten most potent vegetables in terms of antioxidant capacity, attributed to its total phenolic content of 50–60 mmol/g dry weight [ 25 ].
This impressive antioxidant activity stems from a significant amount of phenolic compounds, including catechin hydrate, protocatechuic acid, epicatechin, ferulic acid, vanillic acid, p -hydroxybenzoic acid, p -coumaric acid, syringic acid, and caffeic acid [ 26 ]. Additionally, beetroot serves as a source of valuable water-soluble nitrogenous pigments called betalains, extensively used in the modern food industry [ 27 , 28 ]. B. vulgaris boasts a high content of antioxidants [ 29 ], shielding the body from free radical-induced cellular and molecular damage. This translates to a range of health benefits, including improved blood circulation [ 30 ], reduced inflammation [ 31 ], a potent cytotoxic effect [ 32 ], and a strengthened immune system [ 33 ]. Additionally, B. vulgaris exhibits diverse biological activities, encompassing anti-inflammatory [ 34 ], hepatoprotective [ 35 , 36 ], nephroprotective [ 36 , 37 ], cardiovascular protective [ 36 , 38 ], antidiabetic [ 39 ], antibacterial [ 40 ], and anticancer properties [ 41 , 42 ]. Green synthesis represents a preferred method for fabricating nanoparticles, utilizing natural materials such as plant, microbe, and algae extracts [ 43–45 ]. Compared to traditional chemical or physical approaches, this method offers enhanced eco-friendliness, cost-effectiveness, and safety [ 44 , 46 ]. The synthesis process involves reducing metal ions using biological components like polyphenols, which act as both reducing and stabilizing/capping agents, preventing nanoparticle aggregation [ 47 , 48 ]. Current research actively explores developing new and improved green synthesis methods, investigating their potential applications in various fields [ 49 , 50 ]. Mycotoxins, fungal metabolites commonly found on various plant parts, pose significant risks to humans and animals, making them major and unavoidable food contaminants [ 51 ].
These toxins, produced by contaminating fungal species like Sclerotia sclerotium , Rhizoctonia solani , and Macrophomina phaseolina , cause mycotoxicosis, a chronic disease with potentially detrimental effects [ 52–55 ]. Mycotoxins such as aflatoxins, ochratoxins, deoxynivalenol, zearalenone, and fumonisins can have various adverse impacts, including liver toxicity, immune system issues, kidney damage, and birth defects in both animals and humans [ 51 , 56 ]. This urgent threat to food safety necessitates the development of strategies to either inhibit or deactivate fungal contamination in food products. Hemolysis, the rupture of red blood cells leading to anemia, jaundice, and kidney failure, can be inhibited by various mechanisms. These include membrane stabilization, antioxidant activity, and modulation of specific pro-inflammatory pathways like NF-κB signaling or cytokine production. Anti-inflammatory effects can similarly work through diverse mechanisms, such as suppressing TNF-α and IL-6 signaling, modulating immune cell activity, offering antioxidant protection, and regulating gene expression [ 57–59 ]. Understanding these molecular mechanisms holds potential for developing novel therapeutic strategies for various hemolytic and inflammatory diseases, such as sickle cell anemia and arthritis. The promise of silver nanoparticles (Ag NPs) for antimicrobial applications comes with potential downsides. Researchers raise concerns about their toxicity through oxidative stress, inflammation, genotoxicity, and cytotoxicity [ 60 ]. In an attempt to enhance their functionality, combining Ag NPs with other materials like TiO 2 and SeO 2 in core-shell composites seems promising [ 61 ]. However, this introduces new safety challenges. These composites can pose additional toxicity risks due to ion release, ROS generation, and size-dependent effects [ 62–64 ].
Understanding these multifaceted toxicities is critical to harnessing the benefits of Ag NPs while ensuring their safe and responsible use in various applications. Following our previous work on the green synthesis of metallic NCs using B. vulgaris extract [ 43 ], this study explores the development and characterization of bimetallic Ag@TiO 2 and Ag@SeO 2 NCs through the same eco-friendly approach. We hypothesize that these bimetallic NCs exhibit enhanced biological activities compared to single-metal Ag NPs. We comprehensively investigate the chemical and biological properties of Ag, Ag@TiO 2 , and Ag@SeO 2 . Notably, this study pioneers the exploration of the antioxidant, antifungal, anti-hemolytic, and anti-inflammatory activities of these beetroot-derived metallic oxide NCs. These promising results pave the way for developing novel drugs to treat inflammation, fungal infections, and hemolysis. 2 Materials and methods 2.1 Materials ABTS and DMSO were purchased from Sigma Aldrich (St. Louis, USA). Potassium persulfate was purchased from Fluka (Biochemical Inc., Bucharest, Romania). Methanol was purchased from El-Nasr Pharmaceutical Chemicals (Cairo, Egypt). Silver nitrate (AgNO 3 ) was purchased from PIOCHEM for laboratory chemicals (CAS Number: 7761-88-8; purity: 99.5%). Selenium dioxide (SeO 2 ) and titanium dioxide (TiO 2 ) were purchased from Merck Schuchardt OHG (85662 Hohenbrunn, Germany). 2.2 Instruments Routine state-of-the-art technologies ( i.e ., UV–Vis spectrophotometry, SEM, TEM, XRD, and FTIR) were used. Absorbance readings for the various tests in this work were taken using a Spekol 11 spectrophotometer (Analytik Jena AG, Jena, Germany) with a wavelength range of 200–1100 nm. The topography, surface morphology, and elemental compositions of the nanomaterials were examined using SEM and energy dispersive X-ray (EDX) spectroscopy on a Czech FEI SEM-type instrument at an accelerating voltage of 25 kV.
TEM analysis of the nanomaterials was performed on a Thermo Scientific Talos F200i using carbon-coated grids (Type G 200, 3.05 mm diameter, TAAP, USA). In addition, a TEM-JEM2100 (JEOL, Japan) was used to analyze the fungal species treated and untreated with Ag@SeO 2 bmNPs. X-ray diffraction (XRD) analysis was performed on a PANalytical (Philips) instrument. Fourier-transform infrared spectroscopy (FTIR) analysis was performed using a Thermo Fisher Nicolet IS10 (USA) spectrophotometer. 2.3 Preparation of plant extract and green synthesis of the nanomaterials Preparation of B. vulgaris extract: Fresh B. vulgaris plant material was washed, sliced, and dried. Distilled water was added to the dried plant material at a ratio of 1:10 (w/v), and the material was soaked at 25 °C overnight. The mixture was then filtered through Whatman No. 1 filter paper and used immediately for subsequent experiments. Synthesis of the nanoparticles: Separately, silver nanoparticles were synthesized using the B. vulgaris extract following a previously reported protocol [ 43 ]. Briefly, silver nitrate (AgNO 3 ) and titanium dioxide (TiO 2 ) or selenium dioxide (SeO 2 ) were added to the plant extract at a ratio of 5:1. The mixture was heated and stirred until a color change was observed (within 5–15 min), indicating nanoparticle formation. The nanomaterials were then purified by centrifugation (10,000 rpm, 30 °C) and washed three times with 70% ethanol [ 65 ]. 2.4 ABTS assay The established ABTS method [ 66 ] was chosen to evaluate the antioxidant potential of the samples. The ABTS radical solution was prepared by mixing equal volumes (1:1 v/v) of ABTS solution (7 mM) and potassium persulfate solution (2.45 mM). The mixture was then incubated in the dark at room temperature for 12–16 h. Subsequently, it was diluted to an absorbance of 0.700 at 734 nm. Individual solutions of the plant extract, Ag NPs, and bimetallic NCs were prepared at varying concentrations.
An equal volume of the diluted ABTS solution was then added to each sample tube. These mixtures were kept in the dark at 25 °C for 30 min before measuring their absorbance at 734 nm. Finally, the antioxidant activity was calculated using Equation (1): (1) Radical scavenging activity (%) = ((Ac − At) / Ac) × 100, where 'Ac' is the absorbance of the ABTS radical solution without the antioxidant sample and 'At' is the absorbance of the mixture of the ABTS radical solution and the antioxidant sample. The IC 50 values were calculated from a non-linear regression curve fitted to the percentage of radical scavenging activity versus the sample concentrations. The IC 50 values (μg/mL) were expressed as mean ± standard deviation (SD); all tests were run in triplicate. 2.5 Animal model and housing The study employed male Wistar rats, six weeks old and weighing an average of 169 ± 9.4 g. Prior to experimentation, the rats were provided a one-week acclimation period to their new environment. Throughout the study, they had ad libitum access to rodent chow and tap water. Housing conditions were meticulously controlled, maintaining a temperature of 24 ± 1 °C, humidity of 50 ± 10%, and a 12-h light/12-h dark cycle. All procedures adhered to the guidelines and received approval from the Animal Care and Use Committee (MU-ACUC) of Mansoura University in Egypt. 2.5.1 Anti-hemolytic assay An anti-hemolytic assay [ 67 ] was conducted using blood collected from healthy rats via cardiac puncture into heparinized tubes. Plasma was separated from the erythrocytes by centrifugation. The remaining buffy coat was thoroughly washed three times with sterile saline solution (0.89% w/v NaCl, pyrogen-free) using 10 times the volume of the buffy coat in each wash to eliminate any residual plasma. Following each wash, the erythrocytes were centrifuged at 2500 rpm for 10 min to ensure a consistent preparation. The tested materials, B.
vulgaris extract (57.7 mg/mL), Ag NPs (15.66 mg/mL), Ag@TiO 2 NC (21.62 mg/mL), and Ag@SeO 2 NC (35.16 mg/mL), were individually added to a 10% erythrocytic suspension in phosphate-buffered saline (1X, pH 7.4). All samples were incubated at 37 °C for 45 min. Control groups included a negative control, containing only saline solution and erythrocyte suspension without any test material, and a positive control, containing distilled water to induce maximum hemolysis. The erythrocyte suspension was centrifuged to separate the intact cells from the lysed cells present in the supernatant. Hemoglobin, the red pigment within erythrocytes, serves as a marker for cell lysis. Therefore, the extent of hemolysis was determined by measuring the absorbance of the supernatant at 540 nm, which corresponds to the absorption wavelength of hemoglobin. To account for background hemolysis, the percentage of hemolysis in the saline control group was subtracted from all other groups. The final percentage of hemolysis for each sample was calculated using Equation (2): (2) Hemolysis (%) = (Absorbance of sample / Absorbance of water control) × 100. 2.5.2 Carrageenan-induced paw edema The paw edema protocol followed Morris [ 68 ]. Carrageenan (1%) was prepared in sterile saline, heated to 90 °C without boiling until fully dissolved, and cooled to room temperature. Selected rats received a subplantar injection of 0.1 mL carrageenan solution into the right hind paw. Tested materials were administered via intraperitoneal injection. Paw thickness was measured with a vernier caliper for each rat at baseline (before carrageenan injection) and 0.5, 1, 2, 3, 4, and 5 h after administering the following materials (57.7 mg/mL B . vulgaris extract, 15.66 mg/mL Ag mNPs, 21.62 mg/mL Ag@TiO 2 bmNPs, and 35.16 mg/mL Ag@SeO 2 bmNPs). Additionally, a control group received indomethacin for comparison. Paw edema was calculated by subtracting the baseline paw thickness from each subsequent measurement.
Data were then compared between treated and control groups. The percentage of protection in anti-inflammatory activity was determined using Equation (3): (3) Protection (%) = (1 − (mean of treated group / mean of control group)) × 100, where 'control' is the average anti-inflammatory activity measurement for the group of rats not treated with the test compound and 'treated' is the average anti-inflammatory activity of the treated group. 2.6 Antifungal activity Two aggressive plant fungal pathogens, namely Rhizoctonia solani (ARC-NW23) and Sclerotinia sclerotium (ARC-NW35), were obtained from the Seed Pathology Research Department, Plant Pathology Research Institute, Agricultural Research Centre, Giza, Egypt. They were utilized as microbial models for evaluating the potential antifungal activity exerted by the biosynthesized nanomaterials. MIC determination of nanomaterials against plant fungal pathogens: The minimum inhibitory concentration (MIC) of the green Ag NPs, Ag@TiO 2 NC, and Ag@SeO 2 NC (150–1860 μg/mL) against the plant fungal pathogens was evaluated using a broth microdilution assay [ 69 ]. Briefly, sterilized flasks containing potato dextrose agar (PDA) medium were prepared, and varying concentrations of each nanomaterial were individually incorporated into the molten agar before pouring into Petri dishes. After solidification, a 1 mm diameter disk of each previously grown fungus was individually inoculated onto the center of each plate. Incubation conditions were optimized for each pathogen: R. solani (ARC-NW23) was incubated at 25 °C for 5 days, while S. sclerotium (ARC-NW35) was incubated at 18 °C for 7 days. Fungal growth was monitored daily. The MIC was determined as the lowest concentration of nanomaterial that completely inhibited visible fungal growth compared to a control plate containing PDA medium without any nanomaterial. 2.7 Statistical analysis Experiments were performed in triplicate for each condition.
Data were analyzed using IBM SPSS Statistics version 26 (Armonk, NY, USA). All results are presented as mean ± standard deviation (SD) from at least three independent experiments. 3 Results and discussion 3.1 Physical characterizations of beetroot-mediated metallic nanomaterials 3.1.1 Ultraviolet–visible spectrophotometry (UV–Vis) B. vulgaris L. aqueous extract successfully mediated the synthesis of silver (Ag mNPs), silver–selenium dioxide (Ag@SeO 2 bmNPs), and silver–titanium dioxide (Ag@TiO 2 bmNPs) nanoparticles. The initial reddish-brown color of the extract changed to brown for Ag-mNPs, yellow for Ag@SeO 2 bmNPs, and gray for Ag@TiO 2 bmNPs, visually indicating nanoparticle formation ( Fig. 1 ). UV–visible spectroscopy confirmed this, revealing distinct shifts in peak wavelengths due to interactions between metal ions and plant extract components. Ag-mNPs displayed a slightly higher surface plasmon resonance (SPR) peak than expected, suggesting possible aggregation or interaction with plant components. Ag@SeO 2 bmNPs exhibited multiple peaks, while Ag@TiO 2 bmNPs showed a unique peak, indicating differing compositions and interactions. All nanoparticles retained residual absorption peaks at 245–247 nm, likely due to residual plant extract components. The maximum absorption peaks were recorded at λ max ca. 598.0 nm for Ag-mNPs, λ max ca. 428.0 nm for Ag@SeO 2 bmNPs (combined absorption), and λ max ca. 675.0 nm for Ag@TiO 2 bmNPs (combined absorption of Ag and TiO 2 ) [ 42 ]. Different plant extracts have different phytochemical compositions, and the biomolecules in these extracts act as reducing and capping agents during the formation of NPs [ 44 , 70 ]. 3.1.2 Transmission electron microscopy (TEM) Transmission electron microscopy (TEM) offers valuable insights into the size, size distribution, and shape/morphology of nanoparticles (NPs) [ 6 ]. Additionally, TEM can reveal clues about NP interactions, such as aggregation or core-shell structure formation [ 71 ].
In this study, TEM analysis of AgNPs, Ag@TiO 2 NC, and Ag@SeO 2 NC prepared from B. vulgaris aqueous extract demonstrated well-dispersed and uniform spherical NPs with diameters ranging from 10 to 50 nm ( Fig. 2 ). TEM images of AgNPs specifically revealed typically spherical nanoparticles with sizes between 5 and 20 nm, exhibiting minimal to no aggregation. However, controlling the precise size, size distribution, and shape of these NPs proved challenging ( Fig. 2 a–c). Notably, aggregation is known to decrease the surface area and catalytic activity of AgNPs [ 72 ]. These observations align with previous research demonstrating that lattice fringes arise from the diffraction of the TEM beam by the AgNPs' crystal lattice, with the spacing of these fringes corresponding to the AgNPs' lattice constant [ 42 ]. In certain cases, AgNPs may exhibit a distorted crystal structure, potentially due to factors such as the presence of surfactant molecules during synthesis or the NPs' small size [ 42 , 72 ]. The small size and large surface area of silver nanoparticles (AgNPs) contribute to their high surface energy [ 73 ]. This, in turn, makes them attractive to each other, leading to aggregation. The surface tension of AgNPs can also contribute to aggregation. Additionally, the inherent surface tension of AgNPs, which is more significant for smaller nanoparticles, further promotes aggregation [ 73 ]. TEM analysis of the Ag@TiO 2 core/shell magnetic NC revealed sizes ranging from 14.59 to 21.48 nm with a uniform distribution of smaller AgNPs (5–10 nm) decorating the TiO 2 core ( Fig. 2 d–f). Compared to AgNPs, these Ag@TiO 2 NCs displayed significantly less aggregation. While their larger size might suggest a higher surface energy, their uniform size distribution, and spherical shape mitigate this by minimizing their overall surface energy, leading to greater stability and reduced aggregation tendency. 
In this composite, the Ag core provides catalytic activity, while the TiO 2 shell shields the silver from oxidation and aggregation [ 74 ]. Similarly, TEM images of Ag@SeO 2 core/shell magnetic NC showed spherical NPs with diameters ranging from 10 to 20 nm, again featuring a uniform distribution of smaller AgNPs (5–10 nm) on the SeO 2 core ( Fig. 2 g–i). The nanoscale of these materials plays a crucial role in their biological and pharmacological activities. Their high surface area-to-volume ratio amplifies their catalytic and optical properties [ 75 ]. The antifungal activity of metallic/bimetallic NPs is strongly influenced by their morphology, particularly size and shape [ 76 ]. Larger NPs often possess a greater surface area, facilitating interactions with bacterial cell walls and membranes. This enhanced interaction can disrupt cell membranes, leading to leakage of cellular contents and ultimately, cell death [ 77 ]. For Ag@SeO 2 NPs, the characteristic "crystal lattice fingerprint" acts as a unique identifier for this specific material. Notably, the low level of aggregation observed in Ag@SeO 2 NPs is advantageous as it prevents the formation of large clumps that could hinder reactivity and reduce surface area [ 78 ]. Interestingly, AgNPs appear smaller (5–10 nm) when uniformly distributed on the surface of TiO 2 or SeO 2 NPs compared to their usual size range (5–20 nm). This suggests that at least some AgNPs may be partially embedded within the TiO 2 and SeO 2 matrix. Such interactions with other materials can indeed affect the size, shape, and surface area of NPs, with AgNPs potentially becoming partially encapsulated within the matrix of materials like TiO 2 or SeO 2 , leading to a perceived decrease in size [ 79 ]. 3.1.3 Scanning electron microscopy (SEM) SEM was implemented to examine the shape/morphology, PS, and PSD of biosynthesized nanomaterials [ 6 ]. 
SEM micrographs were obtained for AgNPs, Ag@TiO 2 NC, and Ag@SeO 2 NC in the reaction medium ( Fig. 3 A–C). AgNPs ( Fig. 3 A) appear predominantly spherical, with sizes ranging from 20 to 50 nm. The nanoparticles are densely packed and evenly distributed across the substrate surface. Based on the SEM analysis, the estimated particle size of AgNPs falls within a range of 5–10 nm. SEM analysis of Ag@TiO 2 core/shell magnetic NPs ( Fig. 3 B) reveals spherical particles with a size range of 10–20 nm. Notably, the AgNPs, appearing smaller (5–10 nm), are uniformly distributed and likely partially embedded within the TiO 2 NPs. This observation suggests a porous TiO 2 shell, contributing to the rough surface texture visible in the image [ 80 ]. Consistent with expectations, the darker core corresponds to the silver component, while the lighter shell represents the TiO 2 [ 81 ]. Similarly, Ag@SeO 2 core/shell magnetic NPs ( Fig. 3 C) exhibit a spherical morphology with sizes ranging from 10 to 20 nm. As observed for Ag@TiO 2 , smaller AgNPs (5–10 nm) are well-dispersed on the SeO 2 NP surface, suggesting potential partial embedding. 3.1.4 Energy-dispersive X-ray spectroscopy (EDX) Energy-dispersive X-ray spectroscopy (EDX) serves as an indispensable tool in the arsenal of material characterization techniques, offering a non-destructive means to interrogate the elemental composition of diverse materials, including nanoparticles (NPs). Its capabilities extend beyond mere elemental identification, encompassing critical functionalities that drive scientific understanding [ 6 ]. The resulting data can be used to understand the structure and properties of the materials and signify potential applications [ 72 ]. EDX analysis was performed on AgNPs, Ag@TiO 2 nanocomposites (NCs), and Ag@SeO 2 NCs ( Fig. 4 A–C, Table S1 ). Carbon and oxygen were detected in all three samples, likely originating from organic matter like biomolecules [ 82 ].
The high silver (Ag) content in all samples confirms it as the primary element ( Fig. 4 A–C). Examining the relative abundance of elements reveals the highest Ag content in AgNPs, followed by carbon (C) and oxygen (O). This expected outcome confirms the successful green synthesis of AgNPs ( Fig. 4 A). The presence of titanium (Ti) and selenium (Se) in Ag@TiO 2 NC ( Fig. 4 B) and Ag@SeO 2 NC ( Fig. 4 C), respectively, confirms the conjugation of Ag NPs with TiO 2 NPs and SeO 2 NPs to form bimetallic NCs. Notably, Ti and Se rank as the second most abundant elements after Ag in these NCs. In the Ag@SeO 2 NC, the unexpected presence of chlorine (Cl) necessitates further investigation despite its absence in controls. Potential explanations include limitations in EDX sensitivity, masking by other elements, surface binding of chlorides with a stabilizing effect, or contamination within the SeO 2 precursor. Further studies are warranted to definitively determine the source and significance of chlorine in this unique nanomaterial. 3.1.5 X-ray diffraction (XRD) spectroscopy X-ray diffraction (XRD) analysis offers a robust technique for identifying the crystalline phases present in a sample, determining their relative abundance, and investigating phase transformations in nanoparticles (NPs) [ 6 ]. XRD data were acquired for AgNPs, Ag@TiO 2 NC, and Ag@SeO 2 NC ( Fig. 5 A–C, and Tables S2–S4 ). The XRD pattern of AgNPs shows several distinct peaks at 2θ values of 27.1430°, 31.4940°, 37.5498°, 43.5437°, 45.5153°, 54.1567°, 56.8317°, 63.8983°, and 85.1594° ( Fig. 5 A, and Table S2 ). These characteristic peaks align with the face-centered cubic (fcc) structure of silver, as confirmed by the calculated lattice parameter (Equation (4)): (4) a = 4.0878 Å, where 'a' is the lattice constant of the crystal and the unit 'Å' (angstrom) is a unit of length equal to 10 −10 m.
The lattice constant, essentially the length of one side of the repeating unit cell, reflects the fundamental building block size in a crystal structure. In this case, the calculated value of 4.0878 Å defines the unit cell size for the silver crystal structure. This fundamental parameter significantly influences the overall crystal properties. The observed peaks correspond to specific planes within the face-centered cubic (fcc) silver crystal structure: (111), (200), (220), (311), (222), (400), (420), (422), and (511). Notably, the (200) plane exhibits the highest peak intensity, corresponding to 31.4940° 2θ. The calculated lattice parameter of 4.0878 Å closely matches the standard value for fcc silver (4.0862 Å). This excellent agreement reinforces the accuracy of the XRD analysis. Additionally, similar lattice parameters for AgNPs synthesized via various methods have been reported in other studies. For example, a coprecipitation method yielded a lattice parameter of 4.0868 Å for AgNPs, and another study reported a range of 4.0860–4.0870 Å for AgNPs synthesized using different techniques [ 46 , 83 ]. The XRD pattern of Ag@TiO 2 NC displays peaks at various 2θ positions ( Fig. 5 B, and Table S3 ). The most intense peak at 31.7706° 2θ corresponds to the (101) plane of anatase TiO 2 , signifying its presence as the primary phase. However, additional peaks suggest the presence of other phases within the NC. Peaks at 5.3757° and 25.2792° 2θ arise from the (101) and (004) planes of anatase TiO 2 , respectively. Additionally, a peak at 47.5269° 2θ corresponds to the (111) plane of metallic silver, confirming its incorporation into the NC. Weaker peaks might be attributed to minor phases like rutile TiO 2 and silver compounds not classified as strictly "metallic" Ag. Overall, the XRD analysis indicates the Ag@TiO 2 NC comprises a combination of anatase and rutile TiO 2 phases alongside metallic silver. 
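The lattice-parameter consistency check discussed above follows directly from Bragg's law for a cubic lattice, a = λ·sqrt(h² + k² + l²) / (2 sin θ). A minimal sketch is given below; the function name is illustrative, and Cu Kα radiation (λ = 1.5406 Å) is assumed, since the diffractometer wavelength is not stated in the text.

```python
import math

def lattice_constant_fcc(two_theta_deg, hkl, wavelength=1.5406):
    """Cubic lattice constant from a single Bragg reflection:
    a = lambda * sqrt(h^2 + k^2 + l^2) / (2 * sin(theta)).

    two_theta_deg -- measured peak position in degrees (2-theta)
    hkl           -- Miller indices of the assigned plane, e.g. (1, 1, 1)
    wavelength    -- X-ray wavelength in angstroms (Cu K-alpha assumed)
    """
    h, k, l = hkl
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle is half of 2-theta
    return wavelength * math.sqrt(h * h + k * k + l * l) / (2.0 * math.sin(theta))

# Example: for fcc Ag (a ~ 4.09 Å), the (111) reflection sits near 38.1° 2-theta
# under Cu K-alpha, and the computed constant should land close to that value.
a_estimate = lattice_constant_fcc(38.1, (1, 1, 1))
```

Averaging this estimate over all indexed reflections is what yields the single reported lattice parameter.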
Several factors can influence the presence of these phases, including the synthesis method, doping concentration, and annealing temperature. Further analysis can provide more specific information on the relative proportion of each phase and their impact on the overall material properties. The XRD pattern of Ag@SeO 2 NC shows distinct peaks at specific 2θ positions ( Fig. 5 C, and Table S4 ). Peaks at 27.3359°, 31.6362°, and 45.6174° 2θ correspond to the (111), (200), and (113) planes of fcc silver, confirming its presence within the NC. However, other peaks suggest the existence of additional phases beyond metallic silver. Peaks at 37.5313°, 54.1235°, and 63.9025° 2θ might indicate the presence of silver compounds, potentially including Ag 2 SeO 3 . The remaining peaks likely arise from minor phases like SeO 2 and other possible silver compounds, not strictly classified as "metallic" Ag. In summary, the XRD analysis reveals that the Ag@SeO 2 NC comprises a combination of metallic silver and other silver-containing compounds alongside SeO 2 . Further analysis could provide more insight into the specific identities and relative proportions of these phases, potentially influencing the overall properties of the nanomaterial. The XRD patterns of AgNPs, Ag@TiO 2 NC, and Ag@SeO 2 NC all show "metallic" silver phases. However, these patterns reveal further details about their composition: Ag@TiO 2 NC shows peaks corresponding to anatase TiO 2 , while the XRD pattern of Ag@SeO 2 NC shows peaks corresponding to other silver compounds, such as AgSeO 3 and Ag 2 SeO 3 . Frequently, the XRD analysis of AgNPs, Ag@TiO 2 NC, and Ag@SeO 2 NC reveals that the NPs are composed of ‘metallic’ Silver and other phases such as TiO 2 and SeO 2 [ 84 ]. 
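The crystallite-size estimate applied below (the Scherrer equation, Equation (5)) reduces to a one-line computation once the peak position and width are known. The sketch below uses an illustrative function name, assumes Cu Kα radiation (0.15406 nm), and uses a hypothetical FWHM of 0.42°, since neither the wavelength nor the measured peak width is stated in the text.

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Scherrer equation: D = K * lambda / (beta * cos(theta)).

    two_theta_deg -- peak position in degrees (2-theta)
    fwhm_deg      -- full width at half maximum of the peak, in degrees
                     (converted to radians, as the equation requires)
    wavelength_nm -- X-ray wavelength in nm (Cu K-alpha assumed)
    k             -- Scherrer constant (typically 0.9)
    """
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle from 2-theta
    beta = math.radians(fwhm_deg)              # FWHM in radians
    return k * wavelength_nm / (beta * math.cos(theta))

# With the dominant Ag@TiO2 peak at 31.7706° 2-theta and the assumed
# 0.42° FWHM, the estimate comes out near the ~20 nm reported in the text.
size_nm = scherrer_size_nm(31.7706, 0.42)
```

Note the two unit conversions: the FWHM must be in radians, and θ is half the measured 2θ value; omitting either is a common source of wildly wrong sizes.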
Furthermore, XRD analysis can estimate the crystal size of the Ag@TiO 2 NC using the Scherrer equation (Equation (5)): (5) D = Kλ / (β cos θ), where D is the crystal size in nanometers, K is the Scherrer constant (typically 0.9), λ is the wavelength of the X-rays, β is the full width at half maximum intensity (FWHM) of the peak in radians (a common measure of the spread of a peak), and θ is the Bragg angle. The FWHM of the peak at 31.7706° 2θ can be calculated using Equation (6), which describes a relationship between the FWHM of a peak and its θ: (6) FWHM = 2θ / sqrt(3), where sqrt(3) is the square root of 3 (≈1.732), serving as a scaling factor in the equation. Substituting the measured values of K, λ, β, and θ (Bragg angle) into the Scherrer equation (Equation (5)) revealed a crystal size of approximately 20 nm for both Ag@TiO 2 NC and Ag@SeO 2 NC. This finding reinforces the earlier observation that the XRD patterns of these nanomaterials contained peaks corresponding to metallic silver. However, the XRD analysis of Ag@TiO 2 NC yielded further insights beyond metallic silver. Peaks indicative of both anatase and rutile TiO 2 phases were also identified, suggesting the nanoparticles are not solely composed of silver but rather a composite structure combining anatase TiO 2 , rutile TiO 2 , and metallic silver. 3.2 Antioxidant activity The ABTS assay relies on the ability of antioxidants to scavenge the ABTS radical. This blue-green radical's characteristic color fades upon interaction with antioxidants, allowing for the measurement of antioxidant activity. Compared to the DPPH method, ABTS offers the advantage of consistent reproducibility across different pH values [ 66 ]. The antioxidant activity of beetroot extract, vitamin C (positive control), and biosynthesized nanomaterials was evaluated using varying concentrations ranging from 0 to 100 μg/mL.
The results are presented in Fig. 6 A and B and Table S5 . Fig. 6 A demonstrates a direct correlation between sample concentration and percent scavenging activity. Among the tested samples at 100 μg/mL, vitamin C exhibited the strongest activity (94.3%), followed by the plant extract (73.3%), AgNPs (67.8%), Ag@TiO 2 NC (53.6%), and Ag@SeO 2 NC (53.2%). Between 40 and 100 μg/mL, the plant extract displayed the highest scavenging activity, followed by AgNPs and then both Ag@TiO 2 NC and Ag@SeO 2 NC with similar activity levels. Interestingly, at 40 μg/mL, both Ag@TiO 2 NC and Ag@SeO 2 NC (32.9% and 32.3%, respectively) showed higher activity than AgNPs (31.7%). At the lower concentrations of 10 and 20 μg/mL, Ag@SeO 2 NC emerged as the most potent scavenger among all samples, with 21.2% and 27% activity, respectively. All tested samples demonstrated antioxidant activity, but the potency varied across materials. Notably, the B. vulgaris extract displayed the strongest activity among the test materials, with an IC 50 value of 52.57 μg/mL, roughly twice that of ascorbic acid (vitamin C). AgNPs exhibited moderate activity (IC 50 = 63.53 μg/mL), followed by Ag@TiO 2 NC (IC 50 = 96.86 μg/mL) and Ag@SeO 2 NC (IC 50 = 109.5 μg/mL). The potent antioxidant activity of the plant extract likely stems from the presence of phytochemicals like polyphenols and flavonoids, as well as various nutrients like vitamin C, known for scavenging free radicals such as reactive oxygen species (ROS) and reactive nitrogen species [ 85 ]. Interestingly, despite potentially utilizing phytochemicals in their synthesis, the overall antioxidant activity of the nanomaterials remained lower than that of the plant extract. This could be attributed to various factors, including the possible oxidation of phytochemicals during synthesis [ 86 ] or the aggregation of nanoparticles, which reduces their surface area and free radical scavenging effectiveness [ 87 ].
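The scavenging percentages and IC 50 values above come from Equation (1) in Section 2.4. A minimal sketch follows; function names and data points are illustrative, and where the paper fits a non-linear regression curve, the sketch simply brackets 50% activity by linear interpolation.

```python
def radical_scavenging_pct(abs_control, abs_sample):
    """Equation (1): percent inhibition of the ABTS radical.

    abs_control -- absorbance of the ABTS solution alone (Ac, at 734 nm)
    abs_sample  -- absorbance after adding the antioxidant (At)
    """
    return (abs_control - abs_sample) / abs_control * 100.0

def ic50_linear(concs, activities):
    """Crude IC50 estimate: linearly interpolate between the two
    concentrations that bracket 50 % scavenging (the paper uses a
    non-linear regression fit instead)."""
    points = list(zip(concs, activities))
    for (c1, a1), (c2, a2) in zip(points, points[1:]):
        if a1 <= 50.0 <= a2:
            return c1 + (50.0 - a1) * (c2 - c1) / (a2 - a1)
    raise ValueError("50 % activity is not bracketed by the data")

# Example with Ac = 0.700 (the assay's starting absorbance) and a
# hypothetical sample absorbance of 0.350, i.e. 50 % scavenging:
pct = radical_scavenging_pct(0.700, 0.350)
```

The interpolation only works on data that actually crosses 50%; dose-response series that plateau below 50% need the full regression fit.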
3.3 Anti-hemolytic activity Anti-hemolytic activity refers to a material's ability to protect red blood cells (RBCs) from rupturing (lysis). Here, we investigated this activity using hemolysis assays with healthy rat RBCs [ 6 , 12 ]. Hemoglobin (Hb), a red pigment absorbing light at 540 nm, is released from lysed RBCs. Hemolysis activity is assessed by measuring the solution's absorbance at this wavelength. We compared the hemolysis activity of beetroot extract, AgNPs, Ag@TiO 2 NC, and Ag@SeO 2 NC to vitamin C, a known hemocompatible substance ( Table 1 ). Notably, the plant extract and vitamin C showed very similar hemolysis activity, both below 5%. This suggests that the plant extract exhibits anti-hemolytic activity and could be safely administered to humans [ 6 , 12 ]. In contrast, the hemolysis rates of AgNPs (10.97%), Ag@TiO 2 NC (15.77%), and Ag@SeO 2 NC (56.22%) were approximately 2, 3, and 11 times the 5% safety limit, respectively. This indicates that these nanomaterials are hemotoxic (toxic to blood cells). Furthermore, the significantly higher hemolysis rates of Ag@TiO 2 NC and Ag@SeO 2 NC compared to AgNPs suggest that the bimetallic oxides are more hemotoxic. Notably, a higher hemolysis rate (%) translates to a greater percentage of lysed RBCs [ 88 ]. These findings strongly suggest that SeO 2 NPs are not hemocompatible, unlike TiO 2 NPs, based on the differential hemolytic effects observed between each nanomaterial and AgNPs. Evaluating the hemocompatibility of new drugs, particularly antibiotics, chemotherapeutics, nanodrugs, and oxide nanomaterials, is crucial to differentiate their specific hemolytic effects from other factors contributing to anemia. This differentiation is vital as several conditions and medical interventions can trigger red blood cell (RBC) destruction, including hemolytic anemia, infections, sickle cell anemia, thalassemias, bone marrow aplasia, blood transfusions, and mechanical heart valves [ 89 , 90 ].
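The hemolysis percentages above follow Equation (2) from Section 2.5.1, with the saline group's background hemolysis subtracted from every sample. A minimal sketch, with illustrative function names and absorbance values:

```python
def hemolysis_pct(abs_sample, abs_water_control):
    """Equation (2): hemolysis relative to the distilled-water control,
    which represents 100 % lysis (absorbance read at 540 nm)."""
    return abs_sample / abs_water_control * 100.0

def corrected_hemolysis_pct(abs_sample, abs_saline_control, abs_water_control):
    """Background correction described in the methods: subtract the
    saline group's hemolysis percentage from the sample's."""
    return (hemolysis_pct(abs_sample, abs_water_control)
            - hemolysis_pct(abs_saline_control, abs_water_control))

# Hypothetical readings: water control 1.0, saline background 0.02,
# sample 0.13 -> corrected hemolysis of 11 %, above the 5 % safety limit.
corrected = corrected_hemolysis_pct(0.13, 0.02, 1.0)
```

Keeping the correction explicit matters here, since spontaneous lysis in the saline control would otherwise inflate every reported value by the same offset.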
Increased hemolysis (measured as % hemolysis) can lead to several complications, such as anemia, organ damage (including kidney and heart failure), jaundice, and gallstones (formed when bilirubin is concentrated in the gallbladder) [ 89 ]. The anti-hemolytic assay investigates various mechanisms thought to protect RBCs from damage and lysis. One significant pathway involves antioxidants like vitamin C, which scavenge free radicals and reduce oxidative stress [ 67 ]. 3.4 Anti-inflammatory activity The carrageenan-induced paw edema assay, known for its simplicity and reliability, offers a valuable tool for evaluating the anti-inflammatory activity of compounds and drugs [ 91 ]. This method leverages the inflammatory response triggered by carrageenan, a seaweed extract, in rat paws, with the expectation that anti-inflammatory agents can mitigate this response [ 92 ]. Fig. 7 and Table S6 present the anti-inflammatory activity results, showcasing the protective effect of each sample at various time points. The B. vulgaris extract provided the highest mean protection (27.86%), followed closely by the reference drug indomethacin (24.55%). Meanwhile, Ag NPs, Ag@TiO 2 , and Ag@SeO 2 demonstrated lower, but still noticeable, anti-inflammatory effects with mean protections of 22.73%, 19.66%, and 5.15%, respectively. In conclusion, this study confirms the potent anti-inflammatory properties of both indomethacin and the B. vulgaris extract. While Ag NPs, Ag@TiO 2 , and Ag@SeO 2 also exhibited anti-inflammatory activity, their efficacy was noticeably lower compared to the leading performers. One possible explanation for the superior anti-inflammatory activity of B. vulgaris extract is that it is more effective at inhibiting the production of pro-inflammatory cytokines. Pro-inflammatory cytokines are signaling molecules that play a key role in the inflammatory response. By inhibiting the generation of pro-inflammatory cytokines, B.
vulgaris extract can reduce inflammation and its associated symptoms. Another possibility is that B. vulgaris extract is more effective at promoting the production of anti-inflammatory cytokines. Anti-inflammatory cytokines are signaling molecules that counteract the effects of pro-inflammatory cytokines and help to resolve inflammation. By promoting the production of anti-inflammatory cytokines, B. vulgaris extract can help to speed up the recovery process from inflammation. It is also possible that B. vulgaris L. extract has other mechanisms of action that contribute to its anti-inflammatory activity. For example, it may be able to reduce oxidative stress, which is a major contributor to inflammation, or it may be able to improve the function of the immune system, which can help to fight off infection and reduce inflammation. Generally, the anti-inflammatory data are very promising and suggest that B. vulgaris L. extract may be an effective treatment for a variety of inflammatory conditions. Metal nanoparticles exert anti-inflammatory activity through a variety of mechanisms [ 93 ]. They can inhibit the production of pro-inflammatory cytokines [ 94 , 95 ], promote the production of anti-inflammatory cytokines [ 96 ], modulate the function of immune cells, and scavenge reactive oxygen species [ 97 ]. The specific mechanism of action varies depending on the type, size, shape, and surface chemistry of the nanoparticles. In the context of the plant extract, it is possible that the active components are more readily accessible and can interact more effectively with cellular targets compared to the metallic nanoparticles. Additionally, the plant extract may contain a variety of compounds with synergistic effects, contributing to its overall anti-inflammatory activity. Overall, the results of this study suggest that these agents have the potential to be effective treatments for a variety of inflammatory conditions.
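The mean-protection values discussed above derive from Equation (3) in Section 2.5.2, applied to group-mean edema measurements. A minimal sketch with illustrative names and made-up edema values:

```python
def protection_pct(mean_treated, mean_control):
    """Equation (3): percent protection against carrageenan-induced
    paw edema, from the group-mean edema of treated vs. control rats."""
    return (1.0 - mean_treated / mean_control) * 100.0

def mean_edema(thicknesses, baseline):
    """Edema per the methods: each post-injection paw thickness minus
    the baseline thickness, averaged over the time points."""
    deltas = [t - baseline for t in thicknesses]
    return sum(deltas) / len(deltas)

# Hypothetical paw-thickness series (mm): baseline 3.0, then readings
# at 0.5-5 h for a treated and an untreated (control) animal.
treated_edema = mean_edema([3.4, 3.6, 3.7, 3.6, 3.5, 3.4], baseline=3.0)
control_edema = mean_edema([3.6, 3.9, 4.0, 4.0, 3.9, 3.8], baseline=3.0)
protection = protection_pct(treated_edema, control_edema)
```

A protection of 0% means the treated group swelled as much as the controls; values approaching 100% mean edema was almost fully suppressed.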
3.5 Antifungal activity 3.5.1 Radial fungal growth The antifungal activity was evaluated against two targeted pathogenic plant fungi: Rhizoctonia solani and Sclerotinia sclerotium . The MIC of each nanomaterial was determined for both fungi ( Table 2 ). Against R. solani , the MIC values were 422, 624, and 930 μg/mL for AgNPs, Ag@TiO 2 NC, and Ag@SeO 2 NC, respectively. This indicates that AgNPs and Ag@TiO 2 NC were approximately 2.2 and 1.5 times as effective, respectively, as Ag@SeO 2 NC in inhibiting this fungus. For S. sclerotium , the MIC values were 211, 314, and 1390 μg/mL for AgNPs, Ag@TiO 2 NC, and Ag@SeO 2 NC, respectively; both AgNPs and Ag@TiO 2 NC were thus several-fold more effective than Ag@SeO 2 NC against this fungus. Furthermore, Ag@SeO 2 NC exhibited concentration-dependent antifungal activity. At 422 and 930 μg/mL, it inhibited the growth of S. sclerotium by 47.5% and 56.25%, respectively. Notably, even at a sub-MIC concentration of 422 μg/mL, Ag@SeO 2 NC inhibited the growth of R. solani by 72.5%. Overall, this study suggests that the antifungal activity of these nanomaterials varies depending on the fungal species and the specific material. While AgNPs and Ag@TiO 2 NC showed broader effectiveness, Ag@SeO 2 NC displayed potent activity against R. solani at relatively low concentrations. Nanoparticles are effective not only against bacteria but also against fungi. Studies have shown their effectiveness against both Gram-positive and Gram-negative bacteria, and specific silver nanoparticles hold promise against fungal pathogens such as Candida albicans and Candida tropicalis [ 98 , 99 ]. Slavin and Bach [ 100 ] reviewed the mechanisms of NP antifungal activity across the following modes, i.e ., reactive oxygen species generation, plasma membrane deformation, disruption of cell wall architecture, interaction with fungal structures, inhibition of spore germination, and regulation of proteins and genes.
AgNPs showed potential activity against the yeast Saccharomyces cerevisiae KCTC7296 at an MIC 50 of 2 μg/mL [ 101 ]. Likewise, AgNPs at an MIC 100 of 4 μg/mL were efficient against Fusarium oxysporum , and at an MIC 100 of 1 μg/mL they were effective against both Fusarium solani and Aspergillus flavus [ 102 ]. AgNPs also exhibited activity against Aspergillus fumigatus at an MIC 100 of 100 μg/mL [ 103 ]. TiO 2 /Ag NPs showed inhibitory activity against C. albicans and Cryptococcus neoformans at an MIC of 12.5 μg/mL [ 104 ]. Selenium NPs showed potential activity against A. fumigatus TIMML-050 at an MIC value of 0.5 μg/mL [ 105 ]. However, the antifungal activity of metallic/bimetallic nanoparticles depends on the nanoparticles' morphology, for instance, their size and shape [ 76 ]. Larger silver nanoparticles or NCs may have a greater absolute surface area, which can promote interactions with microbial cell walls and membranes and enhance their antimicrobial activity by causing cellular contents to leak out [ 77 ]. The preparation method produced NPs of various sizes, which caused the antifungal abilities of the nanoparticles to vary markedly. The larger surface-area-to-volume ratio of smaller nanoparticles may explain the significant increase in the capacity of nanopesticides [ 106 ]. Nanoparticles with larger surface areas and smaller particle sizes exhibited significantly enhanced antifungal properties, thereby inhibiting fungal growth [ 106 ]. Our study investigated the antifungal potential of biosynthesized nanomaterials against two major plant pathogens, Rhizoctonia solani and Sclerotinia sclerotium . These fungi not only decrease plant growth but also contribute to mycotoxin contamination in food, posing health risks to both animals and humans [ 52–55 ]. Interestingly, the nanomaterials exhibited varying antifungal abilities. This variation likely stems from several factors, including the specific type of nanomaterial, its size, and its inherent antimicrobial activity.
Previous research suggests that nanoparticles with larger surface areas and smaller sizes tend to demonstrate stronger antifungal properties [ 106 ]. Although AgNPs, TiO 2 NPs, and SeO 2 NPs have each been shown to exert antifungal activity, combining AgNPs with TiO 2 or SeO 2 NPs did not enhance the antifungal activity; no additive or synergistic effects were observed. This lack of enhancement is likely due to factors such as competition for fungal binding sites, aggregation, altered surface properties, or interference with the AgNPs' mechanism of action. Further research is needed to pinpoint the exact reason and to optimize nanoparticle design for effective fungal control. This study also demonstrates the strong antioxidant activity of B. vulgaris L. extract, to which its high levels of phenolic and flavonoid constituents likely contribute. Furthermore, both plant extracts and nanoparticles can exhibit antifungal activity, suggesting their potential as natural alternatives to harmful fungicides for controlling pathogenic fungi; one proposed mechanism involves the disruption of membrane-bound respiratory enzymes in fungi, hindering their growth [ 107 ]. These findings not only highlight the issue of mycotoxin contamination in food but also offer promising avenues for its control. Ag@SeO 2 and Ag@TiO 2 NCs, with their antifungal activity, could be explored as safer and potentially more effective alternatives to conventional fungicides in food preservation. 3.5.2 FT-IR spectral analyses on fungi treated with Ag@SeO 2 NC Ag@SeO 2 NC was chosen for the FT-IR analysis since it holds distinct advantages over Ag@TiO 2 NC for FT-IR studies of fungi treated with biosynthesized nanomaterials.
Ag@SeO 2 NC offers diverse interaction mechanisms, enhanced specificity for fungal biomolecules, and reduced spectral interference from SeO 2 , and the growing interest in its antifungal properties makes it a compelling choice for this analysis, promising richer data and deeper insight into fungal control. The FT-IR spectral analysis was performed for R. solani and S. sclerotium treated or not (control) with Ag@SeO 2 NC ( Fig. 8 ). The FTIR spectra obtained for R. solani , untreated or treated with Ag@SeO 2 NC, revealed notable variations. The peak at ν = 3279 cm−1 is due to the stretching vibrations of hydroxyl groups; in the treated sample, this peak shifts to a lower frequency (ν = 3270 cm−1), suggesting that the Ag@SeO 2 NC may be interacting with the –OH functions on the surface of the fungal cells [ 108 ]. The absorption band at ν = 1632 cm−1 is attributed to the C=O stretching vibrations of amide functions. The absorption band at ν = 817 cm−1 is attributed to the C–O stretching vibrations of glycosidic bonds; in treated R. solani this band shifted to a lower frequency (ν = 775 cm−1), suggesting that the Ag@SeO 2 NC may be interacting with, and disrupting, the glycosidic bonds in the fungal cell wall [ 109 ]. The FTIR spectra of S. sclerotium , untreated or treated with Ag@SeO 2 NC, also revealed a shift in the absorption band at ν = 3279 cm−1, which is due to the stretching vibrations of hydroxyl groups.
In the treated sample, this peak shifts slightly to a higher frequency (ν = 3280 cm−1), suggesting that the Ag@SeO 2 NC may be interacting with the hydroxyl groups on the surface of the fungal cells in a different way than with the surface of R. solani cells. The peak at ν = 1630 cm−1 is due to the stretching vibrations of amide groups; the shift of this band to a lower frequency (ν = 1614 cm−1) in the treated sample suggests that the Ag@SeO 2 NC may be interacting with the amide groups on the surface of the fungal cells in a similar way to how they interact with the amide groups of R. solani cells. The absorption band at ν = 1312 cm−1 is attributed to the C–N stretching vibrations of chitin; the appearance of this band in the treated sample suggests that the Ag@SeO 2 NC may be interacting with the chitin in the fungal cell wall [ 110 ]. The absorption band at ν = 566 cm−1 is attributed to the C–O stretching vibrations of glycosidic bonds; the disappearance of this band in the treated sample suggests that the Ag@SeO 2 NC may be disrupting the glycosidic bonds in the fungal cell wall, as in R. solani . S. sclerotium also showed a change in the absorption band at ν = 1148 cm−1 after treatment with Ag@SeO 2 NC, suggesting that the NCs may be interacting with the C–O bonds in the fungal cell walls [ 111 ]. In summary, both R. solani and S. sclerotium showed shifts in the O–H stretching band at ν = 3279 cm−1 (to ν = 3270 cm−1 and ν = 3280 cm−1 , respectively) after treatment with Ag@SeO 2 NC, suggesting that the NCs may be interacting with the hydroxyl groups (O–H) in the fungal cell walls. Additionally, both species showed changes in the amide band near ν = 1632 cm−1 (ν = 1632 cm−1 and ν = 1614 cm−1 , respectively) after treatment with Ag@SeO 2 NC.
This suggests that the NCs may also be interacting with the amide groups (O=C–N) in the fungal cell walls. 3.5.3 TEM of the bmNPs-affected fungi The TEM image of untreated S. sclerotium ( Fig. 9 a) shows a healthy cell with a well-defined cell wall and cytoplasm. The cytoplasm, the gel-like substance inside the cell that contains the cell's organelles, shows the following visible organelles: mitochondria, ribosomes, a nucleus, and a vacuole. The TEM image also shows several small vesicles in the cytoplasm. In addition, the cell wall of the S. sclerotium cell is a thick, dense layer that surrounds the cell, and the cytoplasm contains a network of microtubules and microfilaments. The TEM analysis revealed significant damage to S. sclerotium cells treated with Ag@SeO 2 NCs ( Fig. 9 b). Numerous small, dark dots visible within the cell cytoplasm represent the internalized nanoparticles. Moreover, the cell walls exhibited damage, evident from the presence of several holes. Collectively, these observations suggest that Ag@SeO 2 NCs played a role in compromising the cell wall integrity, potentially rendering it more susceptible to further damage [ 112 ]. Furthermore, the damage extended beyond the cell wall, affecting various organelles within the cytoplasm. Swollen and misshapen mitochondria, for instance, indicate potential interference with the cell's energy production capabilities. In conclusion, the TEM imagery provides compelling evidence for extensive cellular damage inflicted by Ag@SeO 2 NCs on S. sclerotium , likely impairing essential functions and compromising overall cell viability ( Fig. 9 b). The TEM image of untreated R. solani ( Fig. 9 c) exhibits a healthy cell with a well-defined cell wall and distinct cytoplasmic region. The cytoplasm, a gel-like substance containing various organelles essential for cell function, appears homogeneous in this micrograph.
Additionally, small vesicles, membrane-bound structures for transport and storage, are visible within the cytoplasm. The TEM image shows a healthy R. solani cell with all of the organelles necessary for proper cell function. The cytoplasm of the R. solani cell is filled with a variety of organelles, including mitochondria, ribosomes, a nucleus, and vacuoles, and also contains a network of microtubules and microfilaments, which provide support and structure for the cell [ 113 ]. The R. solani cell in the TEM image has a large, well-defined nucleus, which suggests that the cell is healthy and capable of dividing, and a large vacuole, which suggests that the cell is well-hydrated. The cell also contains several mitochondria, indicating that it is metabolically active [ 114 ]. The TEM image of R. solani treated with Ag@SeO 2 NCs ( Fig. 9 d) reveals substantial cellular damage induced by the nanoparticles. Numerous small, dark dots visible within the cytoplasm represent internalized nanoparticles. Moreover, the cell walls exhibit significant damage, evident from the presence of numerous holes. These observations collectively suggest that Ag@SeO 2 NCs contribute to compromised cell wall integrity, potentially rendering it more susceptible to further damage [ 115 ]. The damage extends beyond the cell wall, affecting various organelles within the cytoplasm. Notably, some mitochondria appear swollen and misshapen, indicating potential interference with energy production capabilities. Furthermore, the TEM image shows extensive vacuolation within the R. solani cell. Vacuolation, characterized by the formation of large vacuoles, represents a type of cell death often observed in response to toxins or stressors [ 116 ]. This finding provides additional evidence for the effectiveness of Ag@SeO 2 NCs in killing R. solani cells [ 117 ].
Generally, there are some hypotheses for possible mechanisms by which the Ag@SeO 2 NC damages the S. sclerotium and R. solani cells: the bmNPs may generate reactive oxygen species, which are unstable molecules that can damage cell components [ 42 ]; the bmNPs may interact with the cell membrane, disrupting its structure and function [ 118 ]; or the bmNPs may enter the cell and interact with DNA or other cellular components, causing damage [ 119 ]. 4 Conclusion A simple, green, and cost-effective method was established for synthesizing AgNPs, Ag@TiO 2 NC, and Ag@SeO 2 NC using B. vulgaris aqueous extract. Comprehensive characterization via TEM, SEM, EDX, and XRD confirmed successful nanoparticle formation. Ag@SeO 2 NCs demonstrated potent antifungal activity against S. sclerotium and R. solani with a minimum inhibitory concentration (MIC) of 462 μg/mL. FT-IR analysis revealed diverse interactions between nanomaterial functional groups and fungal cell walls, suggesting potential mechanisms of action. Although exhibiting the strongest antifungal activity, Ag@SeO 2 NCs also displayed the highest hemolytic activity (56.22%), likely due to the synergistic effect of AgNPs and SeO 2 NPs. Notably, B. vulgaris extract itself exerted superior antioxidant activity (IC 50 = 52.57 μg/mL) compared to the synthesized nanomaterials, confirming previous reports. Finally, TEM images corroborated the MIC results, visually confirming the antifungal activity of Ag@SeO 2 NCs against both fungi. These findings showcase the significant chemical features and multifaceted biological activities (antioxidant, anti-hemolytic, and antifungal) of AgNPs, Ag@TiO 2 NCs, and Ag@SeO 2 NCs.
While further investigation into specific functionalities and mitigation of Ag@SeO 2 NCs' hemolytic activity is needed, these nanomaterials exhibit promising potential for various industrial and biomedical applications, and could become candidates for the future development of novel therapeutic agents. Funding The authors extend their appreciation to the Researchers Supporting Project number (RSP2024R114), King Saud University , Riyadh, Saudi Arabia. Institutional review board statement The ethics considerations of the experimental protocols were approved by the Local Ethics Committee for Animal Experimentation in the Animal Care and Use Committee (MU-ACUC), under the approval code 2024-SREC-005, Mansoura University, 35516, Egypt. Informed consent statement Not applicable. Data availability statement All data generated or analyzed during this study are included in this published article and the supplementary file. CRediT authorship contribution statement Khaled M. Elattar: Writing – original draft, Supervision, Project administration, Formal analysis. Fatimah O. Al-Otibi: Funding acquisition. Mohammed S. El-Hersh: Methodology, Data curation. Attia A. Attia: Writing – review & editing, Investigation. Noha M. Eldadamony: Methodology, Data curation, Conceptualization. Ashraf Elsayed: Visualization, Validation, Methodology. Farid Menaa: Software, Conceptualization. WesamEldin I.A. Saber: Writing – review & editing, Writing – original draft, Visualization, Supervision, Methodology, Conceptualization. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Appendix A Supplementary data The following is the Supplementary data to this article: Multimedia component 1. Supplementary data to this article can be found online at https://doi.org/10.1016/j.heliyon.2024.e28359 .
Ultrasonic line source and its coupling with the tool induced heat generation and material flow in friction stir welding
Zhang, Guanlan; Wu, ChuanSong; Gao, Jinqiang
Abstract: To understand the synchronous interaction mechanism between the ultrasonic vibration exerted on the tool and the tool induced thermo-mechanical behavior in the ultrasonic vibration-assisted friction stir welding (UVaFSW) process, the geometric shape of the contact surface between the horn and the tool was considered, and the previous point source of ultrasound was replaced by a line source, which is more in line with the actual situation. With the established ultrasonic field model based on a line source of sound, the friction coefficient on the tool/workpiece interface was modified by considering the ultrasonic action from different directions. Combined with the computational fluid dynamics model, the UVaFSW process was numerically simulated. It was found that the line source of ultrasonic energy improved the computation accuracy of the ultrasound pressure distribution, and the ultrasonic antifriction effect greatly reduced the interfacial friction coefficient near the pin. The exerted ultrasonic vibration led to a slight overall decrease in the total amount of heat generation in UVaFSW due to the dual effects of acoustic softening and ultrasonic antifriction. The model was validated by comparing the calculated thermal cycles and the thermo-mechanically affected zone with the experimental measurements.
1 Introduction Friction stir welding (FSW), as one of the most advanced solid-state joining processes with low heat input, can effectively avoid various weld defects associated with melting and solidification in fusion welding processes, and has become a prevailing process for joining aluminum alloys in the aerospace, automotive, railway vehicle, and shipbuilding industries [ 1 , 2 ].
To overcome problems such as high welding load and tool wear [ 3 , 4 ] and to improve the joint quality further, researchers have tried applying an external energy source (plasma, laser, resistance/induction heat) to assist the FSW process, which has played a positive role in improving the microstructure and mechanical properties of the joints [ 5–8 ]. However, such an additional heat source may offset the advantages of FSW as a solid-state welding process [ 9 ]. Ultrasonic vibration, as a kind of mechanical energy, has also been used to assist the FSW process, since the acoustic softening effect occurs when materials experience plastic deformation [ 10 , 11 ]. There are two ways to exert ultrasonic vibration in FSW. One is to apply ultrasonic vibration directly on the workpiece near the welding tool, for which the ultrasonic vibration enhanced FSW (UVeFSW) was developed [ 12 , 13 ]. Because there is a certain distance between the ultrasonic horn center and the welding tool axis in UVeFSW, there exists an asynchronous interaction between the ultrasonic exertion and the thermo-mechanical action around the tool, so that the acoustic softening effect is not exploited at the most appropriate instant during the welding process. To solve this problem, another way is to exert ultrasonic vibration on the tool via two rollers, as shown in Fig. 1 , which was defined as the ultrasonic vibration-assisted FSW (UVaFSW) [ 14 ]. In UVaFSW, the synchronous coupling between the ultrasonic effect and the thermo-mechanical behavior induced by the rotating tool is realized. Previous experiments showed that UVaFSW was able to widen the suitable process parameters window, improve the weld quality and reduce the welding load [ 14–16 ]. To understand the underlying mechanism in UVaFSW, especially the interaction of multi-physical fields like ultrasonic vibration, heat generation, plastic deformation, heat transfer and material flow, Zhao et al.
[ 17 , 18 ] conducted numerical simulation of the UVaFSW process, modified the constitutive equation by considering the acoustic stress work and strain, and analyzed the ultrasonic effects on the material flow state and heat generation at the contact interface. The prerequisite for modeling the UVaFSW process is how to describe the ultrasonic transmission process of “ultrasonic horn-tool-workpiece” appropriately. Zhao et al. [ 17 ] simplified the contact interface between the horn and the welding tool as a point source of ultrasonic energy. In fact, the horn front includes two rollers which are in contact with the tool side surface, as demonstrated in Fig. 1 . The contact region between a roller and the tool is a line, not a point. Therefore, a line source of ultrasonic vibration should be taken into consideration in modeling the UVaFSW process. In this study, the contact interface between the roller and the tool was taken as a line to make it closer to the actual situation, and the sound pressure distribution pattern between the tool and the workpiece was analyzed. In addition, the constitutive equation and the friction stress were modified by introducing the acoustic effect generated from the line source. Then, the numerical simulation based on the computational fluid dynamics (CFD) approach coupled with computational ultrasonic energy field was established to comprehensively analyze the temperature field and the material flow in UVaFSW. The model was validated by comparing the calculated thermal cycles and the boundary of thermo-mechanically affected zone (TMAZ) with the experimental measurements. 2 Experiments The UVaFSW system, schematically illustrated in Fig. 1 , was used to conduct the butt welding experiments of 2A12-T4 aluminum alloy plates with the dimensions of 200 mm (length) × 65 mm (width) × 3 mm (thickness). Table 1 lists the chemical composition of 2A12 aluminum alloy sheet. 
The welding tool had a concave shoulder with a diameter of 12 mm and a right-hand thread pin with a length of 2.8 mm. The root and the tip diameters of the pin were 4.2 mm and 3.2 mm, respectively. The welding speed, tool rotation speed, tool tilt angle and plunge depth of the shoulder were 50 mm/min, 700 rpm, 2.5° and 0.1 mm, respectively. To measure the welding thermal cycles, three measurement points were selected at the advancing side (AS) and the retreating side (RS), respectively. The specific locations are shown in Fig. 2 . Grooves with dimensions of 25 mm (length) × 2 mm (width) × 1.5 mm (height) were milled before welding, and K-type thermocouples were inserted into the pre-machined grooves and fixed with inorganic high-temperature-resistant adhesive. The measuring position was 3 mm away from the welding centerline and 1.5 mm away from the bottom surface of the plates. After welding, specimens were cut at transverse cross sections of the joints for observation of the macroscopic morphology. The specimens were etched with the Keller reagent (1.0 ml HF + 1.5 ml HCl + 2.5 ml HNO 3 + 95.0 ml H 2 O) for 60 s, then rinsed with alcohol and dried with a blower. 3 Mathematical model Fig. 3 is a sketch diagram of the UVaFSW geometric model. When the ultrasonic horn vibrates at a certain speed, it generates sound waves in the space around it, and the contact region between the roller and the tool, i.e., the transmission path of the sound waves, is simplified as a straight line. The ultrasonic waves can be transmitted to the side of the rotating tool, making it produce reciprocating vibration along the welding direction. A three-dimensional coordinate system was set up on the workpiece with the intersection of the tool axis and the plate bottom as its origin. The x -axis was opposite to the welding direction, and the z -axis was vertical to the bottom of the plate. Assuming that the material is an incompressible single-phase non-Newtonian viscous fluid, a CFD model was established.
The workpiece was discretized into non-uniform grids, finer near the tool and coarser far away from it. The number of elements in the calculation domain was 158,820. Since this study focuses on the influence of sound field model optimization on material flow and heat generation, some simplifications were made, i.e., the effects of the pin thread, the tool tilt angle and the plunge depth of the shoulder were temporarily ignored. 3.1 Ultrasonic field model The ultrasonic horn adopts two cylindrical rollers, with one side in contact with the ultrasonic amplitude bar and the other side in contact with the tool. To ensure the stable exertion of ultrasound, the contact parts between the two rollers and the tool form the tangency of three cylindrical surfaces, as shown in Fig. 4 . Ultrasonic waves are emitted from the emission surface, and any change in the geometric size of the emission surface will inevitably affect the effective area of ultrasonic wave emission, and thus the distribution of ultrasonic waves in space. Each point on the contact surface between a roller and the tool can be regarded as a single point source, which vibrates in a simple harmonic way with the same amplitude and phase. Fig. 5 illustrates the countless points on the contact surface, whose arrangement can be taken as a line of length 2 L . Take the center of the line as the coordinate origin, which coincides with the z -axis. Denote the distance from point P to the origin as r 1 , the included angle with the x 1 -axis as γ , and the rotation angle as φ . According to Huygens's principle, the sound pressure at any point can be viewed as the superposition of the sound pressure transmitted from all points on a wave surface to that point. Therefore, the sound pressure at point P in space is the superposition of the contributions of all points on the contact surface.
According to the acoustic theory, the ultrasonic source is divided into an infinite number of point sources. The sound pressure generated at P by the point source at distance l from the origin is [ 19 ]: (1) dp = [j k_u ρ_T c_T / (2π h(x_1, y_1, z_1))] u_a e^{j[2πft − k_u h(x_1, y_1, z_1)]} dl, where k_u is the wave number, ρ_T is the density of the tool material, c_T is the acoustic velocity in the tool, h(x_1, y_1, z_1) is the distance from the point source dl to P, u_a is the amplitude of the vibration velocity at the point source, j is the imaginary unit, f is the vibration frequency, and t is the time. When r_1 is much larger than the size of the line source L, the h in the amplitude part of the sound pressure expression can be approximately replaced by r_1. For the h in the phase part: (2) h ≈ r_1 − l sin γ. Therefore, by integrating dp, the sound pressure generated by the ultrasonic source at P is obtained: (3) p = [j k_u ρ_T c_T / (2π r_1)] u_a e^{j(2πft − k_u r_1)} ∫_{−L}^{L} e^{j k_u l sin γ} dl. The sound pressure changes periodically with the time t, and the amplitude of the sound pressure is calculated; the sound pressure amplitudes of the two rollers are superimposed. Thus, the amplitude of the sound pressure at any point P in the space is written as: (4) P_M(r_1) = 2 P_0 · (2L / (λ r_1)) · [sin(k_u L sin γ) / (k_u L sin γ)], where λ is the wavelength and P_0 is the initial sound pressure amplitude, which can be described as: (5) P_0 = 2π f ρ_h c_h ξ_0, where ρ_h is the density of the horn, c_h is the acoustic velocity in the horn, and ξ_0 is the vibration amplitude of the horn. The sound pressure and amplitude of the ultrasonic horn cannot be obtained directly, but can be attained indirectly through the amplitude in the air ξ_air measured by the tests. When the ultrasonic wave is transmitted from one material into another material, the sound beam will be refracted.
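The line-source amplitude of Eq. (4) with the initial pressure of Eq. (5) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the material values used later are placeholders rather than the Table 2 data, and the wavelength is taken in the transmitting medium:

```python
import math

def p0(f, rho_h, c_h, xi0):
    # Eq. (5): initial sound pressure amplitude of the horn
    return 2.0 * math.pi * f * rho_h * c_h * xi0

def line_source_amplitude(r1, gamma, L, f, c, rho_h, c_h, xi0):
    # Eq. (4): P_M(r1) = 2*P0 * (2L/(lambda*r1)) * sin(k*L*sin(gamma))/(k*L*sin(gamma))
    lam = c / f                  # wavelength in the transmitting medium
    k = 2.0 * math.pi / lam      # wave number k_u
    x = k * L * math.sin(gamma)
    directivity = 1.0 if abs(x) < 1e-12 else math.sin(x) / x  # sinc-type factor
    return 2.0 * p0(f, rho_h, c_h, xi0) * (2.0 * L / (lam * r1)) * directivity
```

On the axis of the line source (γ = 0) the sinc-type directivity factor equals 1, and the amplitude falls off smoothly as γ increases, which is the directional character the line source adds over the earlier point-source model.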
Therefore, it is necessary to consider the transmission coefficients at the interface between two materials. Besides, when the ultrasonic wave propagates in a material, it also attenuates due to viscosity loss, following an exponential attenuation pattern with increasing distance. Thus, the absorption attenuation coefficient of ultrasonic transmission in the materials should be included. The transmission coefficient from the ultrasonic horn to the tool is: (6) β_1 = 2 ρ_T c_T / (ρ_h c_h + ρ_T c_T). The transmission coefficient from the tool to the workpiece is: (7) β_2 = 2 ρ_Al c_Al / (ρ_T c_T + ρ_Al c_Al), where ρ_Al is the density of the workpiece and c_Al is the acoustic velocity in the workpiece. The absorption attenuation coefficient in the tool is taken as α_1 = 0.1 because the tool is regarded as a rigid body, and that in the workpiece can be obtained as follows [ 20 ]: (8) α_2 = 8π² f² μ′ / (3 ρ_Al c_Al³), where μ′ is the shear viscosity coefficient. The sound wave in the workpiece is considered as far-field transmission; the amplitude does not change with r_1, and only the attenuation caused by material absorption is considered. Within the range of “ z > 0.2 mm” (i.e. the area containing the tool and workpiece), the ultrasonic vibration is transmitted to the left and right along the welding direction. Consequently, at positions in the different directions x > 0 and x < 0, the direction of the sound pressure at any time is opposite, as shown in Fig. 6 a. The sound pressure amplitude transmitted to different positions on the workpiece is: (9) x > 0: P_M(r_qp) = β_1 β_2 P_M(r_1) exp(−α_1 r_1) exp(−α_2 r_qp); (10) x < 0: P_M(r_qp) = −β_1 β_2 P_M(r_1) exp(−α_1 r_1) exp(−α_2 r_qp), where r_qp is the distance between Q and P in Fig. 3 , and P_M(r_qp) is the sound pressure amplitude at any point Q in the workpiece. Within the range of “ z ≤ 0.2 mm” (i.e.
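The interface transmission coefficients of Eqs. (6)-(7), the workpiece attenuation coefficient of Eq. (8), and the signed, attenuated amplitude of Eqs. (9)-(10) can be sketched together. A minimal illustration under stated assumptions; the impedances passed in the usage example are generic steel/aluminum-like values, not the paper's Table 2 data:

```python
import math

def transmission(rho_in, c_in, rho_out, c_out):
    """Pressure transmission coefficient from medium 'in' into medium 'out',
    as in Eqs. (6)-(7): 2*Z_out / (Z_in + Z_out) with Z = rho*c."""
    return 2.0 * rho_out * c_out / (rho_in * c_in + rho_out * c_out)

def attenuation_alpha2(f, mu_shear, rho_al, c_al):
    """Absorption attenuation coefficient in the workpiece, Eq. (8)."""
    return 8.0 * math.pi**2 * f**2 * mu_shear / (3.0 * rho_al * c_al**3)

def amplitude_in_workpiece(p_m_r1, r1, r_qp, beta1, beta2, alpha1, alpha2, x):
    """Eqs. (9)-(10): exponentially attenuated amplitude whose sign flips
    across the tool axis (x > 0 versus x < 0)."""
    sign = 1.0 if x > 0 else -1.0
    return (sign * beta1 * beta2 * p_m_r1
            * math.exp(-alpha1 * r1) * math.exp(-alpha2 * r_qp))
```

With matched impedances the transmission coefficient is 1 (no reflection), and for any mismatch going into a lower-impedance medium it drops below 1, which is why the horn-tool and tool-workpiece interfaces each scale the pressure down.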
the area below the surface of the pin bottom), sound waves are mainly transmitted by diffusion, as shown in Fig. 6 b. When the ultrasonic beam enters the workpiece, it diffuses; it is assumed that the diffusion angle of the ultrasonic beam after entering the workpiece is 45°. Considering the ultrasonic sound as a sine wave in time, the dynamic sound pressure at any position Q on the workpiece can be given as: (11) x > 0: P_s(r_qp) = P_M(r_qp) sin(2πft − k_u x); (12) x < 0: P_s(r_qp) = −P_M(r_qp) sin(2πft − k_u x). The user-defined scalar (UDS) method is applied to introduce the sound field into the FLUENT software to calculate the UVaFSW process. The ultrasonic field can be written as: (13) ∂²P_s/∂x² + ∂²P_s/∂y² + ∂²P_s/∂z² = (1/c_Al²) ∂²P_s/∂t². Table 2 lists the parameter values involved in the sound field calculation. The amplitude of the ultrasonic horn in air is 25.23 μm, and the ultrasonic vibration frequency is 20 kHz. 3.2 Governing equations The friction stir butt welding process of 2A12-T4 aluminum alloy plates is numerically simulated, and the governing equations include the conservation equations of mass, momentum and energy [ 21 ]: (14) ∂ρ_Al/∂t + ρ_Al ∇·V→ = 0; (15) ρ_Al [∂V→/∂t + (V→·∇)V→] = −∇P + μ_s ∇²V→; (16) ρ_Al c_p (∂T/∂t + V→·∇T) = ∇·(λ∇T) + S_v, where V→ is the velocity vector of material flow, P is the static pressure, μ_s is the non-Newtonian viscosity, c_p is the specific heat, λ represents the thermal conductivity of aluminum alloy, and S_v is the viscous dissipation heat generated by plastic deformation of the material, which can be expressed as [ 22 ]: (17) S_v = f_m μ_s φ_m, where f_m is the fraction of the viscous dissipation that is converted to heat and φ_m is the factor related to the shear strain rate.
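The time-dependent pressure of Eqs. (11)-(12) is simply a sine wave whose sign flips across the tool axis. A minimal sketch (the amplitude argument stands in for the attenuated P_M(r_qp) of Eqs. (9)-(10); all numbers in the test are placeholders):

```python
import math

def dynamic_pressure(p_m_rqp, f, t, k_u, x):
    """Eqs. (11)-(12): P_s = +/- P_M(r_qp) * sin(2*pi*f*t - k_u*x),
    positive branch for x > 0, negative branch for x < 0."""
    s = math.sin(2.0 * math.pi * f * t - k_u * x)
    return p_m_rqp * s if x > 0 else -p_m_rqp * s
```

The sign convention encodes that, at any instant, material on the advancing-x and trailing-x sides of the tool axis experiences pressure of opposite direction, which is what drives the reciprocating vibration along the welding direction.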
The relationship between flow stress and strain rate during the FSW process may be expressed by [ 23 ]: (18) Z(T, ε̇) = ε̇ exp(Q/RT) = A [sinh(ασ)]^n, where Z is the Zener–Hollomon parameter, ε̇ is the strain rate, Q is the thermal activation energy of plastic deformation, R is the gas constant, T is the absolute temperature, A and α are constants related to the material, σ is the flow stress, and n is the stress index. Thus, the flow stress of the material is calculated by: (19) σ = (1/α) ln{ [ (ε̇/A) exp(Q/RT) ]^{1/n} + { [ (ε̇/A) exp(Q/RT) ]^{2/n} + 1 }^{1/2} }. The relevant parameters in the constitutive Equation (19) are obtained from compression experiments [ 24 ]. However, there is inevitable interface friction between the tool and workpiece, which leads to inaccurate calculation of the flow stress. In order to eliminate the influence of friction and determine the actual flow stress, Ebrahimi [ 25 ] improved the flow stress through friction correction. According to the hypothesis put forward by Peng [ 26 ], the stress of the material at the melting temperature is much smaller than that at the test temperature, and the influence of temperature error on the flow stress should be considered. In addition, different sample sizes also have different effects on the softening of the material: the larger the height-diameter ratio, the smaller the softening effect. In fact, the real stress of the plate during the welding process is smaller than the result directly obtained from the compression test [ 27 ].
Therefore, considering the above factors, a correction coefficient ξ is taken to modify the constitutive equation: (20) σ = (ξ/α) ln{ [ (ε̇/A) exp(Q/RT) ]^{1/n} + { [ (ε̇/A) exp(Q/RT) ]^{2/n} + 1 }^{1/2} }. When ultrasound is applied, the effective work of ultrasound on the plastic deformation of the material is expressed by [ 17 ]: (21) W = σ_ue · 3k_B T · [∂ln(ε̇)/∂σ], where k_B is Boltzmann's constant; the effective acoustic stress σ_ue can be simplified as: (22) σ_ue = η_v · (√2/2) · P_M, where η_v is the transforming coefficient of effective work. The constitutive equation is further modified to obtain the flow stress during UVaFSW: (23) σ_s = (ξ/α) ln{ [ (ε̇/A) exp(Q/RT − W/(k_B T)) ]^{1/n} + { [ (ε̇/A) exp(Q/RT − W/(k_B T)) ]^{2/n} + 1 }^{1/2} }. The viscosity μ_s of a material is relevant to its flow stress and strain rate, which can be represented by: (24) μ_s = σ_s / (3ε̇). The specific heat capacity and thermal conductivity of 2A12-T4 aluminum alloy are taken as functions of temperature [ 28 ]: (25) c_p = 38 + 3.3T − 0.00262T² − 3.33×10⁻⁷T³; (26) λ = 251 − 1.01T + 0.0025T² − 1.74×10⁻⁶T³. The relevant parameters in the constitutive equation are given in Table 3 . 3.3 Boundary conditions and heat generation During the computational process, the tool is removed from the model because it is regarded as a rigid body. Therefore, the contact interface between the tool and workpiece is set as a wall, and the corresponding boundary conditions are added. The material flow velocity components ( u , v , w ) in the three directions at any point of the workpiece/shoulder and workpiece/pin-bottom interface are written as: (27) u = (1−δ)ωr sinθ, v = (1−δ)ωr cosθ, w = 0, where δ is the slip coefficient, ω is the rotating speed, r is the distance from the point to the tool axis, and θ is the included angle between the welding direction and the r direction.
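The inverse-hyperbolic-sine flow stress of Eq. (19) and the viscosity of Eq. (24) can be sketched as below. The constitutive constants A, α, n and Q used as defaults are illustrative placeholders for an aluminium alloy, not the paper's Table 3 values, and the ultrasonic softening term of Eq. (23) is omitted:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def flow_stress(strain_rate, T, A=1.0e10, alpha=0.016, n=5.0, Q=1.6e5):
    """Eq. (19): sigma = (1/alpha)*ln(x + sqrt(x^2+1)) with
    x = (Z/A)^(1/n) and Z = eps_dot*exp(Q/RT); ln(x+sqrt(x^2+1)) = asinh(x)."""
    z_over_a = (strain_rate / A) * math.exp(Q / (R_GAS * T))
    x = z_over_a ** (1.0 / n)
    return (1.0 / alpha) * math.log(x + math.sqrt(x * x + 1.0))

def viscosity(strain_rate, T, **kw):
    """Eq. (24): mu_s = sigma / (3 * eps_dot)."""
    return flow_stress(strain_rate, T, **kw) / (3.0 * strain_rate)
```

The behavior matches the physics of the model: the flow stress (and hence the viscosity feeding the momentum equation) drops as temperature rises and grows with strain rate, so the hot, heavily sheared material around the pin flows most easily.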
There is an additional downward flow speed on the pin side, which can be expressed as:

(28) \( u = (1-\delta)\omega r\sin\theta,\quad v = (1-\delta)\omega r\cos\theta,\quad w = -d_{z}\omega \)

where \( d_{z} \) is the spacing of the threads on the pin. The slip coefficient of the aluminum alloy adopts an empirical formula [22]:

(29) \( \delta = 0.31\exp(\omega r/1.87) - 0.026 \)

The other sides are set as a moving-wall boundary, with u = V_w and v = w = 0, where V_w is the welding speed.

The heat generation during the welding process mainly includes non-contact-interface heat generation and contact-interface heat generation. The former is mainly the viscous dissipation heat caused by plastic deformation, which is calculated by Eq. (17), while the latter mainly depends on the contact state at the interface, namely the interfacial slip coefficient. For partial sliding/sticking (0 < δ < 1) on the contact interface, the heat generation is composed of friction heat generation and plastic deformation heat generation [22]. At the shoulder and the pin bottom, the heat flux density of each element at the interface is:

(30) \( q_{b} = \left[\eta_{q}(1-\delta)\frac{\sigma_{y}}{\sqrt{3}} + \eta_{f}\,\delta\,\mu_{f}P_{N}\right]\left(\frac{2\pi\omega}{60}r - \frac{V_{w}}{60}\sin\theta\right) \)

The heat flux density of each element at the pin-side/workpiece contact interface is:

(31) \( q_{s} = \left[\eta_{q}(1-\delta)\frac{\sigma_{y}}{\sqrt{3}} + \eta_{f}\,\delta\,\mu_{f}\sigma_{y}\right]\left(\frac{2\pi\omega}{60}r - \frac{V_{w}}{60}\sin\theta\right) \)

where \( \eta_{q} \) is the heat generation efficiency of plastic deformation work, \( \eta_{f} \) is the friction heat conversion efficiency, \( \sigma_{y} \) is the yield stress of the plastic material, \( \mu_{f} \) is the friction coefficient, and \( P_{N} \) is the normal force. The yield stress and friction coefficient of 2A12-T4 aluminum alloy may be expressed as follows [29,30]:

(32) \( \sigma_{y} = \begin{cases} 30.324 + \dfrac{322.673}{1 + 10^{-0.101441\times(462.732 - T)}} & T < 644\,\mathrm{K} \\ 69.424 - 0.216\times T & T \geq 644\,\mathrm{K} \end{cases}\ (\mathrm{MPa}) \)

(33) \( \mu_{f} = 0.5\exp(-\delta\omega r) \)

The contact state at the tool/workpiece interface is described primarily by the interfacial slip coefficient and the friction coefficient.
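A minimal sketch of the interfacial quantities above: the empirical slip coefficient of Eq. (29), the friction coefficient of Eq. (33), and the shoulder/pin-bottom heat flux of Eq. (30). All numerical inputs are illustrative placeholders (units follow the paper's conventions), not the welding parameters actually used in the paper.

```python
import math

def slip_coefficient(omega, r):
    """Eq. (29): delta = 0.31 * exp(omega * r / 1.87) - 0.026."""
    return 0.31 * math.exp(omega * r / 1.87) - 0.026

def friction_coefficient(delta, omega, r):
    """Eq. (33): mu_f = 0.5 * exp(-delta * omega * r)."""
    return 0.5 * math.exp(-delta * omega * r)

def shoulder_heat_flux(eta_q, eta_f, sigma_y, mu_f, P_N, delta, omega, r, V_w, theta):
    """Eq. (30): interfacial heat flux density, combining plastic-deformation heat
    (sticking fraction, 1 - delta) and friction heat (sliding fraction, delta)."""
    shear_term = eta_q * (1.0 - delta) * sigma_y / math.sqrt(3.0) + eta_f * delta * mu_f * P_N
    velocity_term = (2.0 * math.pi * omega / 60.0) * r - (V_w / 60.0) * math.sin(theta)
    return shear_term * velocity_term
```

For a point under partial sliding/sticking (0 < δ < 1), the sticking fraction feeds the plastic-deformation term and the sliding fraction feeds the friction term, so both heat sources coexist at the interface.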
Ultrasonic vibration affects the friction coefficient at the tool/workpiece interface, and subsequently the heat generation and material flow. In the UVaFSW system, the material flow velocity can be divided into two parts: one is the relative sliding velocity \( V_{slide} \) between the plastic material and the tool, and the other is the ultrasonic vibration velocity with amplitude \( V_{v} \), as shown in Fig. 7. Based on the applied ultrasonic direction, the vibration velocity can be decomposed into two orthogonal directions (the \( \vec{i}_{1} \) direction and the \( \vec{j}_{1} \) direction); \( \theta_{v} \) is the included angle between the ultrasonic vibration direction and the relative interfacial sliding velocity. Taking \( \tau = 2\pi f t - k_{u}x \), the system velocity can be obtained:

(34) \( \vec{v} = (V_{slide} + V_{v}\cos\tau\cos\theta_{v})\,\vec{i}_{1} + V_{v}\cos\tau\sin\theta_{v}\,\vec{j}_{1} \)

where \( V_{v} \) is the amplitude of the vibration velocity, which is written as:

(35) \( V_{v} = \frac{P_{M}}{\rho c} \)

where \( P_{M} \) is the amplitude of the sound pressure, and ρ and c correspond to the density and acoustic velocity of the material, respectively. The Coulomb friction model states that the friction shear stress is proportional to the normal pressure:

(36) \( \vec{f}_{p} = -\mu p\,\frac{\vec{v}}{|\vec{v}|} = f_{1}\,\vec{i}_{1} + f_{2}\,\vec{j}_{1} \)

where p is the normal force on the contact interface, μ is the friction coefficient, and \( f_{1} \) and \( f_{2} \) are the components of the friction in the parallel and vertical directions (\( \vec{i}_{1} \) and \( \vec{j}_{1} \)), respectively.

(37) \( f_{1}(\tau) = -\mu p\,\frac{V_{slide} + V_{v}\cos\tau\cos\theta_{v}}{\sqrt{V_{slide}^{2} + 2V_{slide}V_{v}\cos\tau\cos\theta_{v} + V_{v}^{2}\cos^{2}\tau}} \)

(38) \( f_{2}(\tau) = -\mu p\,\frac{V_{v}\cos\tau\sin\theta_{v}}{\sqrt{V_{slide}^{2} + 2V_{slide}V_{v}\cos\tau\cos\theta_{v} + V_{v}^{2}\cos^{2}\tau}} \)

In the quasi-steady-state calculation of UVaFSW there is no time term, and the observed effective friction is the average friction over the motion time.
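The instantaneous Coulomb friction components of Eqs. (37)–(38), and the period-averaging that yields the effective friction mentioned above, can be sketched as follows. The averaging is done numerically here with a midpoint rule; all numbers are illustrative.

```python
import math

def friction_components(tau, mu, p, v_slide, v_v, theta_v):
    """Eqs. (37)-(38): instantaneous friction components parallel (f1)
    and perpendicular (f2) to the relative sliding direction."""
    denom = math.sqrt(v_slide**2 + 2.0 * v_slide * v_v * math.cos(tau) * math.cos(theta_v)
                      + v_v**2 * math.cos(tau)**2)
    f1 = -mu * p * (v_slide + v_v * math.cos(tau) * math.cos(theta_v)) / denom
    f2 = -mu * p * (v_v * math.cos(tau) * math.sin(theta_v)) / denom
    return f1, f2

def average_parallel_friction(mu, p, v_slide, v_v, theta_v, steps=100000):
    """Numerical average of f1 over one vibration period (midpoint rule),
    i.e. the quantity averaged in Eq. (39)."""
    total = 0.0
    for k in range(steps):
        tau = 2.0 * math.pi * (k + 0.5) / steps
        total += friction_components(tau, mu, p, v_slide, v_v, theta_v)[0]
    return total / steps
```

For parallel vibration (θ_v = 0) with ζ = V_slide/V_v ≤ 1, this numerical average reproduces the closed form of Eq. (40), since the velocity reverses sign during part of each period.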
Therefore, the average value of the friction component in the parallel direction in the plane over a vibration period \( T_{v} \) is:

(39) \( \bar{f}_{1} = \frac{1}{T_{v}}\int_{0}^{T_{v}} f_{1}(\tau)\,d\tau \)

According to Tsai [31], for \( \theta_{v} = 0 \) and \( \theta_{v} = \pi/2 \), corresponding to the parallel and vertical directions, taking \( \zeta = V_{slide}/V_{v} \), Eq. (39) can be simplified as:

(40) \( \bar{f}_{1}^{\,0} = \begin{cases} -\mu p & \zeta > 1 \\ -\mu p\,\frac{2}{\pi}\sin^{-1}\zeta & 0 \leq \zeta \leq 1 \end{cases} \)

(41) \( \bar{f}_{1}^{\,\pi/2} = -\mu p\,\frac{2}{\pi}\,\frac{1}{\sqrt{1 + 1/\zeta^{2}}}\,K\!\left(\frac{1}{\sqrt{1+\zeta^{2}}}\right) \)

From Eq. (40), the ratio of the average friction coefficient with ultrasonic vibration in the parallel direction to that without ultrasonic vibration is expressed as:

(42) \( \frac{\mu_{1}}{\mu_{f}} = \begin{cases} 1 & \zeta_{1} \geq 1 \\ \frac{2}{\pi}\sin^{-1}(\zeta_{1}) & 0 \leq \zeta_{1} \leq 1 \end{cases} \)

where \( \zeta_{1} \) is the ratio of the relative sliding velocity to the amplitude of the vibration velocity component in the parallel direction. Similarly, the ratio of the average friction coefficient with ultrasonic vibration in the perpendicular direction to that without ultrasonic vibration is:

(43) \( \frac{\mu_{2}}{\mu_{f}} = \frac{2}{\pi}\,\frac{1}{\sqrt{1 + 1/\zeta_{2}^{2}}}\,K\!\left(\frac{1}{\sqrt{1+\zeta_{2}^{2}}}\right) \)

where \( \zeta_{2} \) is the ratio of the relative sliding velocity to the amplitude of the vibration velocity component in the perpendicular direction, and K is the complete elliptic integral of the first kind. After this two-direction antifriction modification, the final corrected friction coefficient under ultrasonic action is obtained:

(44) \( \mu_{vib} = \sqrt{\mu_{1}^{2} + \mu_{2}^{2}} \)

4 Results and discussion

With a line ultrasonic source on the tool side, the ultrasonic field was calculated. Then, numerical simulation was performed for the UVaFSW and FSW processes of aluminum alloy (2A12-T4) plates.

4.1 Sound pressure distribution

Two observation points were selected on the longitudinal cross-section and the transverse cross-section of the workpiece, respectively, as shown in Fig. 8a and b. Fig. 8c shows the sound pressure at points A and B in the x-direction. It is clear that the directions of the sound pressure at LS and TS are opposite, and the variation of the sound pressure is periodic.
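The direction-wise friction corrections of Eqs. (42)–(44) above can be sketched numerically as below. The complete elliptic integral K is evaluated with the arithmetic-geometric mean, which is an implementation choice of this sketch rather than anything specified in the paper.

```python
import math

def ellipk(k):
    """Complete elliptic integral of the first kind K(k), 0 <= k < 1,
    via the arithmetic-geometric mean: K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

def ratio_parallel(zeta1):
    """Eq. (42): mu1/mu_f for vibration parallel to the sliding direction."""
    return 1.0 if zeta1 >= 1.0 else (2.0 / math.pi) * math.asin(zeta1)

def ratio_perpendicular(zeta2):
    """Eq. (43): mu2/mu_f for vibration perpendicular to the sliding direction."""
    return (2.0 / math.pi) / math.sqrt(1.0 + 1.0 / zeta2**2) * ellipk(1.0 / math.sqrt(1.0 + zeta2**2))

def corrected_friction(mu_f, zeta1, zeta2):
    """Eq. (44): mu_vib = sqrt(mu1^2 + mu2^2)."""
    mu1 = mu_f * ratio_parallel(zeta1)
    mu2 = mu_f * ratio_perpendicular(zeta2)
    return math.sqrt(mu1**2 + mu2**2)
```

Both ratios tend to 1 as the sliding velocity dominates the vibration velocity (ζ → ∞), i.e. the antifriction effect vanishes when the vibration can no longer reverse the relative interfacial motion.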
At point A (3 mm away from the tool axis at TS), the peak sound pressure is 13 MPa, and the ultrasonic effect is most obvious. At the same time, the sound pressure at point B (3 mm away from the tool axis at LS) almost reaches the maximum negative pressure, and the sound pressure distribution along the x-direction is basically symmetrical. Fig. 8d shows the sound pressure at points C and D in the y-direction. At the same distance from the welding centerline, the sound pressures at points C (AS) and D (RS) are almost the same, because the two points are symmetrical with respect to the line ultrasonic source in the y-direction. Since C and D are 3 mm away from the x-axis, their peak sound pressures are much lower than those of A and B, which lie on the weld centerline. Since the sound pressure changes periodically, as noted above, one period is divided into nine time points, t1–t9 (Fig. 9a), and the sound pressure field in the Z = 1.5 mm horizontal plane at the different times is extracted (Fig. 9b–j). The periodic change of the sound pressure can be divided into four stages. First, the sound pressure rises and reaches its peak value at time t2 (Fig. 9b–c). It then drops with time and reaches zero (Fig. 9d–e). After that, the sound pressure changes direction and reaches the maximum negative pressure at time t6 (Fig. 9f–g). Finally, the sound pressure rises again, reaches zero at time t8, and then begins another period (Fig. 9h–j). Fig. 10 shows the sound pressure distribution along the welding direction on the horizontal section of the workpiece (Z = 1.5 mm). At instant t = 1.01250 ms, the tool is driven to vibrate toward the positive x-axis, and the sound pressure is positive for x > 0 but negative for x < 0. At instant t = 1.03750 ms, the tool is driven to vibrate toward the negative x-axis, and the sound pressure is positive for x < 0 but negative for x > 0.
The maximum value of the sound pressure appears near the tool, and the sound pressure drops quickly with increasing distance from the tool axis. The amplitude of the sound pressure is the key variable for correcting the flow stress and the interfacial friction coefficient, and it plays an important role in the calculation of the UVaFSW process. Fig. 11 displays the distribution of the sound pressure amplitude on different horizontal planes. It is worth noting that the peak values appear at the tool/workpiece contact interface and are located on both sides of the tool along the x-axis. Due to the attenuation effect, the amplitude of the sound pressure decreases with increasing distance from the monitored location to the tool axis. On the plane Z = 2.5 mm, the ultrasonic vibration is more intense, and its sound pressure amplitude is larger as well. As the z-coordinate decreases, the distance from the ultrasonic source increases, and the ultrasonic vibration effect gradually weakens.

4.2 Material flow

According to Eq. (23), the effect of ultrasound on the flow stress depends on the effective work done by the acoustic stress in plastic deformation. Therefore, the ultrasonic softening effect is characterized by the magnitude of the acoustic stress work. Fig. 12 shows a comparison of the acoustic stress work at different horizontal planes; the area marked by the white dotted circle is the area overlapped by the shoulder. It can be seen that the maximum value of the acoustic stress work appears at the pin root and is distributed on both sides of the tool along the x-axis direction. At the edge of the shoulder, the calculated value of the acoustic stress work is only 0.45 MPa. The distribution of the acoustic stress work agrees with the distribution of the sound pressure: at the position where the ultrasonic vibration is most intense, the acoustic stress work is also the largest. Fig. 13 shows the flow stress distribution at the tool/workpiece contact interface.
After applying ultrasonic vibration, the reduction of the flow stress is more obvious in the pin-affected zone, while the difference is not significant in the shoulder-affected area. As the distance from the shoulder bottom increases (i.e., the z-coordinate decreases), the acoustic stress work gradually attenuates, while the flow stress on the pin side does not change significantly. To explore the change of flow stress clearly, two lines, a–f and g–l (in Fig. 13), are chosen along the x-direction and y-direction, respectively. Fig. 14 illustrates the variation of the flow stress along these two lines. The flow stress is higher on the LS than on the TS, and higher on the RS than on the AS. When ultrasonic vibration is applied, this distribution pattern does not change, but acoustic softening causes a decrease in the flow stress. At points b and e, which are located at the pin root, the flow stress decreases the most, and the change in the x-direction is more obvious than that in the y-direction. There is no significant difference in the flow stress between FSW and UVaFSW at the shoulder edge. The reason is that the change of the acoustic stress work is essential to the interfacial flow stress. Zhao et al. [17] showed that the ultrasonic softening effect is directly proportional to the vibration amplitude. The amplitude of the sound pressure rises as the vibration amplitude increases, and the acoustic stress work is proportional to the amplitude of the sound pressure. Therefore, the more work the acoustic stress does, the more significant the reduction of the flow stress due to the ultrasonic softening effect. The position with the largest difference in acoustic stress work is at the pin root, so the effect of ultrasound in reducing the flow stress is more obvious there. At the edge of the shoulder, the difference in acoustic stress work is small and the ultrasonic softening effect is weak, so the influence on the flow stress is slight.
The acoustic enhancement of material flow along the transverse direction is much weaker than that along the main ultrasonic vibration direction, so the softening effect along the y-axis is obviously lower than that along the x-axis. Fig. 15 shows the predicted material flow velocity at different horizontal sections. On the plane Z = 2.5 mm, which is close to the strong action of the shoulder, the material flow velocity is higher. It can be seen that the material flow range in UVaFSW is larger than that in FSW on the same horizontal planes. For the lower plane (Z = 1.5 mm), the material flow velocity is obviously lower and the ultrasonically induced enhancement of material flow is weakened, so the difference is not evident. As demonstrated in Fig. 16a and b, four paths along the x-axis and y-axis at different horizontal planes are taken. Fig. 16c and d show the material flow velocity along these paths. Along the x-direction, the material flow range on the LS is roughly the same as that on the TS. On the plane near the shoulder, the material flow velocity in UVaFSW is higher than that in FSW, while on the plane close to the pin bottom, the flow velocities of the two processes differ little. The reason is as follows: the flow stress is related to the acoustic stress work, and the greater the reduction of the flow stress, the stronger the promotion of material flow. Thus, at the upper horizontal plane, where the action of ultrasonic vibration is stronger, the material flow is obviously enhanced. For the lower horizontal planes, the influence of ultrasonic vibration is small, and the ultrasonically induced enhancement of material flow becomes weak as well. Along the y-axis, it can be seen that ultrasound still plays a role in promoting material flow, and the flow velocity in UVaFSW is larger than that in FSW.
4.3 Heat generation and temperature field

The ultrasonically induced friction reduction is described by the ratio of the friction coefficient modified for the ultrasonic effect to the friction coefficient without the ultrasonic effect, and the modified friction coefficient is obtained by superposing the corrected friction coefficients in the different directions. Therefore, the antifriction mechanism of ultrasonic vibration is composed of two mechanisms acting in the parallel and perpendicular directions. Fig. 17 shows the friction coefficient ratio at the tool/workpiece contact interface for the different correction directions. The region with the minimum friction coefficient appears around the pin and has an apparent orientation along the x-axis. At the edge of the shoulder, the antifriction effect is not obvious, and the friction coefficient even increases. Because the magnitude of the ultrasonic antifriction effect is related to the ratio of the relative velocity to the ultrasonic vibration velocity, the ultrasonic vibration velocity is largest at the pin root, where the antifriction effect is most obvious. With increasing distance from the tool axis, the ultrasonic vibration velocity reaches its minimum at the edge of the shoulder; there, the ultrasonic vibration cannot change the direction of material movement within a period and may even hinder the material movement, resulting in little change or even an increase in the friction coefficient. The reduction of the friction coefficient induced by ultrasonic vibration directly lowers the interfacial shear stress, and thereby affects the interfacial heat generation. Fig. 18 shows the heat flux at the tool/workpiece contact interface. The heat flux reaches its maximum at the edge of the shoulder: the velocity is large there, so the friction between the workpiece and the tool generates more heat, and the heat flux at the interface is large.
After superimposing ultrasonic vibration, the heat flux in UVaFSW is in general significantly lower than that in conventional FSW, especially at the pin root, which is consistent with the distribution of the friction coefficient ratio. In the edge area of the shoulder, however, the heat flux increases due to the increase in friction heat generation. For a better comparison of the heat flux on the pin side, two circumferences at the pin/workpiece contact interface are taken in Fig. 19a, located in the horizontal planes 1.5 mm and 2.5 mm away from the bottom of the workpiece. As shown in Fig. 19b, r_q is the radius direction and θ_q is the included angle between the welding direction and the radius direction. Fig. 19c–d shows the heat flux versus θ_q at the tool/workpiece contact interface at the different horizontal planes. Under the action of ultrasound, the heat flux at the interface decreases significantly. When θ_q is 0° and 180°, the decrease of the heat flux is largest. This is because the effect of ultrasonic vibration along the x-direction is more intense and the antifriction effect is stronger, so the difference between the heat flux in FSW and UVaFSW reaches its maximum. When θ_q is 90° and 270°, that is, in the y-direction, the heat flux changes little. Fig. 20 shows the temperature fields on the top surface of the workpiece (a–b) and their local magnification (c–d), in which the black dotted circles represent the area covered by the shoulder. Under the same conditions, the peak temperature in FSW is 685 K, and that in UVaFSW is 677 K. With ultrasound applied, the maximum-temperature region near the tool tends to contract inwards, while the whole temperature field shows a slight expansion trend. The reason is that the acoustic effect is strong near the tool and gradually attenuates with increasing distance from the tool axis.
Ultrasonic vibration decreases the interfacial shear stress and reduces the friction heat generation, which is mainly concentrated near the tool. In the area far away from the tool, the effect of the antifriction on heat generation is limited. However, ultrasonic vibration promotes the material flow and increases the heat generation of plastic deformation, thus expanding the overall temperature range. Table 4 lists the heat generation on the contact interface and in the shear layer in FSW and UVaFSW. Considering the antifriction effect, the friction heat generation on the contact interface in UVaFSW is significantly reduced, but the plastic deformation heat generation on the interface is obviously increased due to the strengthening of material flow. Thus, the total heat generation on the contact interface does not change much. The influence of ultrasound on material flow is mainly reflected near the contact interface, and the volumetric heat generation in the shear layer does not change much either. The total heat generation in UVaFSW is slightly lower than that in FSW owing to the dominant effect of the reduction of friction heat generation. Overall, ultrasound has little effect on the temperature in the welding process, but it affects the distribution of heat generation.

4.4 Experimental validation

Fig. 21 shows the comparison of the predicted thermal cycle curves with the experimentally measured ones in UVaFSW. The calculated peak temperature is in agreement with the experimental results. Generally, the boundary with a viscosity value of 4 MPa·s is defined as the TMAZ boundary [22]. Fig. 22 compares the measured and calculated TMAZ profiles in FSW and UVaFSW. The calculated TMAZ profiles are shown by the red dotted lines, and the measured weld cross-sections are in white solid lines. The simulated TMAZ boundaries match the experimental results.
It can also be seen that both the predicted and experimentally measured TMAZ boundaries in UVaFSW are wider than those in FSW.

5 Conclusions

1. By considering the geometric shape of the contact surface between the horn and the tool, a novel CFD model coupled with an ultrasonic field model based on a line source of sound is established. Meanwhile, the friction coefficient on the tool/workpiece interface is modified by considering the ultrasonic action from different directions, and the heat generation at the contact interface is quantitatively described.

2. The computed results show that, with the line source of ultrasonic energy considered, the sound pressure is obviously enhanced, the action range of the ultrasound is enlarged, the ultrasonic effect along the vibration direction is stronger, and the amplitude of the sound pressure declines rapidly with increasing distance from the tool.

3. At the position where the acoustic stress work is most intense, the reduction of the flow stress is largest. The acoustic stress work enhances the ultrasonic softening effect, resulting in a decrease of the flow stress near the pin in UVaFSW. It also contributes to an increase of the material flow velocity and an enlargement of the flow region.

4. The ultrasonic antifriction effect greatly reduces the interfacial friction coefficient near the pin. Therefore, the heat flux in UVaFSW is lower in the area near the pin, but there is no significant difference near the outer area of the shoulder. Ultrasonic vibration changes the ratio of interfacial friction heat to plastic deformation heat. In general, ultrasonic vibration leads to a slight overall decrease in the total heat generation in UVaFSW due to the dual effects of acoustic softening and ultrasonic antifriction.

5. The predicted thermal cycle and TMAZ boundary in UVaFSW are in agreement with the experimentally measured ones.
Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgment The authors acknowledge the financial support from the National Natural Science Foundation of China (Grant No. 52035005 ). | [
"PADHY",
"MENG",
"BHUKYA",
"WANG",
"TIWARI",
"XU",
"SENGUPTA",
"MOHAN",
"PADHY",
"MENG",
"SNOPINSKI",
"LIU",
"LV",
"DING",
"NAJIB",
"KUMAR",
"ZHAO",
"ZHAO",
"DU",
"CHO",
"SHI",
"SHI",
"KUMAR",
"CHEN",
"EBRAHIMI",
"PENG",
"JIANG",
"XUE",
"ARORA",
"LI",
"TSA... |
57f989111cfd4bb4a3a7de6b05b40316_Frontline nurses burnout anxiety depression and fear statuses and their associated factors during th_10.1016_j.eclinm.2020.100424.xml | Frontline nurses’ burnout, anxiety, depression, and fear statuses and their associated factors during the COVID-19 outbreak in Wuhan, China: A large-scale cross-sectional study | [
"Hu, Deying",
"Kong, Yue",
"Li, Wengang",
"Han, Qiuying",
"Zhang, Xin",
"Zhu, Li Xia",
"Wan, Su Wei",
"Liu, Zuofeng",
"Shen, Qu",
"Yang, Jingqiu",
"He, Hong-Gu",
"Zhu, Jiemin"
] | Background
During the Coronavirus Disease 2019 (COVID-19) pandemic, frontline nurses face enormous mental health challenges. Epidemiological data on the mental health statuses of frontline nurses are still limited. The aim of this study was to examine mental health (burnout, anxiety, depression, and fear) and their associated factors among frontline nurses who were caring for COVID-19 patients in Wuhan, China.
Methods
A large-scale cross-sectional, descriptive, correlational study design was used. A total of 2,014 eligible frontline nurses from two hospitals in Wuhan, China, participated in the study. Besides sociodemographic and background data, a set of valid and reliable instruments were used to measure outcomes of burnout, anxiety, depression, fear, skin lesion, self-efficacy, resilience, and social support via the online survey in February 2020.
Findings
On average, the participants had a moderate level of burnout and a high level of fear. About half of the nurses reported moderate and high work burnout, as shown in emotional exhaustion (n = 1,218, 60.5%), depersonalization (n = 853, 42.3%), and personal accomplishment (n = 1,219, 60.6%). The findings showed that 288 (14.3%), 217 (10.7%), and 1,837 (91.2%) nurses reported moderate and high levels of anxiety, depression, and fear, respectively. The majority of the nurses (n = 1,910, 94.8%) had one or more skin lesions, and 1,950 (96.8%) nurses expressed their frontline work willingness. Mental health outcomes were statistically positively correlated with skin lesion and negatively correlated with self-efficacy, resilience, social support, and frontline work willingness.
Interpretation
The frontline nurses experienced a variety of mental health challenges, especially burnout and fear, which warrant attention and support from policymakers. Future interventions at the national and organisational levels are needed to improve mental health during this pandemic by preventing and managing skin lesions, building self-efficacy and resilience, providing sufficient social support, and ensuring frontline work willingness. | Research in context Unlabelled box Evidence before this study The outbreak of COVID-19 put global and national healthcare systems to test, which when overwhelmed, can severely compromise the well-being of frontline healthcare workers (HCWs). We searched electronic databases, including CINAHL, PubMed, Google Scholar, and the China National Knowledge Infrastructure, for articles that were published in either English or Chinese from 1 January 2003 to 12 February 2020, using the following keywords: disease outbreak, pandemic, medical crises, quality of life, self-efficacy, resilience, social support, fatigue, anxiety, depression, fear, nurses, healthcare workers, and healthcare professionals. The selection criteria included: (i) non-interventional studies on any pandemic outbreaks, (ii) studies that focused on the impact of any pandemic outbreaks on the health of healthcare workers, and (iii) studies that identified various contributing factors of the experiences described by healthcare workers during any pandemic outbreaks. Articles that were excluded were those that: (i) focused heavily on clarifying transmission routes and improving surveillance systems, (ii) emphasized on how the outbreak led to the development of a particular phenomenon or transition in nursing practice, and (iii) were conducted on humanitarian aid workers. A total of 31 full-text journal articles were reviewed. The physical and psychological well-being of frontline HCWs was compromised across all pandemic outbreaks. 
Many studies evaluated only the psychological impact of pandemic outbreaks on frontline HCWs, without considering other possible influencing factors. None reported the mental health statuses of frontline nurses in particular during the COVID-19 outbreak.

Added value of this study

In the absence of epidemiological data on the mental health of frontline nurses who are caring for COVID-19 patients and its associated factors, our study recruited 2,014 frontline nurses with diverse demographic backgrounds and explored their mental health statuses during the COVID-19 outbreak. A total of 1,324 nurses were originally working in Wuhan, and 690 nurses had come from other provinces in China to support Wuhan, making our results representative of the mental health statuses of Chinese frontline nurses working in Wuhan during the pandemic. We found that frontline nurses experienced a variety of mental health challenges, especially burnout and fear. The prevalence of anxiety, depression, and skin lesions was high. The majority of the nurses expressed their willingness to participate in frontline work. Mental health outcomes were positively correlated with skin lesion and negatively correlated with self-efficacy, resilience, social support, and frontline work willingness.

Implications of all the available evidence

Future interventions at the organisational and national levels are needed to improve frontline nurses’ mental health during the pandemic by addressing its associated factors. Similar research and support may be extended to other frontline healthcare workers.

1 Introduction

The pandemic of Coronavirus Disease 2019 (COVID-19) is currently a major global public health emergency [1]. By 27 March 2020, there were 465,915 confirmed cases in 199 countries, and 21,031 people had lost their lives [2].
The outbreak of COVID-19 put global and national healthcare systems to the test; when overwhelmed, these systems can severely compromise the well-being of frontline healthcare workers (HCWs) [3]. Since the first COVID-19 case was reported in December 2019 in Wuhan [4], approximately 42,000 HCWs, including 28,600 nurses from all over China, were sent to Hubei Province to assist local healthcare teams in caring for COVID-19 patients [5]. A study revealed that HCWs who were working in Wuhan often felt stress, depression, and anxiety, but that study did not specifically target frontline nurses [6]. HCWs, especially nurses, who come into close contact with these patients when providing care, often face inadequate protection from contamination, high risks of infection, work burnout, fear, anxiety, and depression [7,8]. Nurses constitute the largest part of the healthcare workforce in an epidemic [9], and they undertake most of the tasks related to infectious disease containment [10]. To date, epidemiological data on the mental health of frontline nurses who are caring for COVID-19 patients and its associated factors are still limited. Such evidence-based knowledge is crucial for HCWs and the government to prepare health responses to pandemics such as COVID-19. The aim of this study was to examine mental health (burnout, anxiety, depression, and fear) and its associated factors among frontline nurses who were caring for COVID-19 patients in Wuhan, China. The research questions were: (a) What are the levels of burnout, anxiety, depression, fear, skin lesion, self-efficacy, resilience, and social support among frontline nurses? (b) What are the differences in burnout, anxiety, depression, and fear between nurses’ various sociodemographic and other COVID-related background subgroups? (c) What are the relationships between burnout, anxiety, depression, fear, and the other aforementioned variables?
2 Methods

2.1 Study design

This was a large-scale cross-sectional, descriptive, correlational study.

2.2 Settings and sampling

This study was conducted in two hospitals in Wuhan, China. One hospital, which consists of three divisions located in different places, was originally a public tertiary hospital in Wuhan; two of the three divisions were converted into venues that only received COVID-19 patients after 13 January 2020 and 13 February 2020, respectively. These two divisions had 1,860 beds in total, with approximately 2,000 nurses caring for COVID-19 patients. The other hospital was newly established and had operated specifically for COVID-19 patients since 3 February 2020, with 1,000 beds and 600 nurses. All frontline nurses who were caring for COVID-19 patients in the participating hospitals were invited to participate in this study. Nurses who had been diagnosed with any prior mental disorders and/or who had COVID-19 were excluded from the study.

2.3 Outcomes and measurement

Sociodemographic and other COVID-19-related background data were collected using a self-developed questionnaire. Sociodemographic data consisted of gender, age, marital status, child-rearing, monthly household income, education, professional title, clinical experience, working duration as a frontline nurse, average working hours per shift, whether Wuhan was the original working place, the way nurses from other cities were dispatched to Wuhan, position in the hospital, whether the working ward had changed, prior training or experience in caring for similar patients, and their confidence in caring for patients with COVID-19 infection, in self-protection, and in working safety. Their beliefs about their families', colleagues', and hospital's readiness to cope with the COVID-19 outbreak were also collected, as were their willingness, and reasons, to participate in frontline work during the COVID-19 outbreak. Suggestions to improve frontline work were also explored.
Nurses’ burnout was measured by the Chinese version of the Maslach Burnout Inventory: Human Services Survey (MBI-HSS) for Medical Personnel (MP) [11], which contains 22 items in three dimensions: emotional exhaustion (EE, 9 items), depersonalization (DP, 5 items), and personal accomplishment (PA, 8 items). Each item was measured on a seven-point Likert scale. For the EE and DP dimensions, higher scores indicate more severe burnout, while for the PA dimension, lower scores indicate more severe burnout. Scores of 19–26 or ≥27 on EE, 6–9 or ≥10 on DP, and 34–39 or ≤33 on PA were indicative of moderate or high burnout for the respective dimensions [11]. The Cronbach's alpha value of the MBI-HSS for MP was 0.86 in this study. Nurses’ anxiety was measured by the Chinese version of Zung's Self-Rating Anxiety Scale (SAS) [12]. The SAS contains 20 items that examine emotional and physical symptoms of anxiety. Each item was measured on a four-point Likert scale. The total score ranged from 25 to 100 (20 × 1 × 1.25 to 20 × 4 × 1.25), with 50–59, 60–69, and ≥70 indicating mild, moderate, and severe anxiety, respectively [13]. The Cronbach's alpha value of the SAS was 0.87 in this study. Nurses’ depression was measured by the Chinese version of Zung's Self-Rating Depression Scale (SDS) [14]. The SDS has 20 items that assess emotional, physiological, psychomotor, and psychological imbalance. Each item was measured on a four-point Likert scale. The total score ranged from 25 to 100 (20 × 1 × 1.25 to 20 × 4 × 1.25), with 53–62, 63–72, and ≥73 indicating mild, moderate, and severe depression, respectively [13]. The Cronbach's alpha value of the SDS was 0.88 in this study. Nurses’ fear was measured by the Fear Scale for Healthcare Professionals (FS-HPs), which was developed by the research team. The FS-HPs has eight items that assess nurses’ fear of infection and death, as well as of nosocomial spreading to their loved ones, during the COVID-19 outbreak.
Each FS-HPs item was measured by a five-point Likert scale. The total score ranged from 8 to 40, with ≤19, 20–29, and 30–40 indicating no or mild fear, moderate fear, and severe fear, respectively. Ten experts were invited to evaluate its content validity, giving it a total Content Validity Index (CVI) of 1.0. The Cronbach's alpha value of the FS-HPs was 0.80 in this study. Skin lesions were measured using a self-developed scale, the Skin Lesion Scale (SLS), based on the book “Epidemic Prevention Medical Protective Equipment related Skin Lesion and Management” [15]. The scale has 11 items that examine common skin lesions related to personal protective equipment (PPE) among HCWs, including facial flushing, blistering of the mouth, skin erosions, skin soaking, skin allergies, skin chapping, skin indentation marks, cutaneous lichen, red spots with clear boundaries, blisters, and isolated pyoderma. For each type of skin lesion, we asked whether the nurse had such a condition (each “yes” response was given a score of 1 and each “no” response a score of 0, giving a total score of 0–11). Nurses who had skin lesions but could not manage them were asked to indicate the reasons: (1) they were not sure how to manage them, (2) no medicine was available during the period, or (3) the root cause of the skin lesions could not be changed. A group of ten experts was invited to evaluate the content validity, resulting in a total CVI of 1.0. The Cronbach's alpha value of the SLS was 0.73 in this study. Nurses' self-efficacy was measured by the Chinese version of the General Self-efficacy Scale (GSS) [16]. It consists of ten items, each measured by a five-point Likert scale. The total score of the scale ranged from 10 to 40; the higher the score, the better the self-efficacy. The Cronbach's alpha value of the GSS was 0.93 in this study. Nurses' resilience was measured by the Chinese version of the Connor-Davidson Resilience Scale-10 (CD-RISC-10) [17].
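The internal-consistency figures quoted for each of these scales are Cronbach's alpha values. A minimal, self-contained sketch of the standard formula (illustrative only; the study computed its alphas in SPSS) is:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a k-item scale.

    `items` is a list of k lists, each holding one item's scores across
    all respondents. alpha = k/(k-1) * (1 - sum(item var) / var(total)),
    using sample variances.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across the k items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Perfectly parallel items yield an alpha of 1; items that share no variance pull the value toward 0.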
The CD-RISC-10 contains ten items with a five-point Likert scale. The total score of the scale ranged from 0 to 40; the higher the score, the better the resilience. The Cronbach's alpha value of the CD-RISC-10 was 0.96 in this study. Social support was measured using the Chinese version of the Multidimensional Scale of Perceived Social Support (MSPSS) [18]. The scale consists of 12 items and uses a seven-point Likert scale. It has two subscales: intra-family social support and extra-family social support. The higher the mean score, the better the social support. The Cronbach's alpha value of the MSPSS was 0.96 in this study.

2.4 Data collection procedure

The online questionnaire survey was developed using an online platform called “Questionnaire Star”. After ethical approval was obtained from the two participating hospitals, the directors of nursing and the head nurses were informed about the inclusion and exclusion criteria. The head nurses distributed the online survey to the WeChat groups of frontline nurses caring for COVID-19 patients on 13 February 2020. Interested nurses then completed the survey on the “Questionnaire Star” platform, which allowed submission only once all questions had been answered. A token of appreciation of 50 RMB (equivalent to 7 USD) was provided to each participant via a WeChat red packet upon completion of the online survey. Data collection was completed on 24 February 2020. The study protocol has been published on the last author's institutional website.

2.5 Ethical considerations

Ethical approval was obtained from the participating hospitals' ethical review boards as well as from the last author's university. All nurses provided consent by ticking the “yes” box to indicate their willingness to participate in the online survey. Voluntary participation and data confidentiality were emphasized.

2.6 Data analyses

Data were analysed using IBM SPSS version 25.0 for Windows [19].
Descriptive statistics were used to summarize nurses' sociodemographic and other COVID-related background variable subgroups (such as working duration as a frontline nurse, reasons for being dispatched to Wuhan, confidence in self-protection, and so on) and all continuous outcome variables (including burnout, anxiety, depression, fear, skin lesion, self-efficacy, resilience, and social support). An independent two-sample t-test was used to examine the differences in mental health outcomes between sociodemographic and other COVID-related background variable subgroups. The Pearson product-moment correlation coefficient was used to examine the relationships between burnout, fear, anxiety, and depression and all other continuous outcome variables. P values of less than 0.05 were considered statistically significant.

2.7 Role of funding source

The funding bodies had no role in study design, data collection, analysis, and interpretation, the manuscript writing, or the submission decision. The corresponding authors had full access to all the data and had final responsibility for the decision to submit for publication.

3 Results

3.1 Sociodemographic and other characteristics of the participants

Of the 2110 nurses who opened the survey link, nine (0.4%) ticked the “no” box to indicate their unwillingness to participate in the study and withdrew from the survey. Among the remaining 2101 nurses who completed and submitted the survey, 68 (3.2%) reported that their number of days working at the frontline was zero, indicating that they had not yet begun their duties as frontline nurses, and 19 (0.9%) spent less than five minutes completing the survey and ticked the same answer consecutively across several scales (Fig. 1). These nurses were therefore excluded, leaving a total of 2014 frontline nurses included in this study. Table 1 shows the participants' sociodemographic and other characteristics. The mean age of the frontline nurses was 30.99 (SD=6.17) years.
The mean working duration as a frontline nurse was 20.72 (SD=12.9) days, and the average working time was 6.57 (SD=1.90) hours per shift. The majority of the frontline nurses were female (87.1%), were married (61.1%), had one or more children (54.6%), had bachelor's degrees or higher (78.1%), and had junior professional titles (74.2%). A total of 1324 nurses originally worked in Wuhan, and 690 were sent to support Wuhan from other provinces in China. Among these 690 nurses, 476 came voluntarily and 214 (209 willing and 5 unwilling) were delegated by their hospitals. The majority of the participants (n = 1654, 82.1%) had received prior training, but 1229 (61.0%) had no prior experience of caring for patients with infectious diseases. A large number of frontline nurses had confidence in caring for COVID-19 patients, in self-protection, and in work safety. The majority of the frontline nurses believed that their families, colleagues, and hospitals were ready to cope with the COVID-19 outbreak. The majority of the participants (n = 1950, 96.8%) indicated their willingness to participate in frontline work for the following reasons: responsibility and mission as a nurse, prior experiences during the SARS outbreak, patriotism, dedication, helping others, extra welfare, hospital assignment, and their mission as a communist party member. Some participants (n = 64, 3.2%) indicated unwillingness because of safety concerns, family caring needs such as breastfeeding, fear, work stress, and personal health problems. The participants put forward several suggestions to support frontline nurses' work: (1) improve the welfare and social status of frontline nurses, (2) strengthen training regarding self-protection and provide adequate PPE, (3) enhance manpower and resource allocation, (4) improve the conditions of accommodation, food, and environments for frontline nurses, and (5) offer more psychosocial support to frontline nurses.
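The participant-flow exclusions described at the start of this Results section can be sketched as follows (field names such as `consented`, `frontline_days`, and `minutes_spent` are hypothetical placeholders, not the study's actual variable names):

```python
def apply_exclusions(records):
    """Filter survey records per the flow described above: drop
    non-consenters, nurses with zero frontline days, and responses
    completed in under five minutes (treated here as careless)."""
    return [
        r for r in records
        if r["consented"]
        and r["frontline_days"] > 0
        and r["minutes_spent"] >= 5
    ]


# Tiny synthetic example (illustrative data only):
sample = [
    {"consented": True,  "frontline_days": 21, "minutes_spent": 12},
    {"consented": False, "frontline_days": 21, "minutes_spent": 12},
    {"consented": True,  "frontline_days": 0,  "minutes_spent": 12},
    {"consented": True,  "frontline_days": 14, "minutes_spent": 3},
]
included = apply_exclusions(sample)
```

In this synthetic example only the first record survives all three criteria.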
3.2 Participants' mental health and other outcomes

Table 2 shows the mental health and other outcomes of the frontline nurses. The participants had moderate levels of burnout, as shown in EE (mean=23.44, SD=13.80), DP (mean=6.77, SD=7.05), and PA (mean=34.83, SD=9.95). The participants reported high levels of fear (mean=30.41, SD=7.60). Eight hundred and thirty-five (41.5%) nurses reported high EE, 556 (27.6%) indicated high DP, and 771 (38.3%) had low PA, all of which indicated high burnout during work. The participants reported mild (n = 545, 27.1%), moderate (n = 221, 11.0%), and severe (n = 67, 3.3%) anxiety. Similarly, the participants indicated mild (n = 661, 32.8%), moderate (n = 194, 9.6%), and severe (n = 23, 1.1%) depression. The majority of the nurses reported moderate (n = 564, 28%) or high (n = 1273, 36.2%) fear. The majority of the participants (n = 1910, 94.8%) had one or more skin lesions caused by PPE. Among nurses who did not manage their skin lesions (n = 1703, 84.6%), 316 (15.7%) indicated that they were not sure about the management, 518 (25.7%) indicated that no medicine was available during the period, and 718 (35.7%) said that the root causes were not changeable. Besides the 11 skin lesions included in our self-developed scale, some nurses mentioned other skin lesions such as conjunctivitis, ear tenderness, decrustation, beriberi, and needle stick injuries.

3.3 Differences in mental health outcome levels between various sociodemographic and other characteristic subgroups

Table 3 shows the differences in burnout, anxiety, depression, and fear levels between various sociodemographic and other characteristic subgroups. It was typical for one mental health variable to have significant differences for some, but not all, sociodemographic and other characteristic subgroups.
However, statistically significant differences in the levels of burnout, anxiety, depression, and fear were found between subgroups of the following variables: professional title (p<0.05), whether Wuhan was the original working place (p<0.05), whether working wards had changed (p<0.05), confidence in caring for COVID-19 patients (p<0.001), confidence in self-protection (p<0.001), evaluations of work safety (p<0.001), belief in family's, colleagues', or hospitals' readiness to cope with the COVID-19 outbreak (p<0.001), and willingness to participate in frontline work (p<0.01).

3.4 Relationships among mental health and other health outcomes

Table 4 shows the relationships among mental health and other health outcomes for frontline nurses. EE was positively correlated with skin lesion (r = 0.182) and negatively correlated with self-efficacy (r = −0.193), resilience (r = −0.325), intra-family social support (r = −0.170), and extra-family social support (r = −0.234). DP was negatively correlated with resilience (r = −0.208), intra-family social support (r = −0.221), and extra-family social support (r = −0.216). PA was positively correlated with self-efficacy (r = 0.376), resilience (r = 0.436), intra-family social support (r = 0.348), and extra-family social support (r = 0.363). Anxiety was positively correlated with skin lesion (r = 0.265) and negatively correlated with self-efficacy (r = −0.262), resilience (r = −0.427), intra-family social support (r = −0.274), and extra-family social support (r = −0.333). Similarly, depression was positively correlated with skin lesion (r = 0.224) and negatively correlated with self-efficacy (r = −0.409), resilience (r = −0.554), intra-family social support (r = −0.384), and extra-family social support (r = −0.455). Fear was negatively correlated with resilience (r = −0.121).
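The coefficients in Table 4 are Pearson product-moment correlations, computed as the covariance of the two variables divided by the product of their standard deviations. A plain-Python sketch of the formula (for reference only; the study ran these analyses in SPSS) is:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient.

    r = cov(x, y) / (sd(x) * sd(y)); it ranges from -1 to 1, with the
    sign indicating the direction of the linear relationship.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

For instance, two perfectly proportional series give r = 1, and reversing one of them gives r = −1.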
4 Discussion

This is the first study to examine frontline nurses' mental health and its associated factors during the COVID-19 outbreak using a large-scale cross-sectional design. The strengths of this study include the multi-centered sampling and the large sample size. We conducted our survey in a local hospital and a newly built hospital operated specially for COVID-19 patients. Among all participants, 1324 were originally working in Wuhan, whereas 690 nurses were from other provinces and were supporting Wuhan's healthcare system. The diversity of the nurses' geographic backgrounds makes our sample relatively representative of nurses in China. This study found that the prevalence of burnout, anxiety, depression, and fear was high in frontline nurses. Skin lesions were very common among frontline nurses wearing PPE. Although frontline nurses were suffering from the aforementioned physical and mental health symptoms, they still expressed their willingness to participate in frontline work during the COVID-19 outbreak. We also found that frontline nurses' mental health problems were positively correlated with skin lesion and negatively correlated with self-efficacy, resilience, social support, and frontline work willingness. In this study, frontline nurses reported moderate levels of burnout. Moreover, 60.5%, 42.3%, and 60.6% of the frontline nurses had moderate or high EE, moderate or high DP, and moderate or low PA, respectively, all of which indicated a high prevalence of burnout among frontline nurses. The COVID-19 outbreak has led to a sharp increase in admissions and presentations to hospitals and has consequently increased the workload of nurses. A previous study indicated that each additional patient added to a nurse's workload was associated with a 23% increase in the likelihood of burnout [20].
In a study of nurses during the Middle East Respiratory Syndrome outbreak, nurses started to sink into a state of burnout after a prolonged and sustained period of exposure to the deteriorating situation with no end in sight [21]. This study showed that 40% to 45% of the frontline nurses experienced anxiety or depression, with 11% to 14% having moderate to severe anxiety or depression. As during the SARS outbreak in 2003, the life-threatening nature of the disease and the increasing workload put frontline nurses at high risk of anxiety and depression [22]. Compared with a previous report on 5062 HCWs (3240 from non-isolation wards, 1607 from isolation wards, and 215 off work or in self-isolation), including 3417 nurses, 1004 doctors, and 641 medical technicians, regardless of whether they were working at the frontline during the COVID-19 outbreak in Wuhan [6], the incidence of anxiety and depression among frontline nurses in our study was relatively higher. This study also showed that the frontline nurses suffered from fear of infection and death as well as of nosocomial spread to their loved ones. As the numbers of infections and deaths surged, the COVID-19 outbreak in China caused public panic and distress [7]. HCWs caring for COVID-19 patients were also found to be scared [7]. As of 27 February 2020, 3387 healthcare professionals had been diagnosed with COVID-19 in China [23]. As of 1 March 2020, 25 healthcare workers had died, not only from COVID-19 infection but also from cardiac arrest and other ailments caused by fatigue and overwork during the outbreak [24]. Thus, for frontline nurses, colleagues becoming infected or dying might have aggravated their fears. The frontline nurses' burnout, anxiety, and depression were weakly positively correlated with skin lesion, meaning that the worse the skin lesion, the higher the burnout, anxiety, and depression levels.
Currently, in the absence of a definite and effective treatment for COVID-19, wearing PPE is the most effective way to prevent infection, especially among HCWs [25]. However, PPE is very uncomfortable and inconvenient to wear, especially when the same PPE is worn throughout a shift of several hours, and 94.8% of the frontline nurses in our study reported one or more skin lesions. Moreover, a large number of nurses did not manage and treat their skin lesions because of a lack of related knowledge or because no medicine was available at hand. Appropriate training on skin lesion prevention and adequate medicine to manage skin lesions should be guaranteed to protect frontline nurses and thus promote their mental health. Our findings showed that frontline nurses' burnout, anxiety, and depression were moderately negatively correlated with self-efficacy and resilience, meaning that nurses with better self-efficacy and resilience may experience fewer mental health problems. Higher self-efficacy is beneficial for disaster preparedness [26]. Resilience can mitigate the negative impact of work-related stress and prevent poor psychological health outcomes among nurses [27]. Individual attributes and organisational resources should be addressed to build self-efficacy and resilience, thus improving the mental health of nurses [28]. The frontline nurses' burnout, anxiety, depression, and fear were moderately negatively correlated with social support. A systematic review indicated that a lack of social support was one of the important risk factors for developing negative psychological outcomes in HCWs across all types of disasters [29]. The availability of psychological interventions, including the establishment of response, social support, medical, and assistance hotline teams, was beneficial and helpful for frontline nurses' mental health [6].
Although the prevalence of burnout, anxiety, depression, fear, and skin lesions was high, 1950 (96.8%) nurses still expressed their willingness to participate in frontline work. Moreover, based on the t-test results, the frontline nurses' mental health problems were strongly negatively associated with frontline work willingness. Nursing willingness or intention, that is, voluntary and active caring for patients during any newly emerging infectious disease [30], is important in mitigating nurses' burnout, anxiety, depression, and fear. Thus, the frontline nurses' suggestions to improve their working conditions, such as enhancing manpower and resource allocation and improving welfare and living conditions, should be addressed to support nurses' willingness to care for COVID-19 patients. This survey was conducted from 13 to 24 February 2020. The number of confirmed COVID-19 cases in Wuhan peaked on 13 February 2020 and then decreased gradually, although confirmed cases were still being reported after the completion of data collection [2]. The timing of the survey may limit generalization to frontline nurses working in other periods and in other parts of China where the pandemic situation was less severe. Moreover, the frontline nurses might have expressed less burnout, anxiety, depression, and fear than they really experienced because of social desirability. Because of the time limit and the urgency of COVID-19, we developed the fear scale for healthcare professionals and generated its thresholds based on our experiences; a future study is needed to test and refine the thresholds of the fear scale. Moreover, the cross-sectional design provided information at one time point only, and the correlations shown in this study do not imply causation. The lack of follow-up data on frontline nurses' mental health made it impossible to know their mental health statuses over time.
Longitudinal studies are recommended to capture more in-depth information about the mental health statuses of frontline nurses, both in China and in other parts of the world. Frontline nurses experienced a variety of mental health challenges, especially burnout and fear, which warrant more attention and support from policymakers. Future interventions at the organisational and national levels are needed to improve frontline nurses' mental health during the pandemic by preventing and managing skin lesions, building self-efficacy and resilience, providing sufficient social support, and ensuring frontline work willingness. Similar research and support may be extended to other frontline healthcare workers.

Declaration of Competing Interest

We declare no competing interests.

Funding

This study was funded by the 2020 COVID-19 Emergency Response Special Fund from Xiamen University (20720200025) and Huazhong University of Science and Technology (2020kfyXGYJ001) in China.
References (first-author surnames only)

Honey; Chen; Zhu; Kang; Bao; Said; Usher; Maslach; Zung; Wang; Zung; Schwarzer; Connor; Zimet; Aiken; Kang; Su; Cheng; Kilic; Delgado; Badu; Naushad; Oh
73212d4e5fd74e8ba8a89a254af2eae9_Feasibility of Tablet-Based Patient-Reported Symptom Data Collection Among Hemodialysis Patients_10.1016_j.ekir.2020.04.021.xml | Feasibility of Tablet-Based Patient-Reported Symptom Data Collection Among Hemodialysis Patients | [
"Flythe, Jennifer E.",
"Tugman, Matthew J.",
"Narendra, Julia H.",
"Dorough, Adeline",
"Hilbert, Johnathan",
"Assimon, Magdalene M.",
"DeWalt, Darren A."
] | Introduction
Individuals receiving in-center hemodialysis have high symptom burdens but often do not report their symptoms to care teams. Evidence from other diseases suggest that use of symptom electronic patient-reported outcome measures (ePROMs) may improve outcomes. We assessed the usability of a symptom ePROM system and then implemented a quality improvement (QI) project with the objective of improving symptom communication at a US hemodialysis clinic. During the project, we assessed the feasibility of ePROM implementation and conducted a substudy exploring the effect of ePROM use on patient-centered care.
Methods
After conducting usability testing, we used mixed methods, guided by the Quality Implementation Framework, to implement a 16-week symptom ePROM QI project. We performed pre-, intra-, and postproject stakeholder interviews to identify implementation barriers and facilitators. We collected ePROM system-generated data on symptoms, e-mail alerts, and response rates, among other factors, to inform our feasibility assessment. We compared pre- and postproject outcomes.
Results
There were 62 patient participants (34% black, 16% Spanish-speaking) and 19 care team participants (4 physicians, 15 clinic personnel) at QI project start, and 32 research participants. In total, the symptom ePROM was administered 496 times (completion rate = 84%). The implementation approach and ePROM system were modified to address stakeholder-identified concerns throughout. ePROM implementation was feasible as demonstrated by the program’s acceptability, demand, implementation success, practicality, integration in care, and observed trend toward improved outcomes.
Conclusions
Symptom ePROM administration during hemodialysis is feasible. Trials investigating the effectiveness of symptom ePROMs and optimal administration strategies are needed. | Individuals receiving maintenance hemodialysis have high symptom burdens that negatively affect their health-related quality of life and dialysis care experiences. Patients often underreport their symptoms, 1–3 and nephrologists tend to underestimate patient symptoms. 2 Evidence from individuals living with cancer demonstrates that symptom assessment through routine patient-reported outcome measure (PROM) administration can improve patient-provider communication, symptom distress, health-related quality of life, and survival. 4 Conceptual frameworks synthesizing the existing evidence posit that PROMs support patient care through changes to patient-care team communication, detection of unrecognized problems, changes to patient behavior and clinical management, and improved patient experiences and health outcomes. 5–12 However, in most dialysis practices, there are no standardized approaches for routine symptom collection outside of required annual health-related quality of life assessments. 13 Although there is growing interest in incorporating PROMs into clinical care, there are numerous perceived implementation barriers. Studies of nephrologists’ perspectives on PROMs have revealed concerns about patient ability and/or willingness to complete them, care team capacity to meet patient follow-up expectations, workflow disruptions, and uncertainty about optimal administration frequency and appropriate response thresholds for follow-up. 14 , In addition, paper-based questionnaires are the most common mode of PROM administration. However, ePROM capture may be more advantageous because of its capacity to (i) generate alerts to notify providers of problems, (ii) track data longitudinally, and (iii) facilitate integration of PROM data with the electronic health record. 
15 Existing data suggest that tablet-based ePROM capture is acceptable to individuals with kidney disease, including those receiving home hemodialysis. 16–18 Less is known about ePROM administration to in-center hemodialysis patients, who experience greater burdens of cognitive dysfunction and comorbidities affecting dexterity. 19–21 22 We converted a paper-based symptom PROM to an ePROM and assessed its usability. We then conducted a QI project with research substudy to improve symptom communication and, simultaneously, assess feasibility of routine collection of ePROM-based symptom data at a U.S. hemodialysis clinic. In addition, we developed care processes to support routine ePROM administration in clinical practice. We used a mixed methods approach to assess symptom ePROM implementation feasibility and its potential to improve outcomes. Methods Overview We executed a 2-phase project. In the first phase, we converted an existing, content-valid, paper-based symptom PROM to a tablet-based ePROM and evaluated its usability. In the second phase, we implemented the resultant ePROM system in routine care through a QI project and conducted a research substudy. We relied on principles of human-centered design for interactive systems 23 to guide conversion of the paper PROM to an ePROM, and the Quality Implementation Framework 24 to guide ePROM implementation. 25 Conversion of a Paper-Based Symptom PROM to a Tablet-Based ePROM We converted a paper-based, dialysis-related physical symptom PROM with demonstrated content validity to a tablet-based ePROM using an agile software development approach. Agile methodology uses incremental, iterative cycles of development ( 23 sprints ) to adapt the user interface to end-user needs, enhancing the end-technology’s effectiveness, efficiency, and usability in a real-world clinical environment. 
24 , 26 , Usability testing, a component of agile methodology, evaluates how an individual responds to, understands, and navigates application questions, while capturing problems with the application interface, navigation prompts, question wording, and/or difficulties in question completion. 27 First, we completed a series of 2-week sprints to identify end-user needs and develop the tablet user interface. Thereafter, we conducted 2 rounds of interviews and usability testing with hemodialysis patients, iteratively refining the interface in response to feedback. Participants completed the ePROM independently, and then research personnel reviewed the interface with participants using a think-aloud technique and verbal probing. 27 We recruited usability testing participants from 2 central North Carolina clinics (University of North Carolina Institutional Review Board 18-1531). Individuals were eligible to participate if they were ≥18 years old and received in-center hemodialysis for ≥6 months. We excluded individuals who were unable to read and converse in English and those with cognitive impairment (as identified by treating nephrologists). We used purposeful sampling to capture individuals of varying ages, symptom experiences, education levels, and comfort with technology, stopping recruitment on reaching data saturation. Participants received $20 remuneration. Tablet-Based ePROM System Description Usability testing resulted in the tablet-based ePROM system, Symptom Monitoring in Renal Replacement Therapy–Hemodialysis, SMaRRT-HD. SMaRRT-HD is a 14-item instrument that measures 13 symptoms (12 specific + free response) with 5-point severity Likert scales, and hours for dialysis recovery time (open-ended) ( Supplement 1 ), available in both English and Spanish. The ePROM is administered during the first 30 minutes of hemodialysis and specifies a recall period of the last hemodialysis treatment for each symptom. 
The system sends designated care team members e-mail alerts for symptoms meeting prespecified severity thresholds at the time of instrument completion ( Supplementary Table S1 ) and generates longitudinal symptom reports displaying reported symptoms from up to the last 8 ePROM administrations. QI Project and Research Substudy We implemented the SMaRRT-HD system as a QI project with the goal of improving patient-care team symptom communication. In addition, we sought to (i) assess feasibility of routine patient-reported symptom collection via an ePROM in a dialysis clinic (QI) and (ii) explore the effect of such data collection on patient-centeredness of care (research). We used a pre- and postproject design over 24 weeks, with a 4-week preproject period, 16-week intervention, and 4-week postproject period. A 7-member steering committee (patient, researchers, dialysis organization leaders, and implementation expert) supported the initiative. The QI project was approved by the participating dialysis clinic’s leadership and was determined to be nonhuman subject research by the University of North Carolina Institutional Review Board (17-0193). We performed, analyzed, and reported the QI project in accordance with the Standards for Quality Improvement Reporting Excellence guidelines ( Supplementary Table S2 ). The research substudy was approved by the University of North Carolina Institutional Review Board (19-0303), and participants provided informed consent. 28 Setting and Participants The project took place in 2019 at a North Carolina hemodialysis clinic, a joint venture between the University of North Carolina and a large dialysis organization. All clinic hemodialysis patients who were able to respond to questions about their health and personnel (nurses, patient care technicians, and medical providers) were eligible to participate. 
Patients were informed about the QI project via waiting room signs and letters and asked to notify their care team if they desired to opt out ( n = 0). All patient QI project participants were eligible to participate in the research substudy, except those lacking cognitive ability as identified by their treating nephrologist. Research recruitment methods included study informational signs, letter, and in-person recruitment by research personnel. Research participants received $25 remuneration. Symptom ePROM Implementation We relied on the Quality Implementation Framework, an implementation science framework, as the conceptual model for SMaRRT-HD implementation. The Quality Implementation Framework is organized into 4 temporal phases ( Figure 1 ) and synthesizes 25 implementation models, focusing on actions constituting quality implementation in real-world environments. In phase 1 (host setting considerations), we assessed clinic needs, resources, and readiness for SMaRRT-HD implementation, optimized the SMaRRT-HD interface through the previously described usability testing, and built capacity by engaging with local stakeholders and fostering a supportive clinic climate. In phase 2 (creating a structure for implementation), we collaborated with clinic stakeholders to create an implementation plan and provided clinic personnel trainings. In phase 3 (ongoing structure), we iteratively updated the implementation plan to respond to end-user feedback and address encountered barriers. In phase 4 (improving future applications), we collected clinic stakeholder perspectives on barriers to and facilitators of long-term use and sustainability of SMaRRT-HD. At project start, the QI support team provided in-clinic assistance, gradually withdrawing support over time. 24 Data Collection Overview We collected data to assess SMaRRT-HD implementation feasibility, including acceptability, demand, implementation, practicality, integration, and limited efficacy testing ( Table 1 ). 
29 Interviews and Observations A trained interviewer conducted pre-, intra-, and postproject semistructured interviews with clinic stakeholders (patients, clinic personnel, and medical providers) to capture end-user needs, experiences, and recommendations for change. Interviews occurred in-person at the clinic, and responses were recorded on standardized note templates. Preproject interviews assessed clinic needs, resources, capacity, and support for implementing SMaRRT-HD. Intraproject interviews were conducted every 2 weeks to assess program acceptability, perceived demands, and barriers to and facilitators of implementation. Postproject interviews assessed perceived sustainability and effects of the program, as well as plans for ongoing use. We supplemented interviews with field observations. Quantitative Outcomes The primary quantitative QI outcomes were implementation-related measures including the proportion of symptom ePROMs completed, proportion of ePROMs requiring assistance, and ePROM completion time. Exploratory QI clinical outcomes included pre- to postproject change in shortened treatments, missed treatments, and hospitalizations. We collected data on symptoms and associated e-mail alerts throughout the project. The research outcome was pre- to postproject change in patient-reported patient-centeredness of care as captured by a modified Patient Perception of Patient-Centeredness (PPPC) Scale ( Supplementary Table S3 ), a valid and reliable 14-item instrument that measures patient-centeredness of care. Lower PPPC Scale scores indicate perception of more patient-centered care, correlating with better emotional health and patient satisfaction. 30–33 Data Analysis Symptom ePROM Usability Testing During ePROM usability testing, 2 trained cognitive interviewers took detailed notes on standardized templates. 
Data were entered into Research Electronic Data Capture (REDCap) and organized by usability testing domain (i.e., navigation/use, understanding). We created overall data summaries in table format, collectively reviewing summaries and notes to confirm accurate data summation. Symptom ePROM Implementation Interviews and Observations Implementation interview data were entered into tables organized by content (i.e., SMaRRT-HD system, program implementation) and interviewee type (patient, clinic personnel, medical provider). Field observations were organized similarly. Throughout the project, 3 research personnel (JHN, MJT, and JEF) met every other week to review SMaRRT-HD implementation challenges and successes. We used thematic analysis and investigator triangulation (i.e., iterative discussions) to categorize semantically related concepts into common themes in the qualitative data. 34 Quantitative Outcomes Descriptive statistics (e.g., count [%], means ± SDs) were used to report participant characteristics, ePROM response and assistance rates, patient-reported symptoms, and clinical outcomes. We calculated pre- and postproject PPPC Scale scores according to instrument scoring instructions (mean of individual item scores). 30 , 31 , 33 We used paired Student’s t-test to compare pre- and postproject means for PPPC Scale scores and clinical outcomes. Results Conversion to Tablet-Based ePROM and Usability Testing See Supplementary Tables S4 and S5 for participant characteristics and complete findings from symptom ePROM usability testing. In brief, 13 patients (mean age 54 ± 17 years, 77% black, 31% with less than a high school education) participated. Of the 13 participants, 7 (54%) had never used a tablet before the interview. All participants displayed good understanding of the symptoms, recall period, and time-to-recovery question. In response to interview findings, we updated the interface to improve navigation and appearance. 
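The scoring and comparison described in the Data Analysis section above can be illustrated with a minimal, stdlib-only sketch: PPPC Scale scores as the mean of the 14 item scores, and a paired t statistic for the pre/post comparison. The participant scores below are hypothetical, and a p-value (as the paper reports) would additionally require a t-distribution CDF, e.g. scipy.stats.ttest_rel:

```python
import math
from statistics import mean, stdev

def pppc_score(item_scores):
    """PPPC Scale score: mean of individual item scores
    (lower = perception of more patient-centered care)."""
    return mean(item_scores)

def paired_t_statistic(pre, post):
    """t = mean(d) / (sd(d) / sqrt(n)) for paired differences d = post - pre."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

pre = [1.2, 1.5, 1.1, 1.4]   # hypothetical per-participant PPPC scores
post = [1.1, 1.4, 1.2, 1.3]
t = paired_t_statistic(pre, post)
```

This is a sketch of the stated method, not the study's analysis code.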
Specifically, we (i) removed the progress tracking bar to decrease clutter, (ii) replaced the auto-advance feature with a “next” button so the user could control screen advancement timing, (iii) added a pop-up keypad for recovery time to ease entry, (iv) changed the phrase “write in” to “type in” to align terminology with the administration mode, and (v) altered the dialysis machine graphic to make it more realistic. Round-2 participants ( n = 3) confirmed understanding of the symptoms, recall period, and time-to-recovery question. All were able to navigate the updated ePROM, including the pop-up keypad for recovery time entry. An 83-year-old woman with no prior tablet experience commented, “I caught on easily.” A 32-year-old man found the application “simple and easy.” QI Project and Research Participant Characteristics Table 2 displays participant characteristics, and Supplementary Figure S1 displays patient participant flow diagrams. At QI project start, there were 62 patients with a mean age of 61 ± 15 years and dialysis vintage of 6 ± 5 years; 21 (34%) were women, 21 (34%) were black, and 10 (16%) were Spanish-speaking only. There were also 19 care team participants (4 medical providers, 5 nurses, 8 patient care technicians, 1 dietitian, and 1 social worker). Overall, the 32 research participants had similar characteristics to the QI project patient participants. Symptom ePROM Implementation Findings Overview Figure 2 displays the project timeline. We assessed clinic needs, resources, and readiness for SMaRRT-HD implementation and built capacity among clinic stakeholders through staff meeting presentations, individual interviews, and personnel trainings. Clinic personnel and the QI support team (MJT, JHN, and JEF) codeveloped an initial SMaRRT-HD implementation plan. Data from intraproject interviews and field observations informed iterative updates throughout the project. 
Stakeholder Feedback and Responsive Changes Table 3 displays findings, responsive project updates, and future recommendations from the pre-, intra-, and postproject interviews. In brief, the clinic had no preexisting formal approach to symptom assessment but considered symptoms important. Care team members noted variability in symptom reporting across patients, recognizing a subset of patients who never or rarely reported symptoms, even when asked. The care team opted for weekly ePROM administration frequency with symptom alert e-mails sent to (i) a central clinic e-mail account accessed by all nurses and (ii) individual medical provider e-mail accounts. To minimize burden, the care team selected a higher symptom severity threshold to trigger e-mail alerts. After project implementation, care team members reported e-mail alerts as too infrequent, leading to missed symptoms and inadequate follow-up. In response, the symptom severity alert thresholds were lowered to generate more alerts (week 5); however, care team members then found the alerts too frequent, and an intermediary threshold was implemented (week 8) ( Supplementary Table S1 ). In addition, clinic personnel noted poor symptom follow-up by medical providers. To prompt this follow-up, clinic personnel provided printed e-mail alerts and longitudinal symptom reports for medical providers to use while rounding. Overall, patients found care team symptom follow-up acceptable but noted there was no follow-up for some symptoms, particularly itching and thirst. Similarly, 1 medical provider observed less follow-up for non–fluid-related symptoms. Clinic personnel confirmed this, acknowledging uncertainty about how to address such symptoms. 
All agreed that future implementations should include suggested care team guidance for symptom management. Despite these challenges, however, all felt that ePROM administration improved patient-care team communication by facilitating conversations about symptoms, a high-priority patient issue. At week 8, clinic stakeholders suggested changing the administration frequency from weekly to monthly to decrease patient burden; however, all were concerned about missing symptoms with this approach, and requested concurrently changing the recall period from “last treatment” to “last week” to capture symptoms over 3 treatments. Monthly administrations with a 1-week recall period were thus used for the remaining 8 weeks of the project. Although this approach was less burdensome, clinic personnel were frustrated by their inability to link reported symptoms to specific treatments. To address this concern, on receiving e-mail alerts, nurses discussed the reported symptoms with patients to identify the associated treatment. The care team viewed the ability to link symptoms to individual treatments as essential to symptom management. As such, all concluded that a single treatment recall period was the preferred approach and ultimately recommended twice-monthly administration. Although providing written longitudinal symptom reports to medical providers was helpful, all desired report linkage to the electronic health record for point-of-care accessibility. Patients and nurses requested simplified reports that omitted symptoms reported as “none,” citing a preference for fewer data to interpret. Finally, all agreed that reviewing symptom reports at monthly Quality Assurance and Performance Improvement meetings would better facilitate full interdisciplinary team input. Symptom ePROM Implementation Outcomes During the 16-week project, SMaRRT-HD was administered 496 times to 66 unique patients (398 weekly and 98 monthly administrations). 
The overall completion rate among patients present for treatment was 84% with varying completion rates across time ( Figure 3 ). The completion rate was <80% at 2 administrations: week 6, 73% (clinic water problem on day 1 of administration) and week 10, 70% (uncharged tablets on day 1 of administration). Reasons for missed ePROMs included patient factors (late arrival, sleeping, refusal, and being busy with the medical team) and clinic personnel factors (forgetting and being too busy). The system-recorded median [quartile 1, quartile 3] time to completion was 3 [2, 4] minutes. Actual completion times were usually shorter than system-recorded times, however, as tablets were often “started” and set down before being handed to the patient to complete. In general, patients were able to complete SMaRRT-HD without assistance. Support with ePROM administration from the QI support team decreased over time ( Figure 3 ). Reasons for assistance included poor vision (28%), patient care technician preference (23%), trouble with tablet (e.g., poor dexterity), hemodialysis machine alarming with arm movement or patient care technician concern that machine could alarm (19%), tablet unfamiliarity (10%), fell asleep (9%), requested assistance (6%), and isolation room utilization (5%). Patient need for assistance due to tablet unfamiliarity and requested assistance decreased to 0% by week 8. In some cases, patient care technicians preferred to help patients with ePROM completion regardless of patient ability, because they felt their assistance sped up administration. The care team made numerous changes to patient management in response to SMaRRT-HD symptom reports. Example interventions included changes in target weight, dialyzer and phosphate binder prescriptions, and ultrafiltration rate; patient education about salt/fluid restrictions and thirst management; transition from profiled ultrafiltration to conventional ultrafiltration; and administration of saline and medications (e.g., antiemetic, antipruritic). 
Of these interventions, target weight changes were most common. In 1 case, a follow-up conversation about physical symptoms prompted patient disclosure of mood symptoms, resulting in depression treatment. Clinical Outcomes and Patient-Centeredness of Care Symptoms and Alerts Figure 4 a displays reported symptoms over the 16-week project, with thirst, cramping, and itching reported most frequently. Of the 495 ePROMs with complete data (a computer system error resulted in incomplete data on 1 ePROM), 121 (24%) had “none” for all symptom items, and 306 (62%) had no reports of symptoms above mild severity. Fifteen (3%) of the ePROMs (12 unique patients) had write-in symptoms. Reported write-in symptoms included back pain, chills, cough, diarrhea, and pain in both hands, among others ( Supplementary Table S6 ). The time-to-recovery question was completed on 495 (99.8%) ePROMs, with a median [quartile 1, quartile 3] response of 2 [1, 4] hours and a range of 0 to 96 hours. Of the 66 patients, 64 (97%) reported at least 1 symptom during the QI project, and 49 (74%) reported at least 1 symptom at a severity of moderate or greater. The number of system-generated e-mail alerts for symptoms meeting the specified threshold ranged from 2 to 22 per week, depending on the alert threshold paradigm ( Figure 4 b). Adherence and Patient-Centeredness of Care Among the 55 patients with pre- and postproject data, the number (%) of patients having at least 1 unexcused hemodialysis absence, shortened treatment, and hospitalization declined from pre- to postprogram, but these declines did not reach statistical significance (unexcused hemodialysis absences, 14 [25%] vs. 11 [20%] patients [ P = 0.5]; shortened hemodialysis treatments, 29 [53%] vs. 25 [45%] patients [ P = 0.4]; hospitalizations, 5 [9%] vs. 2 [4%] patients [ P = 0.4]). 
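The "median [quartile 1, quartile 3]" summaries used throughout the Results (e.g., time to recovery of 2 [1, 4] hours, completion time of 3 [2, 4] minutes) can be reproduced with Python's statistics module. The recovery times below are illustrative, and the exclusive (n+1) quartile method is an assumption, since the paper does not state its quartile convention:

```python
from statistics import quantiles

def median_iqr(values):
    """Return (median, Q1, Q3) using the exclusive (n+1) quartile method."""
    q1, q2, q3 = quantiles(values, n=4)  # default method="exclusive"
    return q2, q1, q3

hours = [0, 1, 1, 2, 2, 3, 4, 6, 8, 24, 96]  # hypothetical recovery times
med, q1, q3 = median_iqr(hours)
print(f"{med:g} [{q1:g}, {q3:g}] hours")
```

Different quartile conventions (e.g., inclusive) can shift Q1 and Q3 slightly on small samples, which is why reporting the method matters.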
Among the 30 research participants with complete data, there was no change in pre- to postproject PPPC Scale scores, 1.3 ± 0.4 and 1.3 ± 0.4, respectively ( P = 0.7). Overall Feasibility Assessment Overall, symptom ePROM administration was feasible as demonstrated by affirmative evidence of acceptability, demand, implementation, practicality, integration, and limited efficacy testing ( Table 1 ). Specifically, qualitative data indicated perceived benefits from ePROM administration for both patients and care team members, and pre- and postproject quantitative data showed a non–statistically significant trend toward improved clinical outcomes. Data from interviews, observations, and ePROM completion and assistance rates suggested that SMaRRT-HD was practical and could be integrated into clinical workflows with minimal added burden. The system’s perceived value, overall feasibility, and potential for sustainability were underscored by the clinic’s decision to continue using SMaRRT-HD postproject. 29 Discussion We demonstrated that symptom ePROM data collection during routine hemodialysis care is feasible. Our findings suggest that individuals receiving in-center maintenance hemodialysis are able and willing to complete ePROM symptom assessments during dialysis without clinical care interruption. Moreover, such data collection has the potential to improve patient-care team communication about symptoms and associated clinical outcomes. Notably, our findings underscore the importance of patient and care team engagement and flexibility when developing and implementing new clinic processes, such as ePROM data capture in the dialysis environment. Collection of patient-reported symptoms is associated with improved patient-care team communication, symptom management, and health-related quality of life, as well as reduced hospitalizations and mortality among individuals with advanced cancer and those receiving palliative care. 
ePROM-collected data can facilitate patient-care team discussions about symptoms, promoting shared decision-making and demonstrating care team appreciation for a patient-prioritized aspect of care. 5–10 Such interactions can also strengthen therapeutic relationships, extending beyond symptoms. Moreover, symptom recognition facilitates interventions aimed at symptom alleviation, potentially improving treatment tolerance and adherence and subsequent clinical outcomes. 17 Interest in tailoring treatment plans to individual patient priorities has fueled efforts to incorporate PROMs across the spectrum of kidney disease. 35 , 36 Prior studies among individuals with kidney disease indicate that ePROMs may be usable by patients, 14 , 20 , 21 but pragmatic implementation concerns remain. 14 , 15 As such, it is necessary to rigorously study the impact of ePROMs on outcomes and implementation strategies. Studying the two in parallel has the potential to expedite translation of effective clinical interventions into practice. Therefore, we first established the usability of SMaRRT-HD using an agile methodology approach and then evaluated its implementation using the Quality Implementation Framework, a framework that supports implementation through a series of steps including assessment, collaboration, monitoring, and self-reflection. We used capacity-building strategies, intervention fit assessments, and codevelopment of an implementation plan to increase likelihood of successful SMaRRT-HD implementation. Moreover, by obtaining early buy-in and iteratively integrating input from clinic personnel, we empowered them to take ownership and identify solutions to encountered challenges. 25 Recognizing that modifications are often needed to accommodate host settings, we allowed mid-project, stakeholder-informed changes. For example, we adjusted the symptom severity threshold for e-mail alerts twice to balance the need for real-time information with the associated follow-up burden. 
We also changed the administration frequency from weekly to monthly and the recall period from last treatment to the last week of treatments. Although these latter changes reduced burden, patients and care team members were frustrated by the associated loss of linkage between reported symptoms and specific treatments. In the end, these changes increased burden, 25 as they resulted in the additional step of nurses asking patients to recall with which treatment the reported symptom occurred. The desire to link symptoms to individual hemodialysis treatments led QI project participants to conclude that SMaRRT-HD may be optimally administered on a twice-monthly basis using a single treatment recall period. This also underscored the importance of linking symptoms to specific treatments for management considerations. For example, when patients reported cramping, nurses subsequently examined the ultrafiltration volume/rate and target weight–post-weight differential from the associated treatment, informing intervention selection. In addition, studying implementation highlighted the need for guidance about symptom management strategies. Care teams were less likely to follow up on symptoms such as itching and thirst than on cramping and shortness of breath, acknowledging uncertainty about how to manage some symptoms. Provision of symptom management guidance algorithms may be helpful in future implementations. Finally, our data confirm the previously reported burden of symptoms experienced by individuals receiving dialysis. 1–3 It is striking that 97% of patients reported at least 1 symptom during the project, and nearly 75% reported at least 1 symptom at moderate severity or higher. Care team members reported feeling more informed about patient symptoms and made numerous responsive management changes. They also noted that communication about symptoms and other topics increased, even on days when SMaRRT-HD was not administered. 
However, there was no change in patient-reported patient-centeredness of care, possibly because of the clinic’s favorable preproject PPPC Scale scores. Our treatment adherence and hospitalization data showed nonsignificant trends toward improvement, suggesting potential for symptom ePROM administration to affect clinical outcomes, but these findings should be interpreted with caution given their exploratory nature. 37 Strengths of our project include use of an established implementation framework under which we engaged key stakeholders throughout, ultimately achieving clinic ownership of the new care process, and rigorous evaluation of both qualitative and quantitative outcomes. However, our project has limitations. First, we lengthened the recall period mid-project in response to stakeholder input, potentially introducing recall bias. As such, the symptom data should be interpreted with caution. In addition, findings may not transfer to environments with different implementation barriers and practice climates. For example, we conducted our project in a university-affiliated clinic in a rural setting with a substantial Spanish-speaking population, staffed by large dialysis organization–employed personnel who use corporate clinical algorithms. Preproject data suggest a positive practice environment in which most patients felt comfortable discussing their symptoms with the care team, despite having no established processes for symptom ascertainment, documentation, or follow-up. The positive preproject clinic environment may also explain the lack of change in pre- to postproject PPPC Scale scores. As implementation processes must be tailored to individual environments, we present our experience not as a recommendation for a universal implementation strategy but rather as an illustration of implementation science guiding process development in dialysis. 
Finally, this was a pilot feasibility project; we neither sought nor were powered to detect intervention effects on outcomes. In conclusion, we demonstrated that routine symptom ePROM administration to in-center hemodialysis patients is feasible and has the potential to improve outcomes. We propose that the next step in SMaRRT-HD evaluation is a randomized trial investigating its effectiveness at improving outcomes while simultaneously evaluating optimal implementation strategies to expedite its potential future uptake in clinical practice. Disclosure In the past 2 years, JEF has received speaking honoraria from American Renal Associates, American Society of Nephrology, Dialysis Clinic, Incorporated, National Kidney Foundation, and multiple universities, as well as investigator-initiated research funding related to and unrelated to this project from the Renal Research Institute, a subsidiary of Fresenius Medical Care, North America. JEF is on the medical advisory board to NxStage Medical, now owned by Fresenius Medical Care, North America, and has received consulting fees from Fresenius Medical Care, North America and AstraZeneca, Inc. In the past 2 years MMA has received honoraria from the International Society of Nephrology and investigator-initiated research funding unrelated to this project from the Renal Research Institute, a subsidiary of Fresenius Medical Care, North America. All the other authors declared no competing interests. Acknowledgments The authors thank our collaborating clinic’s patients, personnel, medical providers, and governing body for their willingness to try new approaches, and their commitment to creating an outstanding culture of dialysis care. We also thank Lorien Dalrymple, Derek Forfang, Robert Kossmann, Mahesh Krishnan, and Francesca Tentori for their contributions while serving on the steering committee. 
This work was supported by an unrestricted, investigator-initiated research grant (A17-1082) from the Renal Research Institute (RRI), a subsidiary of Fresenius Medical Care, North America. RRI played no role in study design; collection, analysis, and interpretation of data; writing the report; or the decision to submit the report for publication. This project made use of systems and services provided by the Patient-Reported Outcomes Core (PRO Core; pro.unc.edu) at the Lineberger Comprehensive Cancer Center (LCCC) of the University of North Carolina at Chapel Hill. PRO Core is funded in part by a National Cancer Institute Cancer Center Core Support Grant (5-P30-CA016086) and the University Cancer Research Fund of North Carolina. The LCCC Bioinformatics Core provided the computational infrastructure for the project. In addition, this work was supported in part by the University of North Carolina at Chapel Hill’s Connected Health Applications & Interventions Core (CHAI Core) through a grant from the National Institutes of Health (DK056350) to the University of North Carolina Nutrition Obesity Research Center, and a grant from the National Cancer Institute (P30-CA16086) to the LCCC. JEF is supported by the National Institute of Diabetes and Digestive and Kidney Diseases (K23 DK109401). Supplementary Material Supplementary File (PDF) SMaRRT-HD System Description. Table S1. Summary of changes to symptom severity thresholds for e-mail alerts. Table S2. SQUIRE guidelines and manuscript section with the relevant content. Table S3. Patient perception of patient-centeredness scale adaptation. Table S4. Tablet-based SMaRRT-HD usability testing participants and ratings. Table S5. Key usability testing findings with SMaRRT-HD updates and rationale. Table S6. Reported write-in symptoms and associated severity. Figure S1. Quality improvement project and research participant flow diagrams. | [
"ABDELKADER",
"FLYTHE",
"COX",
"WEISBORD",
"BASCH",
"DUPONT",
"ESPALLARGUES",
"SCHOUGAARD",
"ETON",
"SMITH",
"CHEN",
"VANEGDOM",
"GREENHALGH",
"AIYEGBUSI",
"TONG",
"SCHICKMAKAROFF",
"BASCH",
"SNYDER",
"WONG",
"SCHICKMAKAROFF",
"SCHICKMAKAROFF",
"SARAN",
"FLYTHE",
"MEYER... |
875588c5967a4310b58bbc98e71a1079_Non-specialist emergency medicine qualifications in Africa Lessons from the South African Diploma in_10.1016_j.afjem.2022.04.006.xml | Non-specialist emergency medicine qualifications in Africa: Lessons from the South African Diploma in Primary Emergency Care | [
"Geduld, H.",
"Cloete, D.",
"Dickerson, R.",
"Groenewald, A.",
"Stephens, T.",
"Fredericks, D.",
"Parker, A.",
"Jooste, W.",
"Lahri, S."
] | Introduction
Non-specialist emergency medicine qualifications are an important step in developing the specialty of emergency medicine. The Diploma in Primary Emergency Care (Dip PEC) of the Colleges of Medicine of South Africa is one of the oldest registrable qualifications. Reviewing its changing role over time has lessons for academics developing Emergency Medicine training in Africa.
Methods
Through a series of meetings and stakeholder engagements, the Council of the College of Emergency Medicine conducted a three-year review of the qualification, focusing on the curriculum, assessment processes, success rate, and role of the qualification in the South African medical context. A survey of the perceptions of graduates over the last six years was also conducted.
Results
The survey showed candidate numbers increased dramatically from 2011 to 2017, resulting in an entry cap. Lessons identified included ensuring that the qualification is responsive to the state of development of emergency medicine in the country, needing aligned and valid assessment processes and maintaining the value of the qualification in context.
Discussion
Emergency medicine qualifications are dynamic, both in and of themselves and in how they relate to their context. Program designers must prioritize ongoing evaluation from the start. | Introduction With the growth of emergency care in Africa, there is a great deal of interest in developing educational products to increase emergency medicine (EM) knowledge and skills. Whilst these products may be in addition to or as precursors to EM specialist training, they may also be stand-alone products aimed at improving EM competencies in the general physician population. These educational products range from short introductory courses such as the African Federation for Emergency Medicine (AFEM) Keystone course and the World Health Organization (WHO) Basic Emergency Care course to more extended teaching and learning programs such as the AFEM one-year curriculum [ 1 , 2 ]. In South Africa, there is also a range of EM university-based Masters degrees available for the academic development of non-EM specialists, such as the Master of Science at the University of Witwatersrand and the Master of Philosophy at the University of Cape Town. One of the oldest registrable EM qualifications in Africa is the Diploma in Primary Emergency Care (Dip PEC) of the College of Emergency Medicine of South Africa (CEM) – a constituent college of the Colleges of Medicine of South Africa (CMSA). The CMSA represents 29 specialty medical and dental colleges and is the national assessment body for postgraduate medical training in South Africa. Specialist training requires 4 years of registration for a university-based Master of Medicine (MMed) while in a training post recognised by the Health Professions Council of South Africa (HPCSA) and completion of the relevant fellowship examinations through the CMSA. The relationship between specialist and non-specialist qualifications is a complex and dynamic one. 
This is particularly relevant to the African context, where there is currently a strong impetus for specialist qualifications [3] . This development may have unintended consequences on the educational environment, increasing the demand for specialty training and the competition among junior doctors for training posts. The CEM has seen a significant evolution in the role and scope of the Dip PEC over the past few years, and we believe that an analysis of this process has value for other African countries looking to develop their own EM educational products and qualifications. The Dip PEC was the first registrable postgraduate EM qualification in South Africa and was first offered in 1986 by the College of Family Practitioners. It remained the only postgraduate EM qualification available in the country until the recognition of EM as a specialty by the HPCSA in 2003 and the subsequent establishment of the CEM in 2004 ( Fig. 1 ) [4] . The Dip PEC is registrable with the HPCSA, allowing candidates to use the post-nominals professionally. The Dip PEC was initially targeted at the cohort of South African doctors already working in emergency care, emergency units, and emergency medical services seeking to formalize their knowledge, skills, and expertise. Many of them had numerous years of experience and had completed several emergency care-related short courses; some of these doctors were eventually ‘grandfathered’ by the HPCSA as EM specialists. As a result, the examination standard was set at an expert level, and the pool of candidates was relatively small. The Dip PEC was also seen as a marker of commitment to a specialist career in EM. For a brief period (2003–2004), it was considered the primary examination for the Fellowship of the College of Emergency Medicine (FCEM) before being replaced by a basic science examination. Many EM MMed training programs in South Africa have the Dip PEC as a requirement for entry. 
Over time, the CEM council aligned the Dip PEC outcomes to the standard of other diplomas offered by the CMSA. These are extended, continuous medical education activities focused on non-specialist, primarily junior, doctors who are developing their competencies in a particular field. Current regulations suggest that the competency level required is that of a generalist working at a district-level hospital [5] . The focus is on developing the skills and knowledge to manage common emergencies. The syllabus is broad and includes emergencies across all specialties and age groups, as well as associated clinical and practical skills. Learning is self-directed with no formal teaching or faculty. Like all CMSA examinations, the exam is run each semester, with the clinical examination sites rotating through five cities. Currently, admission to the Dip PEC examination requires candidates to be registered or registrable with the HPCSA as a medical practitioner and to possess a valid Basic Life Support certificate. The candidate must have completed their internship and six months at an emergency centre accredited by the CMSA. Alternatively, candidates may enter the exam by demonstrating active interest in EM through a comprehensive learning portfolio [5] . The Dip PEC is currently the largest diploma with a clinical component offered by the CMSA. In terms of popularity, it has outstripped the diplomas in Anaesthetics and Child Health. It is surpassed in numbers only by the diploma in HIV Management – a written-only diploma [6] . The popularity of the Dip PEC has substantially increased in recent years, with 21 successful candidates in 2012, compared to 136 in 2017 ( Fig. 2 ) [6] . This coincided with the growing awareness of EM as a speciality and the development of EM specialist training in major cities. 
There was increasing engagement by EM specialists with undergraduate medical programs, making young doctors more aware of EM knowledge and skills and potentially highlighting the lack of training. The Dip PEC written assessment initially consisted of two three-hour short-answer question papers. In 2019 it transitioned to one Multiple Choice Question (MCQ) paper and one Visually Aided Question (VAQ) paper. The written examination can be taken at multiple national examination centres and a selected number of African CMSA examination sites. Candidates who are successful in the written examination are invited to the clinical examination, an Oral Simulated Clinical Examination (OSCE) that includes practical skills assessment stations, resuscitation skills assessment stations and oral examination stations. The success rate of the diploma has increased since its inception. This is likely due to a combination of EM skills training entering undergraduate curricula, more exposure to EM practice and consultants, and growing comfort with simulation-based assessment practices. It would also be expected that the shift in standard to generalist competencies would lead to a higher pass rate. Over the 3 years of review, the pass rate has been between 75% and 85%. There is increasing interest in the qualification from outside of South Africa – several African EM-led units have applied to be training sites or are supporting entry to the examination through portfolio use. Successful African candidates have come from Kenya, Zimbabwe and Malawi. The Dip PEC straddles the public-private emergency care divide well, focusing on core academic EM knowledge and critical basic EM skills. Non-specialist physicians run most emergency centres in South Africa. There are proportionally more private emergency centres than public emergency centres registered as Dip PEC accreditation sites.
All public hospitals accredited for intern training are accredited Dip PEC training sites; however, few of these have EM specialists managing the emergency centres. For an emergency centre to become an accredited Dip PEC site, it must be run by a physician who holds the diploma, or by an EM or family medicine specialist, and it must have an active training program with formal clinical supervision. The Dip PEC has an essential role in supporting CPD and professional development in private emergency centres. In 2011, the College of Emergency Medicine introduced the Higher Diploma in Emergency Medicine (H Dip Emerg Med). While the Dip PEC focused on basic skills, the goal of the Higher Diploma - which could only be entered by candidates who had completed the Dip PEC more than two years prior and had built a portfolio of further EM experience - was to demonstrate a greater degree of EM competency [7] . This may allow for career and salary progression for non-specialist emergency medicine doctors in the public and private sectors. Moreover, this enabled the Dip PEC to become a more basic qualification. Challenges with articulation with higher education programs and registration of the qualification with the Health Professions Council have limited uptake, with only two candidates completing the qualification thus far. In 2020 the CMSA approved limiting entry into the Dip PEC examination to 80 candidates per semester. This decision to restrict candidate numbers was complex and controversial but was mandated by logistical and resource constraints related to conducting the clinical component of the examination. More specifically, the limited pool of voluntary examiners and the lack of venues that could accommodate large candidate numbers (exceeding 120) made large OSCEs unfeasible. This restriction was initiated prior to the COVID pandemic; the pandemic then delayed the exam so that only one sitting was held in 2020.
Methods From 2018 to 2020, the CEM council conducted an in-depth evaluation of the evolving role of the Dip PEC in the landscape of EM training and general healthcare delivery in South Africa. As part of the program evaluation, an electronic survey of 293 successful Dip PEC candidates was conducted by a Masters of Medicine (EM) student at Stellenbosch University. Results Participants represented 52% of all candidates completing the Dip PEC between 2012 and 2019. At the time of writing the examination: 82% were working in EM, 66% were working directly with an EM physician, 81% of candidates were early in their careers (within the first seven years post-graduation), and 55% were within two years of internship [6] . The top motivations expressed by participants were to broaden and deepen EM knowledge, obtain a further academic qualification, and prepare for a career in this specialty. Interestingly, the value of the qualification in enabling travelling, working and living abroad was frequently mentioned. The impact of the coronavirus-19 (COVID) pandemic resulted in the clinical examination being postponed and forced the CEM to reflect on the goals and process of the Diploma. The Dip PEC review was extended to include the impact of the COVID pandemic on the structure and format of the examination. In the current process of reconceptualising the examination format, the focus is on increasing capacity for the examination without straining the College examiner pool, but also, in a climate of growing acceptability of remote examination, on reframing the examination so that only components that have true value are assessed, using assessment methods that are valid, reliable and safe. The examination has evolved into testing knowledge, skills, behaviours, and attitudes to reflect current best educational and assessment practices, and not only core clinical procedures.
Current discussions are framed around remote examination of skills, decentralizing the sign-off of clinical skill competencies prior to entry, and effective low-cost simulation techniques for assessment. When considering the evolution of the Dip PEC qualification in South Africa, we can identify several vital lessons that may be generalisable to other African settings. Responsiveness The Dip PEC is a powerful tool that fills gaps in general provider EM knowledge, competency, and confidence in South Africa arising from the limited EM training of undergraduate and junior providers. When planning a qualification, it may be helpful to consider national-level outcomes. General EM qualifications may be a critical part of influencing large-scale change in healthcare provider competency and undergraduate curricula. Alignment The curriculum should mirror the purpose of the qualification and be aligned to the scope of EM knowledge and practice nationally. Alignment to a national standard of care can be difficult when considering the differences in private and public sector practice. The CEM has a nationally elected council drawn from public and private practice, including specialist and non-specialist members. Oversight of the diploma is performed by the CEM Council and the panel of examiners from around the country. Feedback about content and candidate performance in each examination is routinely given to the CEM council for consideration, and blueprinting of the curriculum is done regularly. Adaptability The role of a qualification changes as EM develops in a country and learning needs evolve - the Dip PEC had to adapt to be fit for purpose. It was initially associated with specialist-level training and was regarded as the highest available EM qualification. With the growth of specialist training, the qualification scope changed to a more basic general qualification with a broader candidate pool.
When interpreting the value and standard of the Dip PEC, the time of completion is critical to understanding the level at which assessment was performed. Programs need to understand the dynamic nature of qualifications. Frequent feedback, regular needs assessments and reflection on the purpose and scope of the qualification are an essential part of ensuring its relevance. Validity of assessment Continuous evaluation of assessment processes is necessary to ensure the validity of the assessment itself. The learning environment for the Dip PEC is mainly service-driven and not directly associated with higher educational institutions, and candidates are predominantly self-motivated, independent learners. In this unstructured environment, great value is placed on appropriate and valid assessments. A critical review of each examination cycle by the convenor and moderator is also fed back to the council. Educationalist input, the incorporation of evidence-based assessment practices and ongoing examiner training are key to ensuring trustworthiness. Reputability and value The value of the qualification is partly based on the reputation of the assessment and the assessing body. The external use of the qualification and the way it is viewed globally are important considerations. The Dip PEC leans heavily on the stature of the CMSA and the history and reputation of this 66-year-old organization. The value of the examination is reinforced by the EM community established around it – members of this community are responsible for conducting the examination, establishing training environments, and creating value for the examination. In turn, through the examination, the community is built in a cascading way. Furthermore, the qualification has intrinsic value in terms of identity and demonstrable competence for the individuals in this community - both examiners and successful candidates.
Globalization The migratory nature of modern medical practice, and specifically EM practice, is an important consideration. Professional freedom and self-determination mean that doctors may travel for qualifications or work, and may attempt the qualification simply because it is accessible. The pool of candidates is often more heterogeneous than predicted. A clearly defined curriculum with specified learning outcomes and an explicit assessment strategy, including rules around language and performance, is needed for equitable assessment. Commodification Qualifications do not exist in a vacuum. In medicine, they may have financial and employment drivers such as career advancement and increased remuneration. In these settings, this may drive the popularity of the qualification. The Dip PEC in South Africa is associated with charging higher locum fees for professional work and is an entry point for employment on cruise ships. Discussion The CEM predicts ongoing change in the role and scope of the Diploma. The impact of the pandemic has driven new adaptations in the format of the exam, with increasing remote-based assessment and a view to incorporating work-based assessments. The increased engagement of EM in undergraduate training nationally suggests that the basic emergency skills that are now the focus will soon be a required competence for all graduates. The challenge will then be to identify the new role for the Dip PEC, potentially as a qualifier to work in an emergency centre or as a surrogate for higher-level emergency care skills and the transition from junior to senior emergency care provider. These evolutions would necessitate a review of the curriculum, the entry requirements and assessment processes, and prompt the CEM to explore the potential impact on the national landscape of medical qualifications and human resources for health. With interest from many African countries in the diploma, it is useful to reflect on its meaning and value.
It is clear that the role and scope of the curriculum are determined by the human resource needs of South Africa and that the Dip PEC is not a static qualification. African candidates likely complete the diploma for the same reasons as local candidates, namely to improve EM skills and knowledge and to hold a qualification that shows their commitment to EM as a specialty. In some cases, this may increase their eligibility for local or sponsored international specialist training. The political power of a South African qualification is also a factor, particularly in Sub-Saharan Africa. New qualifications should consider issues such as accessibility for international candidates and transferability of the qualification. The scope of the curriculum and the level of assessment should be clarified so that appropriate judgements can be made in other settings as to the value of the qualification. It is also useful to look more broadly at the impact a new medical qualification will make across the board for all medical graduates in a country or region. One may need to review the need for generalist EM training, the potential for growth, and the inherent difficulties arising from such growth. It is useful in the planning stages to consider how the qualification will articulate with other existing or future qualifications and whether it could have direct professional value to candidates in terms of salary or promotion. Early engagement with stakeholders, including the national medical registration board, universities and other academic institutions, and national education qualification authorities, is crucial. New emergency medicine educational products are constantly being developed across Africa. The impact of these qualifications should be considered not just in terms of EM but also in terms of general health system training, the potential candidate pool, and national human resources for health.
Systems should be in place to ensure that the assessment remains valid and that the outcomes, in terms of candidates and the needs of the health system, are continuously evaluated. Dissemination of results The thoughts and perspectives were generated through work done by the College of Emergency Medicine Council and were discussed at the biannual council meetings. The findings of the candidate survey were published in the Transactions of the Colleges of Medicine of South Africa. Authors' contribution Authors contributed as follows to the conception or design of the work; the acquisition, analysis, or interpretation of data for the work; and drafting the work or revising it critically for important intellectual content: HG contributed 40%; DC and WJ contributed 15% each; and RD, DF, AP, AG, TS and SL contributed 5% each. All authors approved the version to be published and agreed to be accountable for all aspects of the work. Declaration of Competing Interests The authors declared no conflicts of interest. Acknowledgements The thoughts and perspectives were generated through work done by the College of Emergency Medicine Council and were discussed at the biannual council meetings. | [
"SAWE",
"WALLIS",
"CLOETE"
] |
5ffa8675c6af4d24bef2d991c6cc32ed_Genetic polymorphism of CYP2D6 gene among Egyptian hypertensive cases_10.1016_j.jobaz.2012.12.002.xml | Genetic polymorphism of CYP2D6 gene among Egyptian hypertensive cases | [
"Ali, Ahmed A.A.",
"Wassim, Nahla M.",
"Dowaidar, Moataz M.",
"Yaseen, Ahmed E."
] | Background
Hypertension is a cardiovascular disease that is affected by environmental, demographic and genetic factors.
Objective
This study aims to determine the frequency of the CYP2D6∗1, ∗3, ∗4 and ∗5 variants among hypertensive cases, including subgroups with obesity and with cardiac complications.
Subjects and methods
DNA was isolated from peripheral blood samples collected from 123 hypertensive cases and from 429 healthy unrelated controls using the MagNA Pure system. Genomic DNA was used to determine the frequency of the CYP2D6∗1, CYP2D6∗3, CYP2D6∗4 and CYP2D6∗5 allelic variants by the LightCycler real-time polymerase chain reaction (real-time PCR) technique.
Results
Comparing cases of hypertension and controls with regard to the genotypic allelic variants of the CYP2D6 gene, hypertensive cases showed a significantly higher frequency of the wild genotype 1/1 compared to controls (85.4% vs. 74.8%, p = 0.01), with a lower frequency of the mutant genotype 4/4 (1.6% vs. 8.6%, p = 0.008). This phenomenon was also manifested in the subgroup with obesity, which had significantly fewer mutant homozygous forms than obese controls (2.3% vs. 9.5%, p = 0.04), and in the subgroup with cardiac complications (88.2% vs. 74.8%, p = 0.01).
Conclusion
CYP2D6 polymorphism is positively associated with hypertensive cardiac complications as well as hypertensive obese cases. | Introduction Cardiovascular diseases are complex disorders that are affected by environmental, demographic and genetic factors. It is estimated that the genetic component of blood pressure variation ranges from 30% to 50% ( Tanira, 2005 ). Evidence from animal models and human studies suggests that cytochrome P450 plays a mechanistic role in the development of hypertension. The human cytochrome P450 (CYP) family – containing more than 59 functional genes and 58 pseudogenes – plays an important or dominant role in the metabolism of some endogenous compounds, therapeutic drugs, and other xenobiotics ( Zhou et al., 2009; Bradford, 2002 ). One of the most important polymorphic P450 genes is CYP2D6 , the debrisoquine hydroxylase ( Bertilsson, 1995; Sistonen et al., 2007 ). Although CYP2D6 makes up only 1.5% of the total cytochrome P450 isoforms, it metabolizes up to one-quarter of all prescribed drugs, including antidepressant, antipsychotic, antiarrhythmic, opiate and antihypertensive drugs ( Murphy et al., 2001 ). The CYP2D6 gene is localized on chromosome 22q13 ( Heim and Meyer, 1990 ). It is highly polymorphic; more than 120 alleles have been described. These alleles result from point mutations, rearrangements, additions, deletions and duplications ( Ma et al., 2002 ; Evans & Relling, 2004). Among the common allelic variants, individuals with two functional alleles (wild-type allele, CYP2D6 ∗1) or with one functional allele are considered extensive metabolizers (EMs) (Gardiner & Zanger et al., 2001; Gardiner and Begg, 2006 ). In addition, individuals with a whole-gene deletion (null allele), e.g. CYP2D6 ∗5, are considered poor metabolizers (PMs).
Individuals with inactivating alleles produced by single nucleotide polymorphisms, alone or in combination, or by deletion or insertion of one or more bases, e.g. CYP2D6 ∗3 and CYP2D6 ∗4, are also considered PMs ( Linder et al., 1997; Sachse et al., 1997; Bradford, 2002; Kirchheiner et al., 2001; Zanger et al., 2004; Guzey and Spigset, 2004 ). Published studies have determined that the frequency of the alternative CYP2D6 phenotypes varies significantly among different racial/ethnic groups. Genetic variation of drug-metabolizing enzymes has also been shown to be one of the major causes of inter-individual variability in drug response. Therefore, the impact of CYP2D6 polymorphism on the clearance of, or response to, a series of cardiovascular drugs has been addressed ( Zhou, 2009 ). Although the frequencies of CYP2D6 mutant alleles have been studied extensively in different populations, limited information is available for Middle Eastern populations. The objective of the current study was to determine the frequencies of the CYP2D6 ∗1, ∗3, ∗4 and ∗5 variants among Egyptian hypertensive cases and their complicated subgroups compared with healthy individuals. Subjects and methods As a pilot study ahead of a future wide-scale genetic analysis of hypertensive cases, a convenience sample of 123 cases (83 males and 40 females) with a mean age ± SD of 50.93 ± 15.43 years was selected for the study. Of these cases, 22.8% had a positive family history of hypertension, 28.5% had positive parental consanguinity, 10.6% were smokers, 71.5% were obese and 39% had type 2 diabetes. For comparison, 429 normal, healthy, unrelated subjects (227 males and 202 females) with a mean age ± SD of 47.65 ± 11.15 years were taken from the same locality as controls.
Cases were furthermore stratified according to the presence of other complications into 4 subgroups: Gp I: hypertension with obesity, 88 (71.5%); Gp II: hypertension with diabetes, 48 (39%); Gp III: hypertension with cardiac complications, 76 (61.8%); and Gp IV: hypertension with renal complications, 22 (17.9%). Obesity was diagnosed on the basis of the definitions established by the World Health Organization (WHO) in 1997 and published in 2000, which define obesity as a body mass index (BMI) of 30 or greater ( WHO, 2012 ). Type 2 diabetes was diagnosed on the basis of fasting blood glucose (after 12–14 h of fasting) higher than 126 mg/dL on at least two occasions, or on symptoms of hyperglycemia plus a random plasma glucose higher than 200 mg/dl (11.1 mmol/l) ( American Diabetes Association, 2009 ). Cases with cardiac and renal complications were diagnosed by the cardiology and renal departments. DNA extraction and amplification Genomic DNA was isolated from the collected peripheral blood samples using the MagNA Pure LC instrument (Roche Molecular Biochemicals, Mannheim, Germany), according to the manufacturer’s standard protocol. Oligonucleotide primers and fluorescence-labeled hybridization probes were designed for amplification and sequence-specific detection of the corresponding polymorphisms (TIB MolBiol; Berlin, Germany). During PCR the amplicon was detected using two specific hybridization probes, one labeled with fluorescein and one with LightCycler Red 640 (LCRed640). The absence of a CYP allele introduces a destabilizing mismatch, which results in a decreased melting temperature. For mutation detection with the LightCycler, a 20 μl reaction was performed. The reaction mixture contained: 1 × LightCycler DNA Master Hybridization Mix (Roche), 2.0 mM MgCl2, 5 pmol of each primer, 4 pmol Sensor probe, 8 pmol Anchor probe, and 200 ng genomic DNA.
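The genotype call in this assay rests on the melting-temperature shift described above: a probe-template mismatch melts at a lower temperature, which appears as a shifted peak when the negative first derivative of fluorescence (−dF/dT) is plotted against temperature. The following is a minimal, purely illustrative sketch of that peak-finding step; the fluorescence curve and the melting temperature Tm are synthetic assumptions, not instrument output from this study.

```python
from math import exp

# Synthetic melting curve: fluorescence drops sigmoidally around the probe's
# melting temperature Tm (illustrative values, not real assay data).
Tm = 62.0
T = [40.0 + 0.1 * i for i in range(400)]            # temperature ramp, 40.0-79.9 deg C
F = [1.0 / (1.0 + exp((t - Tm) / 1.5)) for t in T]  # fluorescence signal

# Melting peak: maximum of the negative first derivative -dF/dT,
# approximated here by central differences over the ramp.
neg_dF = [-(F[i + 1] - F[i - 1]) / (T[i + 1] - T[i - 1])
          for i in range(1, len(T) - 1)]
peak_T = T[1 + max(range(len(neg_dF)), key=neg_dF.__getitem__)]

print(f"Melting peak at {peak_T:.1f} deg C")
# A mismatched (variant-allele) duplex would shift this peak to a lower temperature.
```

In the real instrument, two such peaks (matched vs. mismatched duplex) separated by several degrees are what distinguish wild-type from variant alleles.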
PCR conditions were: 120 s at 95 °C for DNA denaturation, followed by 40 cycles of 0 s at 95 °C (denaturation), 10 s at 55 °C (annealing) and 40 s at 72 °C (extension). Next, a melting-curve analysis was performed by heating at 95 °C for 60 s, followed by cooling at 40 °C for 120 s and gradual heating (0.1 °C/s) up to 80 °C. After the melting-curve analysis a final cooling was performed at 40 °C for 30 s. To analyze the melting curves, the corresponding melting peaks were calculated by plotting the first negative derivative of the fluorescence with respect to the temperature (−dF/dT vs. T) ( Fig. 1 ). Probes used in the real-time PCR (the variable base is highlighted in yellow for wild-type-specific probes and in green for polymorphism-specific probes). Statistical analysis Statistical analysis was done using the SPSS software program, version 17. Allelic and genotypic frequencies were compared and statistically analyzed using Fisher’s exact test and odds ratios (OR) with 95% confidence intervals (95% CI). Conformity of genotype distributions with Hardy–Weinberg (HW) equilibrium was evaluated by Chi-square analysis. For all tests, a p -value < 0.05 was considered statistically significant. Results Comparison of hypertensive cases and controls with regard to the genotypic allelic variants of the CYP2D6 gene showed a significantly higher frequency of the wild genotype 1/1 among hypertensive cases compared to controls (85.4% vs. 74.8%, OR 1.96, 95% CI = 1.1–3.4, p = 0.01), with a lower frequency of mutant genotypes (1.6% vs. 8.6%, OR 0.2, 95% CI = 0.04–0.7, p = 0.008) ( Table 1 ). This phenomenon was manifested in the subgroup with obesity, which had significantly fewer mutant homozygous forms than obese controls (2.3% vs. 9.5%, OR 0.2, 95% CI = 0.05–0.95, p = 0.04) ( Table 2 ), and in cases with cardiac complications (1.3% vs. 8.6%, OR 0.1, 95% CI = 0.02–1.05, p = 0.03) ( Table 3 ). Other subgroups were not analyzed further because their genotyping was not confirmed.
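The odds ratios, 95% confidence intervals, and Fisher's exact p-values reported in these comparisons can be reproduced from 2×2 genotype counts. The sketch below is illustrative only: the counts (roughly 105/123 cases and 321/429 controls with the 1/1 genotype) are reconstructed from the reported percentages rather than taken from the raw data, and the interval is a simple Wald CI rather than whatever SPSS computes internally.

```python
from math import comb, exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% CI for the 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

def fisher_exact_2sided(a, b, c, d):
    """Two-sided Fisher exact p: sum the hypergeometric probabilities of all
    tables with the same margins that are no more likely than the observed one."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def p_table(x):  # P(top-left cell = x) under the hypergeometric model
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = p_table(a)
    lo_x, hi_x = max(0, c1 - r2), min(r1, c1)
    return sum(p_table(x) for x in range(lo_x, hi_x + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Counts reconstructed from the reported percentages (illustrative assumption):
# 1/1 genotype in ~105/123 hypertensive cases vs. ~321/429 controls.
a, b = 105, 123 - 105   # cases: 1/1 vs. other genotypes
c, d = 321, 429 - 321   # controls: 1/1 vs. other genotypes

or_, ci_lo, ci_hi = odds_ratio_ci(a, b, c, d)
p = fisher_exact_2sided(a, b, c, d)
print(f"OR = {or_:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f}), Fisher exact p = {p:.3f}")
```

With these reconstructed counts the sketch recovers the reported OR of about 1.96 with a CI close to 1.1–3.4, which suggests the reconstruction is consistent with the paper's Table 1.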
Regarding allelic frequency, hypertensive cases also had a significantly higher frequency of the wild-type allele 1 (91.9% vs. 82.6%, OR 2.3, 95% CI = 1.4–3.8, p = 0.001), with a significantly lower frequency of the mutant alleles compared to controls (8.1% vs. 16.9%, OR 0.4, 95% CI = 0.3–0.7, p = 0.001) ( Table 1 ). Discussion Based on the polymorphism detection, the present study identified the prevalent wild genotype ( CYP2D6 1/1) and the most common inactive (mutant or poor-metabolizer) genotypes, particularly CYP2D6 4/4, in hypertensive cases and healthy controls. As we did not find any previous studies associating hypertension with genotypic variants of CYP2D6 , this study may be among the first to reach a conclusion in this regard. With regard to the genotypic variants of the CYP2D6 gene, the present study showed a significantly higher frequency of the wild genotype 1/1 among hypertensive cases compared to controls, with a lower frequency of mutant (poor-metabolizer) genotypes. This phenomenon was manifested in the subgroup with obesity, which had significantly fewer mutant homozygous forms than obese controls, and in cases with cardiac complications. Hypertensive cases with diabetes and cases with renal complications were not analyzed further because their genotyping was not confirmed. Comparing cases of hypertension and controls with regard to the allelic variants of the CYP2D6 gene showed that hypertensive cases exhibited a significantly higher frequency of allele 1 than controls. The impact of CYP2D6 polymorphisms on the clinical outcome of antihypertensive drugs has been extensively reported in the literature ( Arnett et al., 2006; Lefebvre et al., 2007; Zateyshchikov et al., 2007; Bijl et al., 2009; Goryachkina et al., 2008; Rau et al., 2009 ).
Depending on which alleles are present in an individual, a wide range of metabolic activities is observed among subjects: poor (PM), intermediate (IM), extensive (EM) and ultrarapid (UM) metabolizers ( Zanger et al., 2004; Kirchheiner et al., 2004; Sistonen et al., 2007; Spina and de Leon, 2007 ). PM alleles (e.g. ∗3, ∗4, ∗4×n, ∗5, ∗6, ∗7, ∗8, ∗11 or ∗45) cause absent enzymatic activity and, possibly, an increased risk of adverse drug reactions even with routine therapy with CYP2D6 substrates ( Marez et al., 1997; Sachse et al., 1998 ). In addition, many studies have shown that some ethnic groups have specific CYP2D6 alleles, such as CYP2D6∗10, found in Asians, and CYP2D6∗17, found in Africans ( Bertilsson, 1995; Ma et al., 2002 ; Evans & Relling, 2004). On the other hand, the frequency of the poor-metabolizer phenotype differs among ethnic groups: less than 1% of Asians, 2–5% of African–Americans, and 6–10% of Caucasians are poor metabolizers of CYP2D6 ( Kalow et al., 1980 ). The frequencies of the CYP2D6 ∗4 allele described among different ethnic populations are: 20.7% in Caucasians ( Sachse et al., 1997 ), 7.8% among African Americans ( Wan et al., 2001 ), 17.84% in Greeks ( Arvanitidis et al., 2007 ), 18.4% in Dutch ( Tamminga et al., 2001 ), 18.2% in Russians ( Gaikovitch et al., 2003 ), 20.7% in Germans ( Sachse et al., 1997 ), 15.3% in Italians ( Scordo et al., 2004 ), 33.4% in the Faroese population ( Halling et al., 2005 ), 13.8% in Spanish ( Menoyo et al., 2006 ), 14% in Croatians ( Bozina et al., 2003 ), 3.5% in the Saudi population ( McLellan et al., 1997 ) and 2% in the UAE population ( Qumsieh et al., 2011 ). Conclusion CYP2D6 polymorphism is positively associated with hypertensive cardiac complications and hypertensive obese cases. Some caution in the interpretation of the data is warranted because of the relatively small number of studied subjects, especially in the subgroup analyses.
In this respect, we recommend a wider-scale analysis recruiting a large sample of hypertensive cases and involving a genome-wide association study to confirm these results and to search for other important interactive genomic markers. Acknowledgement The authors are grateful to Prof. Dr Ahmad Settin, College of Medicine, Mansura University, for helping in the collection and diagnosis of cases. | [
"ARNETT",
"ARVANITIDIS",
"BERTILSSON",
"BIJL",
"BOZINA",
"BRADFORD",
"GAIKOVITCH",
"GARDINER",
"GORYACHKINA",
"GUZEY",
"HALLING",
"HEIM",
"KALOW",
"KIRCHHEINER",
"KIRCHHEINER",
"LEFEBVRE",
"LINDER",
"MA",
"MAREZ",
"MCLELLAN",
"MENOYO",
"MURPHY",
"QUMSIEH",
"RAU",
"SAC... |