Characteristics and Diagnostic Yield of Pediatric Colonoscopy in Taiwan

Wu, Chien-Ting; Chen, Chih-An; Yang, Yao-Jong

DOI: 10.1016/j.pedneo.2015.01.005
Background
Colonoscopy of the lower gastrointestinal tract has diagnostic and therapeutic value. This retrospective study aimed to investigate the indications, complications, and diagnostic yield of diagnostic colonoscopy among Taiwanese children.
Methods
The application of colonoscopy performed on children aged < 18 years between 1998 and 2010 in a referral tertiary center in Southern Taiwan was reviewed. Data on age, gender, indications, complications, and colonoscopic and final diagnoses were collected and analyzed.
Results
One hundred and ninety-two children with 201 colonoscopies and 27 sigmoidoscopies were enrolled. The rate of successful ileocecal approach was 77.5%. The most common indication was lower gastrointestinal bleeding (LGIB; 53.5%), followed by chronic abdominal pain (20.6%), iron deficiency anemia (IDA; 11.8%), and chronic diarrhea (11.4%). There were 144 patients (75%) with a conclusive diagnosis in their first colonoscopy, including nonspecific colitis (23.4%), polyp (20.4%), and inflammatory bowel disease (8.3%). The diagnostic yields of colonoscopy according to the major indications were 77.3% in LGIB, 68.1% in chronic abdominal pain, 66.7% in IDA, and 79.2% in chronic diarrhea. Among the patients with LGIB, juvenile polyp (26.4%) was the most common etiology. There were no major procedure-related complications.
Conclusion
LGIB is the most common indication for pediatric colonoscopy. Pediatric colonoscopy is most effective in diagnosing pediatric LGIB and chronic diarrhea.
1 Introduction

Pediatric fiber optic colonoscopy was introduced in the 1970s. Since then, improvements in fiber optic and video technology, conscious sedation, and physicians' experience have established colonoscopy as a procedure for the diagnosis, evaluation, and management of lower gastrointestinal tract disorders in children. Colonoscopy is technically more challenging than esophagogastroduodenoscopy, especially in pediatric patients, owing to their poor compliance and cooperation. 1–4 The development of pediatric colonoscopy in Taiwan began in the late 1970s, and both diagnostic and therapeutic colonoscopies are now widely performed by most pediatric gastroenterologists. 5 Despite the generally increased use of colonoscopy in pediatric patients, careful selection of the indications for colonoscopy can still achieve higher diagnostic yields and prevent complications. The most common indications are unexplained iron deficiency anemia (IDA), lower gastrointestinal bleeding (LGIB), and diarrhea. However, the diagnostic yield varies with the indication; unexplained diarrhea and blood in the stools have the highest diagnostic yield (91–97%). 6 Therapeutic colonoscopy is most frequently applied in children for polypectomy and for bleeding, 7,8 with successful resection rates exceeding 96% for polypectomy. 9,10 With the rapidly increasing number of colorectal polyps, polypectomy has also become the most common endoscopic procedure in adults. 2,11,12 Although malignant change of colorectal polyps in children is rare, symptomatic colorectal polyps are not uncommon, especially in those with LGIB. However, available data regarding the prevalence, clinical features, and significance of colonoscopy in the evaluation of colorectal polyps in children remain limited.
This retrospective study aimed to investigate the indications, complications, and diagnostic yield of pediatric colonoscopy, and the prevalence and clinical characteristics of colorectal polyps in children. 13–15

2 Methods

2.1 Participants enrolled, data collected, and definitions

The application of diagnostic colonoscopy in consecutive children aged < 18 years between 1998 and 2010 at National Cheng Kung University Hospital, a tertiary referral center in Southern Taiwan, was reviewed. Data were collected from the hospital's electronic database system. Demographic data, indications for colonoscopy, final diagnoses, and complications were recorded and analyzed. A positive diagnostic yield was defined as colonoscopic and/or histologic findings leading to a conclusive diagnosis that corresponded with the symptoms. Nonspecific colitis was defined as erosive or erythematous mucosal lesions on colonoscopy combined with lymphocytic infiltration on histologic examination. Major complications were defined as procedure-related complications that led to prolonged admission, morbidity, or mortality. The hospital's Ethical Committee approved the study protocol.

2.2 Colonoscopy preparation and procedure

Each patient had a bowel preparation of a low-residue diet for 2 days prior to the examination, followed by either oral castor oil (dosage: > 2 years, 5–15 mL; < 2 years, 1–5 mL) plus a bisacodyl suppository (dosage: > 10 years, 10 mg; < 10 years, 5 mg) on the night before the examination, or oral sodium phosphate (dosage: > 12 years, 20–45 mL; 10–11 years, 10–20 mL; 5–9 years, 5–10 mL) with liquid on the night and the day before the examination. Bowel-cleansing enemas were also performed on the day of the examination. Intravenous or intramuscular meperidine (dosage: 2 mg/kg, maximum 50 mg) and hyoscine butylbromide (0.5 mg/kg, maximum 20 mg) were given 30 minutes prior to the examination.
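The premedication scheme above is a per-kilogram dose with an absolute ceiling. A minimal sketch of that arithmetic (the `capped_dose` helper is illustrative, not part of the study):

```python
def capped_dose(weight_kg, mg_per_kg, max_mg):
    """Weight-based dose with an absolute ceiling, as in the premedication scheme above."""
    return min(weight_kg * mg_per_kg, max_mg)

# Meperidine: 2 mg/kg, maximum 50 mg
print(capped_dose(20, 2.0, 50))   # 40.0 mg for a 20 kg child
print(capped_dose(30, 2.0, 50))   # weight-based dose (60 mg) capped at 50 mg
# Hyoscine butylbromide: 0.5 mg/kg, maximum 20 mg
print(capped_dose(30, 0.5, 20))   # 15.0 mg
```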
Patients who were not cooperative or tolerant during the examination received conscious sedation with intravenous midazolam (single dose 0.2 mg/kg) and propofol (single dose 2.5 mg/kg for induction of anesthesia, with a bolus dose of 5–10 mg if needed). 16 Under conscious sedation, the patients were given oxygen supplementation via nasal cannula and monitored by pulse oximetry. 17,18 All of the colonoscopy procedures were performed by a single pediatric gastrointestinal endoscopist using an Olympus PCF-240L colonoscope (Olympus Corporation, Tokyo, Japan). In neonates and young infants, an Olympus GIF-Q230 gastroscope (Olympus Corporation, Tokyo, Japan) was used instead of a colonoscope. The examinations were performed in the left lateral position; the patient's position was changed and manual compression of the abdomen was applied when there was difficulty advancing the colonoscope. If a colorectal polyp was found, it was removed with forceps or by polypectomy, and the specimen was sent for histologic study.

3 Results

3.1 Demographic data and indications for examination

One hundred and ninety-two children with 201 colonoscopies and 27 rectosigmoidoscopies were enrolled. There were 81 girls and 111 boys, with a mean age of 7.7 ± 5.4 years (range, 15 days to 18 years). Two or more sequential endoscopies were performed in 22 (11.5%) patients ( Table 1 ), and 65% of the procedures involved conscious sedation. There were no major procedure-related complications in any of the patients. In the first colonoscopies of 169 patients, the cecum was successfully reached in 131 (77.5%) cases and the terminal ileum in 92 (54.4%) ( Table 1 ). The most common indication for colonoscopy or sigmoidoscopy was LGIB (53.5%), followed by chronic abdominal pain (20.6%), IDA (11.8%), and chronic diarrhea (11.4%) ( Table 2 ).
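The success rates and diagnostic yields reported in this study are simple proportions. A quick sketch that reproduces the quoted percentages (per-indication positive counts are inferred from the percentages and denominators given in the text):

```python
def yield_pct(positive, total):
    """Diagnostic yield (or success rate) as a percentage, rounded to one decimal."""
    return round(100 * positive / total, 1)

# Ileocecal approach on first colonoscopy (n = 169)
assert yield_pct(131, 169) == 77.5   # cecum reached
assert yield_pct(92, 169) == 54.4    # terminal ileum reached

# Overall and per-indication diagnostic yields; positive counts are
# inferred from the percentages and denominators quoted in the text
assert yield_pct(144, 192) == 75.0   # conclusive diagnosis at first examination
assert yield_pct(85, 110) == 77.3    # lower gastrointestinal bleeding
assert yield_pct(32, 47) == 68.1     # chronic abdominal pain
assert yield_pct(16, 24) == 66.7     # iron deficiency anemia
assert yield_pct(19, 24) == 79.2     # chronic diarrhea
```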
3.2 Conclusive diagnosis and diagnostic yield

Conclusive diagnosis relied on endoscopic findings and/or histology; 144 patients (75%) had a conclusive diagnosis on their first examination, while 48 had negative findings. The most common conclusive diagnosis was nonspecific colitis (23.4%), followed by colorectal polyp (20.4%), lymphoid hyperplasia (6.3%), Crohn's disease (4.7%), ulcerative colitis (3.6%), and cow's milk protein allergy (3.6%). The diagnostic yield was 77.3% in the 110 patients with LGIB, in whom colorectal polyp (26.4%) was the most common etiology, followed by nonspecific colitis (22.7%) and inflammatory bowel disease (IBD; 10.9%) ( Table 2 ). In the 47 patients with chronic abdominal pain, the diagnostic yield was 68.1%, with nonspecific colitis (25.5%) the most common etiology. In the 24 IDA patients, the diagnostic yield was 66.7%, and most cases were IBD (25%) or cow's milk allergy (25%). Among the 24 patients with chronic diarrhea, the diagnostic yield was 79.2%, with IBD (29.2%) the most common etiology.

3.3 Characteristics of colorectal polyps

The clinical and colonoscopic characteristics of the 39 patients with colorectal polyps revealed a male predominance (2.2:1) and a mean age of 6.1 years ( Table 3 ). Thirty-six (92.3%) of the 39 polyps were solitary; 36 were juvenile polyps and three were tubular adenomas. The most common location of the polyp was the colon above the rectosigmoid portion (53.8%), and 74.3% of patients presented with LGIB. All polyps were removed with forceps or by polypectomy without significant bleeding or perforation.

4 Discussion

Except for data from the Pediatric Endoscopy Database System-Clinical Outcomes Research Initiative, the current report is the largest series of pediatric colonoscopy in the English literature. Colonoscopy is performed less frequently in children than in adults because of difficulties in preparation and sedation, which are usually needed in children. 6 In the report by Hassall et al, 6 all 113 patients received colonoscopy under either general anesthesia or conscious sedation. By contrast, in the present report only 65% of the procedures involved conscious sedation. This may be one reason for the slightly lower success rate of reaching the cecum or terminal ileum (77.5%) in the present study compared with series using full sedation (84–97.6%). 2 Another reason may be that, once the pathologic lesions identified on other imaging had been reached, the procedure was terminated in some patients before the endoscope reached the cecum. Although a terminal ileum approach is not necessary in every colonoscopy, a previous study revealed that up to 85% of patients with Crohn's disease had a terminal ileum lesion confirmed by colonoscopy and ileal biopsy. 2,3,19 Therefore, if IBD is suspected or there are no lesions in the colon, ileal intubation is crucial for a prompt diagnosis. As in other reports, there were no major procedure-related complications in our patients. 20 Minor complications, such as cough with transient desaturation, intravenous fluid extravasation, and occasional slight oozing after polypectomy, did occur. 3,7

The most common indication for colonoscopy in this study was LGIB; chronic abdominal pain, IDA, and chronic diarrhea were also common reasons for pediatric colonoscopy. Moreover, this study highlights a new indication: evaluating the lead points of recurrent intussusception suspected from imaging studies. All four lead points (2 lymphomas and 2 polyps) were managed nonsurgically, with good outcomes. 3–5,11 According to the endoscopic features and histology, a conclusive diagnosis was made in 75% of the patients. As in other reports from Asia, the two most common etiologies were nonspecific colitis (23.4%) and colorectal polyp (20.4%). However, the rate of IBD in this study (8.3%) was lower than that reported in The Netherlands. 3,7,21 Furthermore, 6.7% of the 192 children with indications for colonoscopy had eosinophilic colitis (including cow's milk protein allergy). 14 The diagnostic yield of colonoscopy according to the major indications was 77.3% in LGIB, 68.1% in chronic abdominal pain, 66.7% in IDA, and 79.2% in chronic diarrhea. These results imply that LGIB (including colorectal polyps, nonspecific colitis, and IBD) and chronic diarrhea are alarming events in children that warrant further lower gastrointestinal investigation. The results also show more cases of colorectal polyps in children undergoing colonoscopy than in a report from the United Kingdom (20.4% vs. 4.0%). Among patients with chronic diarrhea, IBD was the leading etiology. El Mouzan et al 12 reported that pediatric patients with bloody diarrhea had a higher colonoscopic yield (91%) than those with only chronic diarrhea (43%). 7 In the present study, only two patients with visible blood in their stools were categorized into the chronic diarrhea group; one was diagnosed pathologically as pseudomembranous colitis and the other as eosinophilic colitis. 7 In contrast to adults with unexplained IDA, in whom colonoscopy is highly recommended, the role of diagnostic colonoscopy in children with IDA is uncertain. Esophagogastroduodenoscopy is generally applied in pediatric patients with unexplained IDA because it is associated with Helicobacter pylori infection. 22 However, in this study, H. pylori infection had been excluded in all of the patients with IDA prior to colonoscopy. 23 The diagnostic yield of colonoscopy among these patients was 66.7%. This indicates that colonoscopy is an appropriate examination for evaluating children with unexplained IDA but without H. pylori infection, especially for detecting IBD and cow's milk protein allergy.

As in the current report, the reported prevalence of colon polyps in pediatric diagnostic colonoscopy in Asian populations (20.3–20.5%) is higher than that in Western populations (4.0–8.6%). Most reports showed that 80–90% of polyps were located at the rectosigmoid colon. 5,12,15,24 However, in this study, only 43.6% of the polyps were at the rectosigmoid colon, similar to the report by Gupta et al. 3,5 Although most pediatric colon polyps are juvenile polyps, some are potentially premalignant. 24 This highlights the importance of colonoscopy rather than sigmoidoscopy for the diagnosis and treatment of pediatric colon polyps. 25 Colonic pathology is not uncommon in pediatric patients. Children presenting with symptoms or signs of lower gastrointestinal disorders should undergo colonoscopy to obtain a definite diagnosis and prompt treatment. Pediatric colonoscopy is a safe and effective procedure to detect pathologic lesions of the lower gastrointestinal tract.

Conflicts of interest

The authors have no conflicts of interest relevant to this article.
References (first-author surnames as extracted)

Squires; Hassall; Tam; Kalaoui; Park; Gilger; El Mouzan; El Mouzan; Barnert; Ng; Mudawi; Latt; Hyer; de Ridder; Thakkar; Hunter; Koh; Elitsur; Kawamitsu; Batres; Thapa; Peytremann-Bridevaux; Huang; Gupta; Lee
Upregulation of liver VLDL receptor and FAT/CD36 expression in LDLR−/− apoB100/100 mice fed trans-10,cis-12 conjugated linoleic acid

Degrace, Pascal; Moindrot, Bastien; Mohamed, Ismaël; Gresti, Joseph; Du, Zhen-Yu; Chardigny, Jean-Michel; Sébédio, Jean-Louis; Clouet, Pierre

DOI: 10.1194/jlr.M600140-JLR200
This study explores the mechanisms responsible for the fatty liver setup in mice fed trans-10,cis-12 conjugated linoleic acid (t10c12 CLA), hypothesizing that an induction of low density lipoprotein receptor (LDLR) expression is associated with lipid accumulation. To this end, the effects of t10c12 CLA treatment on lipid parameters, serum lipoproteins, and the expression of liver lipid receptors were measured in LDLR−/− apoB100/100 mice, a model of human familial hypercholesterolemia that itself lacks LDLR. Mice were fed t10c12 CLA over 2 or 4 weeks. We first observed that the treatment induced liver steatosis even in the absence of LDLR. Mice treated for 2 weeks exhibited hypertriglyceridemia with high levels of VLDL and HDL, whereas a 4 week treatment conversely induced a reduction of serum triglycerides (TGs), essentially through a decrease in VLDL levels. In the absence of LDLR, the mRNA levels of other proteins usually not expressed in the liver, such as VLDL receptor, lipoprotein lipase, and fatty acid translocase, were upregulated, suggesting their involvement in the steatosis setup and lipoprotein clearance. The data also suggest that the TG-lowering effect induced by t10c12 CLA treatment was attributable both to the reduction of circulating free fatty acids in response to the severe lipoatrophy and to the high capacity of the liver to clear plasma lipids.
Conjugated linoleic acids (CLAs) refer to a group of dienoic derivatives of linoleic acid. In most feeding studies, CLAs are mainly represented by cis-9,trans-11-C18:2, the main natural isomer produced in ruminants, and by trans-10,cis-12-C18:2 (t10c12 CLA), essentially originating from vegetable oil processing. These isomers of linoleic acid have been shown to exhibit a variety of unique properties, such as anticancer ( 1 ), antiatherogenic ( 2 ), and immune response-enhancing ( 3 ) effects in animal models. CLAs have also been reported to reduce total body fat content in mice, rats, and chickens ( 4–6 ). The C57Bl6 mouse is a model largely used to study the biological effects of CLA, to which this strain is very sensitive, in particular t10c12 CLA, which was recently identified as the isomer affecting body lipid metabolism ( 7 , 8 ). After a 4 week treatment with t10c12 CLA, C57Bl6 mice exhibit severe lipoatrophy, steatotic liver, hyperinsulinemia, and plasma triglyceride (TG) alterations ( 7 , 9 ). Thus, t10c12 CLA-fed mice constitute an interesting model for studying steatosis onset in relation to lipid metabolism dysfunction, nonalcoholic fatty liver disease now being recognized as one of the common features of the metabolic syndrome, together with visceral fat obesity, insulin resistance, dyslipidemia, and hypertension ( 10 ). In a previous work, we suggested that a high uptake of plasma lipids by the liver would explain part of the TG accumulation in this organ after t10c12 CLA feeding ( 7 ). As the overexpression of hepatic low density lipoprotein receptor (LDLR) was demonstrated to increase the clearance of apolipoprotein B-100 (apoB-100)-containing lipoproteins in mice ( 11 ), and as CLA treatment was found to induce the expression of LDLR ( 7 ), liver steatosis onset could depend, at least in part, on this lipoprotein receptor.
Therefore, in this study, LDLR −/− apoB 100/100 mice, which represent a good model of human familial hypercholesterolemia ( 12 ), were fed t10c12 CLA to induce lipoatrophy and to study the consequences on liver and plasma lipid parameters. We focused in particular on the hepatic effects of the high lipid flux originating from adipose tissue attributable to t10c12 CLA action, and of the absence of LDLR in this dyslipidemic model, with regard to the expression of other lipid transporters. To address this issue, we also determined the gene expression profile of untreated LDLR −/− apoB 100/100 mice compared with normal wild-type animals. Our results show, first, that LDLR deficiency was unable to prevent the steatosis induced by t10c12 CLA and, second, that other proteins substitute for LDLR in lipoprotein clearance to such an extent that serum TG levels were significantly reduced in these mice, which usually exhibit high levels of circulating apoB-100-rich lipoproteins.

MATERIALS AND METHODS

Animals and treatments

Official French regulations (No. 87848) for the use and care of laboratory animals were followed throughout. Control (B6129SF2) and transgenic (B6;129S-Apob tm2Sgy Ldlr tm1Her) mice originated from the Jackson Laboratory. Transgenic mice are deficient in LDLR and express only apoB-100 (LDLR −/− apoB 100/100 mice). After 1 week of adaptation to the control diet (AO4; Unité d'Appui à la Recherche, Epinay-sur-Orge, France), 7 week old male mice were housed in individual plastic cages. LDLR −/− apoB 100/100 mice were randomly allocated to the control or the CLA diet (n = 5 for each), consisting of a basal diet, whose detailed composition has been described ( 13 ), enriched with 1% C18:1 n-9 (oleic acid) or t10c12 CLA, both esterified as TG. CLA-fed LDLR −/− apoB 100/100 mice and their corresponding controls were euthanized after 2 or 4 weeks. Wild-type B6129SF2 mice (n = 5), fed only the control diet, were used after 4 weeks.
Mice were food-deprived for 4 h before being anesthetized with ketamine/xylazine (7.5 mg/100 g body weight) and euthanized. For lipid analysis, liver, heart, gastrocnemius, and blood (collected from the vena cava) were stored at −80°C. Fresh liver samples were used for immediate FA oxidation measurements on whole liver homogenates and isolated mitochondria.

Lipid analysis

Total liver, muscle, and heart lipids were extracted according to Folch, Lees, and Sloane Stanley ( 14 ). For liver, total lipids were determined by gravimetry and lipid classes were quantified by the TLC-flame ionization detection method ( 15 ). Phospholipids, cholesteryl esters, and TG were separated by TLC on silica plates (Merck, Darmstadt, Germany). Their constitutive FAs were methylated according to the procedure of Christie, Sebedio, and Juaneda ( 16 ) and analyzed by gas-liquid chromatography as described previously ( 17 ). For skeletal muscle and heart, aliquots of total lipid extracts were resuspended in a solution of Triton X-100 as described previously ( 18 ), and TG contents were then measured using a commercial kit from Roche Diagnostics Corp. (Indianapolis, IN). Commercial kits were also used to determine serum TG and glycerol concentrations (Sigma Diagnostics, Saint-Quentin-Fallavier, France) and serum free FA (Roche Diagnostics Corp.).

Serum lipoprotein analysis

Serum lipoprotein analysis was performed by fast-performance liquid chromatography, and total cholesterol was quantified with an inline detection system as described previously ( 19 ).

Liver lipolytic activity

The procedure used was adapted from that of Iverius and Ostlund-Lindqvist ( 20 ). Lipolytic activity was determined on tissue homogenates as the amount of [ 3 H]oleic acid released from radiolabeled triolein, as described previously ( 7 ).
Carnitine palmitoyltransferase I activity and palmitate oxidation rate

Measurements of carnitine palmitoyltransferase I (CPT I) activity and palmitate oxidation rates were performed as described previously ( 13 ). FA oxidation was measured with whole liver homogenates using two media, the first allowing both mitochondrial and peroxisomal activities to occur and the second allowing peroxisomal activity only ( 21 ), and with liver mitochondrial fractions. Protein concentrations of mitochondrial fractions were measured using the bicinchoninic acid procedure (Sigma) ( 22 ).

Western blot analysis of the very low density lipoprotein receptor

Approximately 100 mg of frozen liver was quickly homogenized with a mini-beadbeater (BioSpec Products, Inc., Bartlesville, OK) in 10 volumes of a 20 mM Tris buffer containing sucrose (0.2 M), MgCl 2 (2 mM), pepstatin A (1.46 μM), leupeptin (10 μM), aprotinin (0.035 TIU/l), and E64 (1.4 μM). After centrifugation of homogenates at 12,000 g for 10 min, supernatants were half-diluted in Laemmli buffer (Bio-Rad S.A., Ivry-sur-Seine, France) ( 23 ) without boiling, and aliquots were size-fractionated on a 7% SDS-polyacrylamide gel using a Mini-Protean 3 electrophoresis cell (Bio-Rad) at 200 V for ∼70 min at room temperature. After electrophoresis, proteins were transferred to a nitrocellulose membrane (Hybond-ECL; Amersham Biosciences, Saclay, France) at 140 V for 1.5 h. For immunodetection, the blots were incubated overnight in TBST [10 mM Tris, 0.15 M NaCl, and 0.05% (v/v) Tween-20] plus 5% (w/v) BSA, then for 1 h in TBST, 2% BSA, plus 0.2 μg/ml of a goat anti-mouse VLDL receptor antibody (R&D Systems, Abingdon, UK), and finally for 1 h in TBST, 2% BSA, plus a 1:10,000 dilution of rabbit anti-goat IgG peroxidase conjugate antibody (Sigma). The blots were developed with chemiluminescent reagents (ECL; Amersham Biosciences) and subjected to autoradiography.
The membrane was stripped using Restore Western Blot Stripping Buffer (Pierce, Rockford, IL) and reprobed under the same conditions with a mouse anti-β-actin antibody and an anti-mouse IgG peroxidase conjugate antibody (Sigma) for standardization. Spot intensities were determined by densitometric analysis with a gel documentation system (Gel Doc 2000) equipped with Quantity One software (Bio-Rad). Protein concentrations of supernatants were measured by the bicinchoninic acid procedure after trichloroacetic acid precipitation to eliminate incompatible substances.

Gene expression

Total RNA was extracted from liver by the Tri-Reagent method adapted from the procedure of Chomczynski and Sacchi ( 24 ). Tri-Reagent was provided by Euromedex (Souffelweyersheim, France). Total RNA was reverse-transcribed using the Iscript cDNA kit (Bio-Rad). Real-time PCR was performed as described previously ( 19 ). Primer pairs were designed using Primers! software and were synthesized by MWG-Biotech AG (Ebersberg, Germany). The sequences of the forward and reverse primers used are as follows: 5′-aattagtagaaccgggccac-3′ and 5′-ccaactcccaggtacaatca-3′, respectively, for fatty acid translocase (FAT/CD36); 5′-ctaaggacccctgaagacaca-3′ and 5′-tctcatacattcccgttaccgt-3′ for LPL; 5′-gtgaatgtggggttagtggac-3′ and 5′-acttcgcagattcctccagc-3′ for HL; 5′-gaccgactggcgaacaaat-3′ and 5′-ctgggtgttggtcctctgta-3′ for low density lipoprotein receptor-related protein (LRP); 5′-agcaccacagatcaatgacc-3′ and 5′-ctctcgtccattttcttcgaga-3′ for very low density lipoprotein receptor (VLDLR); 5′-tcccttcgtgcattttctca-3′ and 5′-gttcatcccaacaaacagg-3′ for scavenger receptor class B type I (SR-BI); and 5′-aatcgtgcgtgacatcaaag-3′ and 5′-gaaaagagcctcagggcat-3′ for β-actin.

Statistics

Differences in mean values between groups were tested by one-way ANOVA. Significant differences between means were tested by Student's t-test for an independent variable.
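Real-time PCR measurements of this kind are commonly analyzed by relative quantification against the β-actin reference gene. Since the study's exact analysis follows its ref. ( 19 ), the following is only an illustrative sketch of the standard 2^−ΔΔCt (Livak) calculation, with hypothetical Ct values:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA level by the 2**-ddCt (Livak) method.

    ct_target, ct_ref: threshold cycles of the gene of interest and of the
    reference gene (here beta-actin) in the treated sample;
    ct_target_ctrl, ct_ref_ctrl: the same values in the control sample.
    """
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** -ddct

# Hypothetical Ct values (not from the study): the target crosses threshold
# 3 cycles earlier relative to beta-actin in the treated sample
print(relative_expression(22.0, 18.0, 25.0, 18.0))  # 8.0 -> 8-fold upregulation
```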
When variances were unequal, means were tested by the Kruskal-Wallis nonparametric test.

RESULTS

Effects of t10c12 CLA feeding on body, liver, and serum parameters in LDLR −/− apoB 100/100 mice

Table 1 shows that dietary t10c12 CLA did not affect the body weights of LDLR −/− apoB 100/100 mice for either duration of treatment. The drastic reductions of epididymal adipose tissue weight and the concomitant liver steatosis usually found in wild-type mice fed t10c12 CLA were also observed in the transgenic model. After 2 weeks of CLA feeding, adipose tissue and liver relative weights were already markedly altered (−62% and +56%, respectively), and these effects were even more pronounced after 4 weeks (−82% and +97%, respectively) ( Table 1 ). Liver TG content increased with treatment duration, and the TG enrichment found in mice fed t10c12 CLA for 4 weeks was even greater than that measured under the same experimental conditions in wild-type mice [13-fold vs. 7.5-fold ( 7 ), respectively]. It is worth noting that the t10c12 CLA treatment also increased liver cholesteryl ester content but did not affect free cholesterol content. To determine whether lipid accumulation occurred in tissues other than liver, the TG contents of heart and skeletal muscle were measured. Unlike the liver, heart and muscle did not accumulate TG in response to CLA feeding; muscle TG levels were actually reduced 6-fold. Because serum lipid parameters of LDLR −/− apoB 100/100 mice change with age, data from control and CLA-fed mice were compared for the same treatment duration. Indeed, Table 1 shows that the effect of t10c12 CLA on serum TG levels depended on the duration of treatment: TGs were increased after 2 weeks and decreased after 4 weeks, relative to the control series. Levels of total cholesterol, free FA, and glycerol in serum were unaltered after 2 weeks of treatment but were decreased significantly when t10c12 CLA was administered for 4 weeks.
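The group comparisons reported here follow the statistical scheme given in Methods (one-way ANOVA, Student's t-test, and a Kruskal-Wallis fallback when variances are unequal). A sketch with hypothetical data, using Levene's test to judge variance equality (the variance test actually used in the study is not stated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical liver TG values (mg/g), n = 5 per group as in the study design;
# the numbers are illustrative, not the measured data
control = rng.normal(15, 2, 5)
cla_2wk = rng.normal(60, 8, 5)
cla_4wk = rng.normal(115, 15, 5)

# Differences in mean values between groups: one-way ANOVA
f_stat, p_anova = stats.f_oneway(control, cla_2wk, cla_4wk)

# Pairwise comparison: Student's t-test for independent samples
t_stat, p_t = stats.ttest_ind(control, cla_4wk)

# Fallback when variances are unequal: Kruskal-Wallis nonparametric test
# (Levene's test is used here to judge variance equality; the paper does
# not state which variance test was applied)
_, p_var = stats.levene(control, cla_4wk)
if p_var < 0.05:
    _, p_final = stats.kruskal(control, cla_4wk)
else:
    p_final = p_t
```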
Lipoprotein profile analysis ( Fig. 1 ) indicates that VLDL-cholesterol and HDL-cholesterol levels were increased in the serum of mice fed t10c12 CLA for 2 weeks (+165% and +22%, respectively), whereas LDL-cholesterol was decreased slightly, but not significantly. When mice were fed for 4 weeks, cholesterol levels were decreased in all fractions, particularly in apoB-100 lipoproteins (VLDL-cholesterol, −78%; LDL-cholesterol, −41%; HDL-cholesterol, −26%, relative to controls).

Effects of t10c12 CLA feeding on liver FA oxidation in LDLR −/− apoB 100/100 mice

Administration of t10c12 CLA increased both peroxisomal and mitochondrial palmitate oxidation rates measured in liver homogenates after both 2 and 4 weeks of treatment ( Table 2 ). Similarly, carnitine-dependent palmitate oxidation rates measured in isolated mitochondria were also increased, as were CPT I activities in the t10c12 CLA series ( Table 2 ).

mRNA expression of proteins involved in liver lipid uptake in LDLR −/− apoB 100/100 mice compared with wild-type mice

The impact of the absence of LDLR on the mRNA expression of several proteins involved in lipid uptake (HL, LPL, SR-BI, LRP, VLDLR, FAT/CD36) was estimated in LDLR −/− apoB 100/100 mice in comparison with wild-type mice ( Fig. 2 ). The data indicate that the liver of control transgenic mice overexpressed LPL, FAT/CD36, and VLDLR, which are usually poorly expressed in this organ. By contrast, mRNA levels of HL and of two other potential candidates for lipoprotein transport, SR-BI and LRP, did not differ between the two genotypes.

Effects of t10c12 CLA feeding on the mRNA expression of proteins involved in liver lipid uptake in LDLR −/− apoB 100/100 mice

The mechanisms of the steatosis setup in LDLR −/− apoB 100/100 mice fed t10c12 CLA, despite the absence of LDLR, were investigated by estimating the mRNA levels of enzymes and receptors involved in plasma FA or lipoprotein uptake (SR-BI, LRP, HL, VLDLR, and FAT/CD36).
Dietary t10c12 CLA decreased mRNA expression of liver HL at both 2 and 4 weeks and increased that of LPL at 4 weeks ( Fig. 3 ). Among the lipoprotein receptors studied, VLDLR was upregulated in both CLA series, whereas mRNA levels of LRP and SR-BI were significantly upregulated and downregulated, respectively, but only after 4 weeks of CLA treatment. Feeding t10c12 CLA also strongly increased mRNA levels of FAT/CD36, which is usually poorly expressed in the liver, after 2 or 4 weeks of treatment.

Liver VLDLR protein levels and lipolytic activity in LDLR −/− apoB 100/100 mice

The apparent inductions of VLDLR and LPL mRNA levels prompted us to examine their protein levels and catalytic activity, respectively. Figure 4 indicates that VLDLR protein levels were induced concomitantly with mRNA levels. As the regulation of LPL may also occur at the posttranslational level ( 25 ), we measured the actual lipolytic activity of liver extracts. The results presented in Fig. 5 indicate that the capacity of the liver to hydrolyze TG was greater in LDLR −/− apoB 100/100 mice fed t10c12 CLA for 4 weeks than in control transgenic mice.

DISCUSSION

In wild-type mice, t10c12 CLA feeding induced severe lipoatrophy with concomitant liver steatosis, and we previously showed that mRNA levels of LDLR were induced, suggesting an increase in lipoprotein uptake by hepatocytes ( 7 ). In this study, the finding of liver TG accumulation even in the absence of LDLR raises the question of how hepatocytes manage the high flux of lipids that are no longer stored in the adipose tissues of t10c12 CLA-fed mice. Because the liver steatosis appears to be related to lipoatrophy, hepatocyte TG accumulation may result not from a direct action of t10c12 CLA on liver cells but rather from mechanisms altering the adipose tissue.
Liver steatosis may also originate from a reduction of lipoprotein secretion rates, from an inhibition of FA β-oxidation, from high rates of de novo lipogenesis, and/or from high lipid uptake. Feeding mice with t10c12 CLA did not reduce liver lipoprotein secretion ( 7 ) or FA oxidation and CPT I activities (this study). De novo lipogenesis might be stimulated by t10c12 CLA feeding, owing to the greater [saturated + monounsaturated]/[polyunsaturated] ratios found in liver lipids of LDLR −/− apoB 100/100 -treated mice than in those of control mice (i.e., 22.3 vs. 5.62, respectively; data not shown). Indeed, in a recent study, it was hypothesized that the conversion of excess glucose to FA and the storage as TG in the liver, rather than in adipose tissue, could be the mechanism leading to liver fat accumulation ( 9 ). Therefore, the data presented here also support the conclusion that lipids diverted from adipose tissues, and available for other organs such as the liver, might contribute to a large extent to hepatic lipid accumulation. Consistent with this hypothesis, we did not observe any lipid accumulation in the other two lipid-utilizing tissues, heart and muscle, suggesting that the liver could be the main acceptor of plasma lipids in CLA-fed mice. Gene expression analysis of control LDLR −/− apoB 100/100 mice compared with wild-type mice indicated that hepatocytes of LDLR-deficient mice overexpressed other genes in response to high levels of TG-rich lipoproteins. Interestingly, LPL, FAT/CD36, and VLDLR, which are usually poorly expressed in liver ( 26–28 ), were induced. Under normal conditions, VLDLR is known to participate in the clearance of VLDL mediated by peripheral organs actively using fat, such as heart or adipose tissue, but not the liver ( 29 ). Nevertheless, it has been demonstrated that the induction of hepatic expression of VLDLR using adenoviral vectors improved lipoprotein clearance ( 30 , 31 ). 
In this way, in the absence of LDLR, the upregulation of VLDLR mRNA and protein levels observed in the liver of LDLR −/− apoB 100/100 mice compared with wild-type mice strongly suggests that VLDLR is an effective surrogate receptor for the clearance of lipoproteins. A possible mechanism to explain the effect of VLDLR on lipoproteins has been proposed ( 32 ). VLDLR would bind lipoproteins and maintain them in close interaction with LPL, thereby facilitating the hydrolysis rather than the internalization of the particles. In our study, the concomitant upregulation of VLDLR and LPL supports this concept, and the increase in FAT/CD36 mRNA levels also supports the possible involvement of this transporter in the uptake of the FAs released. Some other studies also suggest close relationships between FAT/CD36 and LPL ( 33 ) and similarly between VLDLR and LPL ( 34 ). Nevertheless, as far as we are aware, this is the first study to report a concomitant induction of the expression of VLDLR, LPL and FAT/CD36 in liver, which suggests a functional cooperation of these proteins to cope with the lipoprotein abundance. LPL and FAT/CD36 are peroxisome proliferator-activated receptor γ-responsive genes ( 35 ), and recent studies have established a role for hepatic peroxisome proliferator-activated receptor γ in the development and maintenance of liver steatosis ( 36 , 37 ). Therefore, the induction of FAT/CD36 and LPL could be related to the greater delivery of FA to liver cells. This seems to apply particularly to FAT/CD36, whose mRNA levels increased concomitantly with liver TG infiltration ( Fig. 6 ). Interestingly, the comparison of gene expression between wild-type and LDLR −/− apoB 100/100 mice indicates that LRP mRNA levels were comparable, which does not ascribe any apparent role for LRP in the metabolism of apoB-100-containing lipoproteins, even in the absence of LDLR, as was reported previously ( 38 ).
However, mRNA levels of LRP increased after 4 weeks of t10c12 CLA treatment, suggesting that LRP stimulation could be secondary to the establishment of CLA-induced hyperinsulinemia ( 9 ), as has been demonstrated in adipocytes ( 39 ), and this receptor likely also participates in the clearance of lipoproteins. Surprisingly, mRNA levels of HL, which could also provide an alternative clearance pathway for apoB-100-containing lipoproteins independent of LDLR ( 40 ), were not induced in LDLR −/− apoB 100/100 mice and even decreased after t10c12 CLA feeding. This supports the possibility that HL would be inversely regulated by the cholesterol supply ( 41 ). The same hypothesis could be invoked to explain the SR-BI downregulation, because convergent arguments support the view that HL and SR-BI would be coexpressed to exert coordinated functions in cell cholesterol homeostasis ( 42 ). According to our data, the overexpression of VLDLR, LPL, and FAT/CD36 observed in LDLR −/− apoB 100/100 mice after 4 weeks of CLA feeding accelerated liver lipoprotein clearance to such an extent that serum TG levels, which were increased after 2 weeks of CLA feeding, fell below control values. It is worth noting that this TG lowering coincides with the nearly complete absence of adipose tissue. Under these conditions, the release of free FA from adipose tissue was necessarily decreased, reducing lipid flux to the liver and the subsequent VLDL secretion rates compared with the 2-week series. On the whole, we suggest that the t10c12 CLA-dependent TG-lowering effect was attributable to both the reduction of a source of FA for liver lipoprotein synthesis and the high capacity of the liver to clear plasma lipids. It is now well established that liver LDLR activity constitutes a key factor for the regulation of apoB-containing lipoproteins ( 43 , 44 ).
Therefore, this study provides evidence that, in the absence of LDLR, efficient alternative regulatory mechanisms also operate (e.g., see control LDLR −/− apoB 100/100 vs. wild-type mice), with appropriate upregulation when fat storage in adipose tissue is defective (e.g., after t10c12 CLA feeding). Acknowledgments The authors thank Mrs. Legendre for fast-performance liquid chromatography analysis and helpful discussions and Mrs. Baudoin for figure construction and typing of the manuscript. This work was supported by grants from the Ministère de la Recherche et de la Technologie and the Région Bourgogne (Dijon, France).
|
[
"FIELD",
"MCLEOD",
"MILLER",
"DELANY",
"PARK",
"WANG",
"DEGRACE",
"PARK",
"IDE",
"ANGULO",
"MURAYAMA",
"SANAN",
"DEGRACE",
"FOLCH",
"MORRISON",
"CHRISTIE",
"SEBEDIO",
"LUND",
"DEGRACE",
"IVERIUS",
"VEERKAMP",
"SMITH",
"LAEMMLI",
"CHOMCZYNSKI",
"DOOLITTLE",
"ABUMRAD",
"KIRCHGESSNER",
"OKA",
"TAKAHASHI",
"CHEN",
"OKA",
"TACKEN",
"FEBBRAIO",
"YAGYU",
"SCHOONJANS",
"INOUE",
"MEMON",
"VENIANT",
"DESCAMPS",
"DICHEK",
"PERRET",
"ACTON",
"HORTON",
"TWISK"
] |
1715a9d0da5a4dffb49feb1967c2bc96_Examining maternal beliefs and human papillomavirus vaccine uptake among male and female children in_10.1016_j.pvr.2016.02.002.xml
|
Examining maternal beliefs and human papillomavirus vaccine uptake among male and female children in low-income families
|
[
"Fuchs, Erika L.",
"Rahman, Mahbubur",
"Berenson, Abbey B."
] |
Purpose
This study examines within-family differences in the uptake of the HPV vaccine and HPV-related beliefs by children's sex.
Methods
From a 2011–2013 survey of mothers of children aged 9–17 years in Texas, mothers with both male and female children (n=350) were selected.
Results
Mothers were more likely to report having initiated and completed HPV vaccination for their daughters than for their sons. Mothers did not report differences in HPV-related beliefs by children's sex. Among those who had not completely vaccinated either child, mothers were more likely to report they wanted their daughters vaccinated compared to their sons and were more likely to report feeling confident they could get their daughters vaccinated than their sons.
Conclusion
In this population, mothers were more likely to report HPV vaccination of and motivation to vaccinate daughters compared to sons, although maternal beliefs about HPV did not differ by children's sex.
|
1 Introduction The human papillomavirus (HPV) is responsible for approximately 18,000 cancers among females and 8000 cancers among males per year in the United States [1] . HPV vaccination has the potential to prevent the majority of HPV-related cancers and genital warts. While the Advisory Committee on Immunization Practices recommends that males and females receive routine HPV vaccination beginning at 11 or 12 years of age [2] , recent estimates from the National Immunization Survey – Teen of the receipt of ≥1 dose were 60.0% in adolescent females and 41.7% in adolescent males [3] . Thus far, interventions aiming to increase HPV vaccination have had limited success, with mixed results found in two recent reviews [4,5] . Studies have identified differences in barriers to HPV vaccination of sons compared to daughters, including differences in provider recommendation, concerns about safety, and parents not knowing that boys could get the vaccine [6] . Though studies have examined within-family differences in HPV vaccination intention by children's sex [7] , no studies have reported on within-family differences in HPV-related beliefs by children's sex. The aim of this study was to examine within-family differences by children's sex in HPV vaccine uptake and HPV-related beliefs. 2 Methods Between September 2011 and October 2013, women with ≥1 child aged 9–17 years were identified through review of the daily census and approached at four reproductive health clinics operated by the University of Texas Medical Branch (UTMB) to participate in a survey on HPV vaccination. Eligible participants were invited to complete a self-administered survey, available in either English or Spanish, and were reimbursed $5 for their time and effort. The UTMB Institutional Review Board approved this study. Of the 1436 women who met eligibility criteria, 1392 (97%) participated and 44 (3%) declined [8] .
A subset of the original study, mothers who had both a son and a daughter 9–17 years old ( n =350), were included in these analyses. Participants responded to questions regarding the HPV vaccine separately for their oldest daughter and oldest son in the 9–17 years age range. Mothers were asked whether their daughter/son had completed the HPV vaccine series, had started (but not completed) the series, had scheduled an appointment to receive it, or had not received any doses. Dichotomous variables were created for initiation (≥1 dose received) and completion (≥3 doses) of the vaccine series. Mothers stated their agreement with statements about their beliefs beginning with, “If my daughter/son gets HPV,” and ending with, “it could harm her/his future health,” “it could harm her/his future relationship with her/his partner,” and, “I will be devastated.” Responses were dichotomized as strongly disagreed, disagreed, or neutral versus agreed or strongly agreed. Mothers also responded to the following question on an eleven-point 0–100% scale, “If your 9–17 year old daughter/son does NOT get Gardasil, what are the chances that she/he will contract HPV?” A similar question was asked about their child developing genital warts. Responses were dichotomized as 0% versus >0%. Mothers also rated their agreement with the statements, “I want my daughter/son vaccinated against the human papillomavirus (HPV) within the next year,” and, “I feel confident that I could get Gardasil for my daughter/son.” Responses were again dichotomized as strongly disagreed, disagreed, or neutral versus agreed or strongly agreed; mothers for whom one or both children had completed the vaccine series were excluded from this analysis.
McNemar's chi-squared tests for paired samples were used to examine marginal homogeneity across children's sex in maternal beliefs and children's HPV vaccination uptake. Statistical significance was assessed at the α =0.05 level. All analyses were performed using Stata Version 14.0 [9] . 3 Results A total of 350 mothers indicated they had both a daughter and a son between 9 and 17 years of age. Most mothers were between 30 and 39 years of age, Hispanic, and married or cohabitating ( Table 1 ). Mothers were more likely to report that their daughters, compared to sons, had initiated the series and completed it, but overall vaccination was low, with 72.3% reporting no HPV vaccination for either child ( Table 2 ). There were no differences by children's sex in mothers' beliefs about HPV, perceived risk of their children contracting HPV, or perceived risk of their children developing genital warts. Among those who had not yet completely vaccinated either child ( n =277), mothers were more likely to report they wanted their daughters, compared to sons, vaccinated in the next year ( Table 2 ). Mothers were also more likely to report feeling confident they could get their daughters vaccinated than their sons. 4 Conclusions/discussion In this population, HPV vaccine uptake differed by children's sex, but maternal beliefs about HPV by children's sex were similar. These results suggest beliefs may not be driving sex differences in HPV vaccination. Mothers were more likely to report they wanted their daughters vaccinated than their sons and were more likely to report feeling confident they could get their daughters vaccinated, despite similar perceptions of risk. Since some physicians report a preference to vaccinate girls [10] , associations between confidence in getting children vaccinated by sex and provider recommendation should be further explored. This study has several strengths and limitations.
While a strength of this study was the inclusion of a diverse, low-income population, the small sample size limited our ability to conduct multivariate analyses. We focused on views and behaviors that differ by children's sex within a family, essentially controlling for maternal characteristics, though reliance on maternal report of children's HPV vaccination is subject to recall bias. The survey used in this study had not been validated. Mothers did not report different perceptions about how HPV would impact their sons compared to daughters, yet were less likely to vaccinate sons. This may lead to male adolescents being exposed to vaccine-preventable strains of HPV prior to initiating the series. Future interventions should address the disparity in uptake by ensuring parents receive both adequate information about HPV vaccination and equal access to the vaccine for their sons and daughters. Disclaimer This work reflects the opinions of the authors and does not represent the opinions or influences of the National Institutes of Health. Conflict of interest statement The authors declare that there are no conflicts of interest. Acknowledgments Federal support for this study was provided by an institutional training grant ( T32HD055163 : PI AB Berenson) from the Eunice Kennedy Shriver National Institute of Child Health and Human Development to Dr. Fuchs as a postdoctoral fellow.
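As a minimal sketch of the paired-sample analysis described in the Methods, McNemar's test compares only the discordant sibling pairs; the counts below are hypothetical illustrations, not the study's data:

```python
import math

def mcnemar(b, c):
    """McNemar's chi-squared test for paired binary outcomes.
    b: pairs where only the daughter was vaccinated;
    c: pairs where only the son was vaccinated.
    Concordant pairs do not enter the statistic."""
    chi2 = (b - c) ** 2 / (b + c)
    # For a chi-square variable with 1 df, P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical discordant-pair counts among mother-reported sibling pairs
chi2, p = mcnemar(b=40, c=10)
print(f"chi2 = {chi2:.2f}, p = {p:.1e}")  # chi2 = 18.00, highly significant
```

In practice a statistics package (such as the Stata procedures used in the paper) would be preferred; the sketch only shows that the statistic depends on the two discordant cells.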
|
[
"CENTERSFORDISEASECONTROLANDPREVENTION",
"PETROSKY",
"REAGANSTEINER",
"FU",
"NICCOLAI",
"HOLMAN",
"REITER",
"GROSS",
"ALLISON"
] |
9bce4d1e96ec4fe0a99cdaa7d6fce8b4_Microbiota composition data for wild and captive bluestreak cleaner wrasse Labroides dimidiatus Vale_10.1016_j.dib.2020.106120.xml
|
Microbiota composition data for wild and captive bluestreak cleaner wrasse Labroides dimidiatus (Valenciennes, 1839)
|
[
"Okomoda, Victor Tosin",
"Nurul, Ashyikin Noor Ahmad",
"Danish-Daniel, Abdullah Muhd",
"Oladimeji, Abraham Sunday",
"Abol-Munafi, Ambok Bolong",
"Alabi, Korede Isaiah",
"Nur, Asma Ariffin"
] |
Labroides dimidiatus is known as the “doctor fish” because of its role in removing parasites and infectious pathogens from the body of other fishes. This role, performed in both wild and captive conditions, could also represent a novel route of parasite transmission mediated by the cleaning activity of the fish. Yet, there is a paucity of data on the microflora associated with this fish, which is important for tracking disease infection and generally monitoring the health status of the fish. This article, therefore, represents the first dataset for the microbiota composition of wild and captive L. dimidiatus. Wild fish samples and carriage water were collected around the corals of Karah Island in Terengganu, Malaysia. The captive samples, however, were obtained from well-known ornamental fish suppliers in Terengganu, Malaysia. Thereafter, the bacteria present on the skin, in the stomach, and in the aquarium water were enumerated using culture-independent approaches and Next Generation Sequencing (NGS) technology. Data obtained from the three metagenomic libraries using NGS analysis gave 1,426,740 amplicon sequence reads comprising 508 operational taxonomic units (OTUs) for wild samples, and 3,238,564 valid reads and 828 OTUs for captive samples. All sequence reads were deposited in GenBank (Accession numbers SAMN14260247, SAMN14260248, SAMN14260249, SAMN14260250, SAMN14260251, and SAMN14260252). The dataset presented is associated with the research article “16S rDNA-Based Metagenomic Analysis of Microbial Communities Associated with Wild Labroides dimidiatus From Karah Island, Terengganu, Malaysia” [1]. The microbiota data presented in this article can be used to monitor the health and wellbeing of the ornamental fish, especially under captivity, hence preventing possible cross-infection.
|
Specifications Table
Subject: Biological Science
Specific subject area: Aquatic Science
Type of data: Figures and Tables
How data were acquired: DNA extraction and sequencing; Microsoft Excel software for computation of bacterial composition
Data format: Raw and Analyzed
Parameters for data collection: Fishes used for the study had no physical abnormalities and showed no signs of stress.
Description of data collection: In this data article, DNA was extracted from skin, gut, and carriage water samples collected from Labroides dimidiatus obtained from both wild and captive environments. The 16S rRNA gene was amplified from the samples, sequenced, and the reads were deposited in GenBank.
Data source location: Institution: Universiti Malaysia Terengganu; City/Town/Region: Terengganu; Country: Malaysia
Data accessibility: Deposited in GenBank (Accession numbers SAMN14260247, SAMN14260248, SAMN14260249, SAMN14260250, SAMN14260251, and SAMN14260252) and with the article
Related research article: A.N.A. Nurul, A. Muhd Danish-Daniel, V.T. Okomoda, A.A. Nur, 2019. 16S rDNA-Based Metagenomic Analysis of Microbial Communities Associated with Wild Labroides dimidiatus From Karah Island, Terengganu, Malaysia. Biotechnology Reports. 21: e00303. https://doi.org/10.1016/j.btre.2019.e00303 [1]
Value of the Data • Microbiota associated with Labroides dimidiatus are presented for the very first time. • Data can be used by ornamental fish hobbyists and other scientists working in the area of fish microbiota, especially as it relates to ornamental fishes. • The data are a reference for future studies and useful for comparison with the microbiota of other ornamental fishes obtained from the wild or maintained in captivity.
• Data can assist in the monitoring of the health status of the fish, as any substantial variation in the structure or abundance of the bacteria presented in this research can be used as an early sign of disease infection in the species (especially under captivity). • The analysis of the data as given in the Microsoft Excel file can serve as a guide for future studies in processing data for presentation and publication. 1 Data Description The raw data deposited in GenBank represent the sequence reads of the bacteria from the fish skin (SAMN14260247), stomach (SAMN14260248) and carriage water (SAMN14260249) in captivity, as well as those from the wild (SAMN14260250, SAMN14260251, and SAMN14260252, respectively). Data presented in the Microsoft Excel file (Filename: Chart in Microbial of Labroides dimidiatus ) are the various representations of the compositions in graphs. The percentage of the bacterial phyla associated with L. dimidiatus in both environments is presented in Table 1 , while the relative abundance of all the bacterial phyla is presented in Figure 1 . The bacterial phyla abundance obtained in the captive and wild environments is presented in Figs. 2 and 3 , respectively. Also, Fig. 4 shows the Venn diagram of the numbers of shared and exclusive bacterial families observed in the captive and wild samples of L. dimidiatus. Lastly, the standard Illumina forward and reverse primers used for this research are presented in Table 2 . 2 Experimental Design, Materials and Methods L. dimidiatus samples weighing between 0.5 and 2.8 g were obtained from Terengganu, Malaysia. The wild samples were collected from the corals of Karah Island, while captive samples were obtained from well-known ornamental fish suppliers in Terengganu who had also obtained them from the wild and maintained them in an aquarium for a month.
For the water samples, collection was done for the ocean and aquarium water in sterilized blue cap bottles (1 L volume), placed on ice [2] . The fish and the carriage water were subsequently taken to the AQUATROP Laboratory for further analysis. In the laboratory, ten healthy fish were killed by pithing after being appropriately tranquilized with tricaine methane sulphonate (MS222) at 150 mg/L [3] . Skin mucus samples were obtained by dorsolaterally scraping the surface of the dead specimens using a sterile scalpel [1] . The samples were then processed using the method of Balcázar et al. [4] before storage at −80 °C for further analysis. The same technique of Balcázar et al. [4] was also adopted for gut sample collection, processing, and storage for analysis. The DNA from the skin and gut samples was extracted using a commercial DNA kit (NucleoSpin® Tissue Kit, Machery-Nagel, Germany) without any modification of the manufacturer's protocol. However, water samples were first conditioned according to the method previously used by Wolf et al. [5] before DNA was extracted from them. The amplification of the 16S rRNA gene was achieved using the universal bacterial primer set 63F (5′-CAGGCCTAACACATGCAAGTC-3′) and 1389R (5′-ACGGGCGGTGTGTACAAG-3′) reported by Hongo et al. [6] , following the PCR reaction volume and protocol earlier used by Nurul et al. [1] . In line with the method previously used by Nurul et al. [1] , a second PCR was done using 1 μL of the amplicon. Thereafter, the V3 hypervariable region of the 16S rRNA genes was selected according to Bartram et al. [7] . The V3 region amplification of the 16S rRNA gene was then done with the 341F and 518R universal primers reported by Muyzer and de Waal [8] . All the primers used for the construction of the Illumina library are presented in Table 1 .
Because the V3-specific priming regions were complementary to the standard Illumina primers, they were composed of a 6-bp indexing sequence to allow for multiplexing. The amplification primers were then designed with Illumina adapters. The PCR amplification conditions were according to an earlier report by Nurul et al. [1] . Using gel electrophoresis on 2% agarose, the PCR products were viewed to confirm that the desired size was obtained, and clean-up was done accordingly. Adapter sequences necessary for binding to the flow cell are denoted by lowercase letters, while binding sites for the Illumina sequencing primers are the underlined lowercase letters. Bold uppercase letters highlight the indexed sequences, while the V3 region primers for the 341F and 518R primers are presented in regular uppercase letters [7] . The generated “reads” were processed according to the method adopted by Schloss et al. [9] (i.e. trimming and assembling using the software Mothur). Overlapping regions within Illumina paired-end reads were aligned to generate “contigs”. Paired-end sequences with a mismatch and those with ambiguous base calls were discarded. Thereafter, based on naïve Bayesian classification (RDP classifier) following Wang et al. [10] , the sequences were assigned taxonomic affiliations. The sequences were then assigned to operational taxonomic units of six samples of the 16S rRNA gene fragments after trimming, screening, and alignment. Thereafter, the server was accessed to download the fastq files. A tab-delimited “oligos” file containing the primer and barcode information was created. Then, the data were analyzed using the Greengenes reference files obtained from the Mothur website. Following the method of Cole et al. [11] , a pairwise similarity cutoff of 97% using the Ribosomal Database Project pyrosequencing pipeline was used to define the operational taxonomic units (OTUs) of the bacterial colonies.
All the sequence reads generated were deposited in GenBank with Accession numbers SAMN14260247, SAMN14260248, SAMN14260249, SAMN14260250, SAMN14260251, and SAMN14260252. Ethics Statement The approval for the experimental protocols used for this research was obtained from the Universiti Malaysia Terengganu committee on research. This includes, but is not limited to, the methods used for the care and use of animal specimens, which were aligned with international, national, and institutional guidelines. Declaration of Competing Interest The authors wish to declare that there are no conflicts of interest whatsoever, be it financial or personal, and none was perceived to have influenced the outcome of the research reported in this data article. Acknowledgments The authors owe a debt of gratitude to the management and staff of AQUATROP as well as the School of Fisheries and Food Sciences, Universiti Malaysia Terengganu, in whose facility the data presented in this research were obtained. We are also grateful to Miss Nor Aiffa Wahyu Abu Bakar and appreciate the assistance rendered by the staff of Net loft during the sample collection for this research. The financing of this research was made possible by a grant obtained from the Ministry of Higher Education Malaysia, utilized by Ashyikin Noor Ahmad Nurul as part of her MSc. thesis. Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.dib.2020.106120 . Appendix Supplementary materials Image, application 1 Image, application 2
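The 97% pairwise-similarity cutoff used above to define OTUs can be illustrated with a minimal greedy clustering sketch; the toy sequences below are invented for demonstration, and the actual pipeline used Mothur with the RDP classifier as described:

```python
def identity(a, b):
    """Fraction of matching positions between two equal-length aligned sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cluster_otus(seqs, cutoff=0.97):
    """Greedy clustering: each read joins the first OTU whose representative
    it matches at >= cutoff identity; otherwise it founds a new OTU."""
    reps, otus = [], []
    for s in seqs:
        for i, rep in enumerate(reps):
            if identity(s, rep) >= cutoff:
                otus[i].append(s)
                break
        else:
            reps.append(s)
            otus.append([s])
    return otus

base = "ACGT" * 25               # 100-bp toy read
near = "TT" + base[2:]           # 98% identical -> joins base's OTU
far = "TTTTTTTTTT" + base[10:]   # 92% identical -> founds a second OTU
print(len(cluster_otus([base, near, far])))  # 2
```

Real OTU pipelines cluster aligned reads and pick representatives more carefully; this sketch only demonstrates how the 97% cutoff partitions reads.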
|
[
"NURUL",
"SMITH",
"WAGNER",
"BALCAZAR",
"WOLF",
"HONGO",
"BARTRAM",
"MUYZER",
"SCHLOSS",
"WANG",
"COLE"
] |
893deefc46c24ae1ae5aa8f513166638_Evaluation of optimum adsorption conditions for Ni II and Cd II removal from aqueous solution by mod_10.1016_j.bjbas.2016.03.001.xml
|
Evaluation of optimum adsorption conditions for Ni (II) and Cd (II) removal from aqueous solution by modified plantain peels (MPP)
|
[
"Garba, Zaharaddeen N.",
"Ugbaga, Nkole I.",
"Abdullahi, Amina K."
] |
The optimum conditions for the adsorption of Ni (II) and Cd (II) ions onto modified plantain peel (MPP) from aqueous solution were investigated. The effects of three adsorption variables (pH, MPP dose and initial adsorbate concentration) were studied using central composite design (CCD), a subset of response surface methodology (RSM). Quadratic models were developed for both Ni (II) and Cd (II) percentage removals. The optimum adsorption conditions obtained were a pH of 4.36, an MPP dose of 0.82 g and an initial concentration of 120 mg/L, with a desirability of 1.00, which gave good monolayer adsorption capacities of 77.52 and 70.92 mg/g for Ni (II) and Cd (II), respectively. The adsorption data were modelled using the Langmuir and Freundlich adsorption isotherms; the equilibrium adsorption of both Ni (II) and Cd (II) on MPP obeyed the Langmuir model, and the pseudo-second-order kinetic model best described the two adsorption processes.
|
1 Introduction Air, food, soil and water have been reported as the media through which heavy metals such as copper, cadmium, nickel, lead, and zinc are introduced into the environment ( Garba et al., 2015c; Sadaf et al., 2015 ). These heavy metals are reported to be hazardous, resulting in damage to ecosystems as well as human health ( Ozdes et al., 2009; Tuzen et al., 2009 ), especially if their concentration exceeds the accepted limit ( Alslaibi et al., 2013 ). Their main sources include wastewater discharged from hospitals ( Verlicchi et al., 2010 ) and from different industries such as Cd–Ni battery, metal plating and alloy manufacturing ( Khavidaki and Aghaie, 2013; Kobya et al., 2005; Krishnan et al., 2011; Kula et al., 2008 ). The presence of these metals in waste streams and ground water is a very serious environmental concern, since these metal ions are toxic to various life forms; therefore, removing them as well as controlling their levels in waste waters is crucial ( Serencam et al., 2008 ). Chemical precipitation, ion exchange, electrodialysis, solvent extraction, coagulation, evaporation and adsorption are among the most prevalent technologies for the removal of metal ions from aqueous solutions ( Garba and Afidah, 2014; Garba et al., 2014, 2015b; Mohammadi et al., 2015; Mohan et al., 2008; Mondal et al., 2015 ), with adsorption being the most widely used method for removing contaminants from wastewater ( Farghali et al., 2013; Garba et al., 2015a ). Sorption methods are considered flexible and easy to operate, with much less sludge disposal problems ( Cao et al., 2014; Mohammadi et al., 2015 ). Various adsorbents have been reported in the literature for the removal of heavy metal ions; however, new adsorbents with local availability, high adsorption capacity as well as economic suitability are still needed.
This has prompted many researchers to investigate cheaper substitutes such as zeolites, silica gel, chitosan, clay materials and agricultural wastes ( Mekatel et al., 2015; Shirzad-Siboni et al., 2015; Tsai et al., 1998 ). Response surface methodology (RSM) is a mathematical modelling approach that has been reported to be a very useful tool in optimizing the preparation conditions of activated carbons ( Garba and Afidah, 2015 ), but not much has been reported on its application in optimizing adsorption process parameters. Therefore, the innovative aspect of this research is to optimize the key parameters for an effective adsorption of Ni (II) and Cd (II) from an aqueous solution using CCD. CCD was chosen to evaluate the interaction of the most crucial adsorption parameters, namely pH, MPP dose and initial concentration. 2 Materials and methods 2.1 Reagents All the chemicals used in this work were of analytical reagent grade, purchased from Sigma-Aldrich and Merck (Darmstadt, Germany); they were used without any further purification. All the glassware used was washed and rinsed several times. Nickel and cadmium solutions and standards were prepared by using analytical grade nickel chloride (NiCl 2 ⋅6H 2 O) and cadmium chloride (CdCl 2 ) with distilled water. The solutions of Ni (II) and Cd (II) were prepared from stock solutions containing 1000 mg/L of Ni (II) and Cd (II), respectively. 2.2 Preparation of adsorbent material Plantain peels used in this study were collected from local food sellers, restaurants and eateries around Samaru and Sabon Gari Local Government of Kaduna state, Nigeria. They were washed and sun dried for 7 days. The dried plantain peels were then crushed into smaller particles in a mortar and sieved with a 150 µm sieve until a reasonable quantity of that particle size was obtained, followed by repeated washing to eliminate dust and other impurities.
It was then dried in an oven at 25 °C for about 48 h, after which it was stored in sterilized closed glass bottles prior to use as an adsorbent. The powdered plantain peels were then modified by immersion in 5% NaOH solution and autoclaving at 121 °C for 15 min at 10 psi. After keeping at 25 °C for 48 h, the material was filtered and washed repeatedly with distilled water until clear water with neutral pH was obtained ( Ashrafi et al., 2014 ). The modified plantain peel (MPP) was then dried at 25 °C for 48 h and used for all the adsorption experiments. 2.3 Metal ions adsorption experiments In order to evaluate the significance of the variables on the percentage removal of Ni (II) and Cd (II), the adsorption experiments were carried out using a batch procedure by shaking 100 mL of the metal ion solutions in a 250 mL Erlenmeyer flask according to the pH, MPP dose and initial concentration shown in Table 1 . The coded points and their corresponding values are presented in Table 2 . During the adsorption process, the flasks were agitated on a mechanical shaker at 150 rpm. The aqueous samples were analysed using an inductively coupled plasma-atomic emission spectrometer. The adsorption efficiencies were evaluated using Equation (1): Adsorption efficiency (%) = [(C_o − C_e)/C_o] × 100 (1), where C_o and C_e are the liquid-phase concentrations at the initial and equilibrium states (mg/L), respectively. The equilibrium amount q_e (mg/g) adsorbed per unit mass of adsorbent was evaluated using Equation (2): q_e = (C_o − C_e)V/W (2), where q_e (mg/g) is the equilibrium amount of the metal ions adsorbed per unit mass of MPP, V (L) is the volume of the solution and W (g) is the mass of MPP used. The kinetic tests were identical to the equilibrium tests. The aqueous samples were taken at preset time intervals and the metal ion concentrations were measured.
The amount adsorbed at time t, q_t (mg/g), was calculated using Equation (3): q_t = (C_o − C_t)V/W (3), where C_o and C_t (mg/L) are the liquid-phase concentrations initially and at any time t, respectively. 2.4 Adsorption isotherms and kinetic models 2.4.1 Adsorption isotherm The equilibrium characteristics of this adsorption study were described through the Langmuir and Freundlich models. The Langmuir model presumes monolayer adsorption onto a surface containing a finite number of adsorption sites ( Langmuir, 1916 ). Its linear form is given as: C_e/q_e = 1/(K_L·Q_a^0) + C_e/Q_a^0 (4), where Q_a^0 (mg/g) and K_L (L/mg) are Langmuir constants related to adsorption capacity and rate of adsorption, respectively. The essential characteristics of the Langmuir model can be described by the dimensionless separation factor R_L, given as: R_L = 1/(1 + K_L·C_o) (5), where C_o is the highest initial solute concentration. R_L values indicate whether the adsorption is unfavourable (R_L > 1), linear (R_L = 1), favourable (0 < R_L < 1), or irreversible (R_L = 0). The Freundlich model, on the other hand, assumes heterogeneous surface energies. Its linear form is given by the following equation ( Freundlich, 1906 ): log q_e = log K_F + (1/n) log C_e (6), where K_F and n are Freundlich constants. Generally, n > 1 suggests favourable adsorption. The constant n has also been used to evaluate whether the adsorption process is physical (n > 1), chemical (n < 1) or linear (n = 1) ( Martins et al., 2015 ). 2.4.2 Kinetic models The kinetic data were fitted using the pseudo-first-order and pseudo-second-order models. The rate constant of adsorption is determined from the pseudo-first-order equation given as ( Lagergren and Svenska, 1898 ): log(q_e − q_t) = log q_e − (k_1/2.303)t (7), where q_e and q_t are the amounts of metal ions adsorbed (mg/g) at equilibrium and at time t (h), respectively, while k_1 is the adsorption rate constant (h⁻¹).
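As a sketch of how the linearized Langmuir model (Eq. 4) is fitted in practice, ordinary least squares can be applied to C_e/q_e versus C_e; the equilibrium data below are synthetic, generated from assumed constants rather than taken from this study:

```python
def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic equilibrium data from assumed Q = 77.5 mg/g and K_L = 0.05 L/mg
Q_true, KL_true = 77.5, 0.05
Ce = [5.0, 10.0, 20.0, 40.0, 80.0, 120.0]
qe = [Q_true * KL_true * c / (1 + KL_true * c) for c in Ce]

# Linear Langmuir plot (Eq. 4): Ce/qe = 1/(K_L*Q) + Ce/Q
slope, intercept = linfit(Ce, [c / q for c, q in zip(Ce, qe)])
Q_fit = 1 / slope                 # monolayer capacity, mg/g
KL_fit = slope / intercept        # Langmuir constant, L/mg
RL = 1 / (1 + KL_fit * max(Ce))   # separation factor (Eq. 5); 0 < RL < 1 is favourable
print(round(Q_fit, 2), round(KL_fit, 3), round(RL, 3))
```

With noise-free synthetic data the regression recovers the assumed constants exactly; real isotherm data would scatter around the fitted line.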
The pseudo-second-order equation based on equilibrium adsorption is expressed as Equation (8) ( Ho and Mckay, 1998 ): t/q_t = 1/(k_2·q_e^2) + t/q_e, where k_2 (g/(mg·h)) is the rate constant of second-order adsorption. As the pseudo-first-order and pseudo-second-order kinetic models could not identify the sorption mechanism, the kinetic results were further analysed for the diffusion mechanism by applying the intraparticle diffusion model (Equation (9)) ( Weber and Morris, 1963 ): q_t = k_ip·t^0.5 + C, where k_ip is the rate constant of the intraparticle diffusion equation and C gives information about the boundary layer thickness: a larger value of C is associated with a stronger boundary layer diffusion effect. If the adsorption process follows the intraparticle diffusion model, then the plot of q_t versus t^0.5 will be linear; if the plot passes through the origin, intraparticle diffusion is the sole rate-limiting step. Otherwise, some other mechanism is involved alongside intraparticle diffusion ( Tan et al., 2009 ). 2.5 Design of experiments using response surface methodology (RSM) In this work, a standard RSM design known as central composite design (CCD) was applied to study the Ni (II) and Cd (II) adsorption parameters (pH, MPP dose and initial adsorbate concentration). The detailed CCD procedure was described in our previously published paper ( Garba and Afidah, 2014 ). Design-Expert statistical software (version 6.0.8, Stat-Ease, Inc., Minneapolis, MN, USA) was used for the model fitting and significance testing of the Ni (II) and Cd (II) adsorption efficiencies. 3 Results and discussion 3.1 Development of regression model equations using CCD The design matrix comprising the preparation variables, their ranges and the responses (Y_Ni and Y_Cd) is displayed in Table 2 . 
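The pseudo-second-order linearization (Equation 8) can be illustrated the same way; the rate constant and equilibrium uptake below are assumed values, not fitted results from this work.

```python
import numpy as np

# Sketch of the pseudo-second-order linearization t/q_t = 1/(k_2*q_e^2) + t/q_e
# (Equation 8), applied to synthetic kinetic data from assumed constants.
k2_true, qe_true = 0.05, 60.0                    # g/(mg h), mg/g (assumed)
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 24.0])
qt = k2_true * qe_true**2 * t / (1.0 + k2_true * qe_true * t)  # integrated form

slope, intercept = np.polyfit(t, t / qt, 1)      # linear regression of t/q_t on t
qe_fit = 1.0 / slope                             # equilibrium uptake from the slope
k2_fit = slope**2 / intercept                    # k_2 = 1/(intercept * q_e^2)
print(qe_fit, k2_fit)
```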
To compare and correlate the responses, CCD was applied to develop the polynomial regression equations, which were all quadratic expressions as suggested by the software. The model expression was selected in accordance with the sequential model sum of squares, based on the highest-order polynomial for which the model was not aliased and the additional terms were significant ( Sahu et al., 2010 ). The correlation between predicted and experimental data was evident, as shown by the models' R^2 values of 0.9448 for Ni (II) and 0.9383 for Cd (II), which were within the desirability range ( Gómez Pacheco et al., 2012 ). The final empirical model equations for the percentage removal of Ni (II) (Y_Ni) and Cd (II) (Y_Cd) are given as Equations (10) and (11), respectively: (10) Y_Ni = 86.37 + 8.31x_1 + 9.84x_2 + 9.96x_3 − 1.37x_1^2 − 3.65x_2^2 − 4.79x_3^2 − 6.64x_1x_2 − 5.76x_1x_3 − 4.62x_2x_3 (11) Y_Cd = 83.96 + 9.24x_1 + 15.51x_2 + 3.21x_3 − 0.28x_1^2 − 7.30x_2^2 + 0.90x_3^2 − 11.95x_1x_2 − 3.37x_1x_3 − 6.06x_2x_3 The positive and negative signs before the terms indicate synergetic and antagonistic effects of the respective variables ( Garba and Afidah, 2014 ). The appearance of a single variable in a term signifies a one-factor effect, two variables imply a two-factor interaction effect, and a second-order term indicates a quadratic effect ( Ahmad and Alrozi, 2010 ). The Ni (II) and Cd (II) percentage removals ranged from 22.46 to 99.79% and from 10.44 to 99.43%, respectively; the full experimental design matrix and the values of the responses obtained are presented in Table 1 . The quadratic model was selected by the software for both responses. The six replicates at the centre point, runs 15–20, were conducted to determine the experimental error and the reproducibility of the data. 
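Since the coded-variable polynomials are given explicitly, they can be evaluated directly; the snippet below copies the coefficients of Equations (10) and (11) and checks the centre-point predictions.

```python
# Evaluating the fitted quadratic models at coded factor levels; the
# coefficients are copied from Equations (10) and (11).
def y_ni(x1, x2, x3):
    return (86.37 + 8.31*x1 + 9.84*x2 + 9.96*x3
            - 1.37*x1**2 - 3.65*x2**2 - 4.79*x3**2
            - 6.64*x1*x2 - 5.76*x1*x3 - 4.62*x2*x3)

def y_cd(x1, x2, x3):
    return (83.96 + 9.24*x1 + 15.51*x2 + 3.21*x3
            - 0.28*x1**2 - 7.30*x2**2 + 0.90*x3**2
            - 11.95*x1*x2 - 3.37*x1*x3 - 6.06*x2*x3)

# At the centre point (all factors at the zero level) the predictions reduce
# to the intercepts:
print(y_ni(0, 0, 0), y_cd(0, 0, 0))  # 86.37 83.96
```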
3.2 Statistical analysis To evaluate the individual, interaction and quadratic effects of the variables influencing the removal efficiency of both Ni (II) and Cd (II), analysis of variance (ANOVA) was performed. The sum of squares and mean square of each factor, the F-value and the Prob. > F values are shown in Table 3 for both Ni (II) and Cd (II) percentage removals. ANOVA validated the importance and adequacy of the models. In Table 3 , dividing the sum of squares of each variation source (the model and the error variance) by the respective degrees of freedom gives the mean square values. Model terms with a Prob. > F value less than 0.05 are considered significant ( Ahmad and Alrozi, 2010 ). For Ni (II) percentage removal, Table 3 shows a model F-value of 19.00 and a Prob. > F of <0.0001, signifying the model's significance. The significant model terms were x_1, x_2, x_3, x_1^2 and x_2^2, with only x_3^2, x_1x_2, x_1x_3 and x_2x_3 insignificant to the response. Also from Table 3 , for Cd (II) an F-value of 16.91 and a Prob. > F of <0.0001 indicated that this model was also significant. In this case x_1, x_2, x_2^2, x_1x_2 and x_2x_3 were the significant model terms, the insignificant terms being x_3, x_1^2, x_3^2 and x_1x_3. From the statistical results obtained, the models were suitable for predicting both Ni (II) and Cd (II) removals within the range of the studied variables. Additionally, Fig. 1(a) and (b) show the predicted versus experimental values for Ni (II) and Cd (II) removals respectively, portraying that the developed models successfully captured the relation between the adsorption process variables and the responses. 
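The mean-square and F-value arithmetic described above is simple enough to sketch; the sums of squares below are hypothetical stand-ins, not the entries of Table 3, chosen so the model F-value matches the reported 19.00 for a 3-factor quadratic CCD with 20 runs (9 model and 10 residual degrees of freedom).

```python
# Sketch of the ANOVA arithmetic: mean square = sum of squares / degrees of
# freedom, and the model F-value is the ratio of the model and error mean
# squares. The sums of squares here are hypothetical, not Table 3 values.
def mean_square(sum_sq, df):
    return sum_sq / df

ss_model, df_model = 5130.0, 9   # hypothetical model sum of squares
ss_error, df_error = 300.0, 10   # hypothetical residual sum of squares

f_value = mean_square(ss_model, df_model) / mean_square(ss_error, df_error)
print(round(f_value, 2))  # 19.0
```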
3.3 Individual and interaction effects of the variables From Table 3 , the individual effects of initial concentration and MPP dose on Ni (II) percentage removal were the strongest, with the highest F-values of 49.61 and 45.03 respectively, whereas the effect of pH on this response was weaker, with an F-value of 34.50. The quadratic effect of solution pH was more pronounced, with an F-value of 12.11, while those of MPP dose and initial concentration were low, with F-values of 6.43 and 0.99 respectively. The interaction effects of pH–MPP dose and pH–initial concentration were comparable, with F-values of 11.67 and 10.08 respectively, while the MPP dose–initial concentration interaction was less significant, with an F-value of 5.66. Fig. 2(a) and (b) show the 3D response surface plots for the studied variables: Fig. 2(a) demonstrates the effect of pH and MPP dose with initial concentration fixed at the zero level (C_o = 88 mg/L), whereas Fig. 2(b) demonstrates the effect of pH and initial concentration with MPP dose fixed at the zero level (W = 0.6 g), on the same response (Y_Ni). Both figures show that the percentage Ni removal increased with an increase in all the studied variables. Also from Table 3 , the three variables (pH, MPP dose and initial concentration) had uneven impacts on the Cd (II) percentage removal. Initial concentration had the least effect, as its F-value was smallest and its variation did not have a significant effect on the process. The individual factors of pH and MPP dose, as well as their interaction, had the most significant effect on Cd (II) adsorption, as shown by their large F-values in Table 3 . The F-values for pH (x_1), MPP dose (x_2) and initial concentration (x_3) were 27.17, 71.15 and 3.28 respectively. 
The 3D response surface analysis of the pH–MPP dose interaction effect with initial concentration fixed (C_o = 88 mg/L), and of the MPP dose–initial concentration interaction with pH fixed (pH = 7), for Cd (II) adsorption are shown in Fig. 3(a) and (b) respectively. Cd (II) adsorption increased with increasing pH and MPP dose. 3.4 Process optimization The optimization of Ni (II) and Cd (II) adsorption onto MPP was carried out by a multiple-response method called the desirability function, applied using Design-Expert software (Stat-Ease, Inc., Minneapolis, MN, USA). In the optimization analysis, the target criterion was set as the maximum value for each of the two responses. The optimum adsorption conditions obtained were an initial concentration of 120 mg/L and an MPP dose of 0.82 g at pH 4.36, with a desirability of 1.00. At the optimum conditions, the predicted Ni (II) and Cd (II) removals were 94.88% and 94.82%, respectively, while the experimental values obtained were 93.05% and 94.21% respectively, showing good agreement between the experimental values and those predicted from the models, with relatively small errors of only 1.83 and 0.61 for Ni (II) and Cd (II) adsorption, respectively. Mohammadi et al. (2015) reported a pH of 5.5 as the optimum for the adsorption of Ni (II) and Cd (II) onto dolomite powder, observing metal precipitation at pH greater than 6 ( Mohammadi et al., 2015 ). Alslaibi et al. (2013) and Subbaiah et al. (2011) also reported an optimum pH of 5 for Cd (II) adsorption onto activated carbon from olive stone ( Alslaibi et al., 2013 ) and for its biosorption by fungus ( Trametes versicolor ) biomass ( Subbaiah et al., 2011 ). Gutha et al. 
(2015) also observed no appreciable increase in the percentage removal of Ni (II) onto tomato leaf powder at a pH and adsorbent dose greater than 5.5 and 0.4 g respectively, revealing these as the optimum adsorption conditions ( Gutha et al., 2015 ). 3.5 Adsorption isotherm To select the best-fitting model for the experimental data, the chi-square statistic (χ^2) was incorporated, since the correlation coefficient (R^2) alone may not justify the selection of the best adsorption model: R^2 only represents the fit between the experimental data and the linearized forms of the isotherm equations, whereas χ^2 represents the fit between the experimental and predicted values of the adsorption capacity. The lower the χ^2 value, the better the fit. Table 4 summarizes all the constants, R^2 and χ^2 values obtained from the isotherm models applied to the adsorption of the two metal ions on MPP. The Q_a^0 values obtained for Ni (II) and Cd (II) from the linear Langmuir plot ( Fig. 4a ) were 77.52 and 70.92 mg/g respectively. According to the fitting results listed in Table 4 , the Langmuir isotherm model appeared much more applicable than the Freundlich model, having the highest R^2 as well as the lowest χ^2 values. The fit of the Langmuir model to the adsorption process implies that the metal ions from the bulk solution were adsorbed as a specific monolayer on a homogeneous surface. As can also be seen from Table 4 , all the R_L values lie between 0 and 1, confirming that the adsorption processes were favourable under the studied conditions. The n values obtained from the Freundlich plot ( Fig. 4b ) were greater than unity for both adsorbates, further indicating favourable adsorption conditions as well as physical adsorption processes for the two metal ions in aqueous solution. 
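The χ²-based model selection described above amounts to the following comparison; the uptake values are invented for illustration.

```python
# Illustrative chi-square model comparison:
# chi2 = sum((q_exp - q_model)^2 / q_model); the lower value marks the better
# fit. All uptake values below are invented for the demonstration.
def chi_square(q_exp, q_model):
    return sum((e - m) ** 2 / m for e, m in zip(q_exp, q_model))

q_exp      = [12.1, 25.3, 41.0, 55.2]
q_langmuir = [12.0, 25.0, 41.5, 55.0]   # hypothetical Langmuir predictions
q_freund   = [10.5, 27.0, 44.0, 52.0]   # hypothetical Freundlich predictions

best = min(("Langmuir", q_langmuir), ("Freundlich", q_freund),
           key=lambda pair: chi_square(q_exp, pair[1]))[0]
print(best)  # Langmuir
```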
The monolayer adsorption capacity (Q_a^0) values of 77.52 mg/g for Ni (II) and 70.92 mg/g for Cd (II) observed in this study compare well with those of other adsorbents reported in the literature, such as 5.41 mg/g for dolomite powder ( Mohammadi et al., 2015 ), 42.41 mg/g for modified chitosan ( Cheng et al., 2014 ), 44.44 mg/g for Bacillus laterosporus MTC C 1628 ( Kulkarni and Shetty, 2014 ) and 58.82 mg/g for tomato leaf powder ( Gutha et al., 2015 ) in Ni (II) adsorption, as well as 11.72 mg/g for olive stone activated carbon ( Alslaibi et al., 2013 ), 1.62 mg/g for dolomite powder ( Mohammadi et al., 2015 ), 18.34 mg/g for chemically modified onion skin ( Agarry et al., 2015 ) and 85.47 mg/g for B. laterosporus MTC C 1628 ( Kulkarni and Shetty, 2014 ) in Cd (II) adsorption. 3.6 Kinetic studies The values of q_e,cal, k_1, R^2 and χ^2 obtained from the pseudo-first-order plot and of q_e,cal, k_2, R^2 and χ^2 obtained from the pseudo-second-order plot for the adsorption of the two metal ions on MPP are tabulated in Table 5 . The R^2 values obtained from the pseudo-first-order model did not show a consistent trend, and the experimental q_e values (q_e,exp) did not agree with the calculated values (q_e,cal) obtained from the linear plot ( Fig. 5a ). This shows that the adsorption of the two metal ions onto MPP does not follow pseudo-first-order kinetics. By contrast, all the R^2 values obtained from the pseudo-second-order model were closer to unity, with good agreement between the experimental and calculated q_e values obtained from the linear plot ( Fig. 5b ), confirming that the adsorption of the two metal ions on MPP fitted well to the pseudo-second-order model. The values of k_ip, C and the correlation coefficients R^2 obtained from the intraparticle diffusion plot ( Fig. 5c ) are also given in Table 5 . 
A higher diffusion rate constant for Cd (II) can be observed from the table, indicating that the diffusion of Cd (II) was more rapid than that of Ni (II). It can also be observed from Fig. 5(c) that the linear plots for both adsorbates did not pass through the origin, which suggests that intraparticle diffusion was present but was not the only rate-controlling step; other rate-controlling steps might also be involved in the process ( Garba et al., 2014 ). 4 Conclusion Three adsorption parameters were optimized with the help of CCD, a subset of RSM, for the adsorption of two heavy metal ions onto modified plantain peel (MPP), with their percentage removals (Y_Ni and Y_Cd) as the analysis responses. Based on the results obtained, the three factors (pH, MPP dose and initial concentration) had varying impacts on the adsorption processes, with MPP dose and initial concentration exerting comparable influence on Ni (II) adsorption, while MPP dose had the greatest effect on Cd (II) adsorption. The highest removal percentages of the adsorbates were obtained at the optimum conditions of pH 4.36, MPP dose 0.82 g and initial concentration 120 mg/L, with a desirability of 1.00. Analysis of the equilibrium and kinetic data showed the Langmuir and pseudo-second-order models, respectively, to be the best-fitting models. Intraparticle diffusion was also suggested to be involved in the adsorption process, along with some other rate-controlling steps. Acknowledgement Zaharaddeen N. Garba would like to express his gratitude to the Tertiary Education Trust Fund and Ahmadu Bello University , Zaria, Nigeria for the study fellowship offered to him.
|
[
"AGARRY",
"AHMAD",
"ALSLAIBI",
"ASHRAFI",
"CAO",
"CHENG",
"FARGHALI",
"FREUNDLICH",
"GARBA",
"GARBA",
"GARBA",
"GARBA",
"GARBA",
"GARBA",
"GOMEZPACHECO",
"GUTHA",
"HO",
"KHAVIDAKI",
"KOBYA",
"KRISHNAN",
"KULA",
"KULKARNI",
"LAGERGREN",
"LANGMUIR",
"MARTINS",
"MEKATEL",
"MOHAMMADI",
"MOHAN",
"MONDAL",
"OZDES",
"SADAF",
"SAHU",
"SERENCAM",
"SHIRZADSIBONI",
"SUBBAIAH",
"TAN",
"TSAI",
"TUZEN",
"VERLICCHI",
"WEBER"
] |
591a83b34d4c447693a35880470dcb4e_Prevalence profile and associations of cognitive impairment in Ugandan first-episode psychosis patie_10.1016_j.scog.2021.100234.xml
|
Prevalence, profile and associations of cognitive impairment in Ugandan first-episode psychosis patients
|
[
"Mwesiga, Emmanuel K.",
"Robbins, Reuben",
"Akena, Dickens",
"Koen, Nastassja",
"Nakku, Juliet",
"Nakasujja, Noeline",
"Stein, Dan J."
] |
Introduction
The MATRICS consensus cognitive battery (MCCB) is the gold standard for neuropsychological assessment in psychotic disorders but is rarely used in low resource settings. This study used the MCCB to determine the prevalence, profile and associations of various exposures with cognitive impairment in Ugandan first-episode psychosis patients.
Methods
Patients and matched healthy controls were recruited at Butabika Hospital in Uganda. Clinical variables were first collated, and after the resolution of psychotic symptoms, a neuropsychological assessment of seven cognitive domains was performed using the MCCB. Cognitive impairment was defined as two standard deviations (SD) below the mean in one domain or 1SD below the mean in two domains. Descriptive statistics determined the prevalence and profile of impairment while regression models determined the association between various exposures with cognitive scores while controlling for age, sex and education.
Results
Neuropsychological assessment with the MCCB found the burden of cognitive impairment in first-episode psychosis patients five times that of healthy controls. The visual learning and memory domain was most impaired in first-episode psychosis patients, while it was the working memory domain for the healthy controls. Increased age was associated with impairment in the domains of the speed of processing (p < 0.001) and visual learning and memory (p = 0.001). Cassava-rich diets and previous alternative and complementary therapy use were negatively associated with impairment in the visual learning (p = 0.04) and attention/vigilance domains (p = 0.012), respectively. There were no significant associations between sex, history of childhood trauma, or illness severity with any cognitive domain.
Conclusion
A significant burden of cognitive impairment in Ugandan first-episode psychosis patients is consistent with prior data from other contexts. However, the profile of and risk factors for impairment differ from that described in such work. Therefore, interventions to reduce cognitive impairment in FEP patients specific to this setting, including dietary modifications, are required.
|
1 Introduction The MATRICS consensus cognitive battery (MCCB) has been suggested as the gold standard for neuropsychological assessment in patients with psychotic disorders ( Green et al., 2004 ; Nuechterlein et al., 2004 ). Developed during the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) initiative in 2002, the MCCB assesses impairment in the seven cognitive domains of i) working memory, ii) attention/vigilance, iii) verbal learning and memory, iv) visual learning and memory, v) reasoning and problem-solving, vi) information processing speed, and vii) social cognition ( Kern et al., 2007 ; Nuechterlein et al., 2008 ). In psychosis populations from high-income countries (HIC) where the MCCB has been used, the prevalence of cognitive impairment has been found to range from 36% to 78%, primarily in the domains of processing speed and verbal learning and memory ( Lystad et al., 2014 ; Mesholam-Gately et al., 2009 ). Differences as large as two standard deviations (2SD) exist between the mean domain scores of first-episode psychosis (FEP) patients and those of healthy controls ( Green et al., 2019 ). Cognitive impairment is also associated with various clinical, environmental, and sociodemographic variables, and these associations differ across the seven domains. For example, older age has been associated with impairment in the visual learning and speed of processing domains, while impairment in reasoning and problem-solving and in working memory has been associated with female sex ( Lee et al., 2020 ; Rodriguez-Jimenez et al., 2015 ; Atake et al., 2018 ; Li et al., 2019 ; Navarra-Ventura et al., 2018 ). There is also an association between childhood trauma and impairment in the verbal learning and memory domain ( Ayesa-Arriola et al., 2020 ). 
Long durations of untreated psychosis have been associated with impairment in selected cognitive domains in patients from HIC, but primarily among chronically medicated rather than first-episode psychosis patients ( Stone et al., 2020 ; Bora et al., 2018 ). There is limited research on the prevalence, profile and associations of cognitive impairment in FEP patients from low- and middle-income countries (LMICs) ( Green, 2016 ; Vinogradov, 2019 ; Kline et al., 2019 ). It is crucial to determine how FEP patients compare with healthy controls when assessed with the MCCB in LMICs, as this prevents wrongly assigning cognitive impairment ( Reichenberg, 2010 ). Clinical, environmental, and sociodemographic exposures also differ in low-resource settings, yet evidence on how they are associated with cognitive impairment remains limited. For example, previous work and education history may be the best indicators of a patient's premorbid level of cognitive functioning when determining whether there is a current decline, yet their association with cognitive impairment has not been well described in low-resource settings ( Stone et al., 2020 ; Stone et al., 2016 ). The association between cognitive impairment and clinical characteristics that are more prevalent in FEP patients from low-resource settings, such as longer duration of untreated psychosis (DUP), higher rates of childhood trauma, and greater psychosis severity, remains unclear ( Aas et al., 2013 ; Kilian et al., 2018 ; Lezak et al., 2004 ; Fawzi et al., 2013 ; Hecker et al., 2015 ). Diet is closely associated with cognitive function, yet the association of different dietary patterns, such as carbohydrate-rich diets, with cognitive impairment has not been examined ( Beilharz et al., 2015 ; Jakobsen et al., 2018 ). Finally, many FEP patients use alternative and complementary therapies before presenting to care, and understanding whether these therapies are associated with cognitive impairment is essential ( Woolhouse, 2007 ). 
To address these gaps in the current literature, we investigated the prevalence, profile and clinical variables associated with cognitive impairment in FEP patients from a resource-limited setting. These findings may be crucial in developing interventions for cognitive impairment, which is a more significant driver of psychosis disease burden than positive, negative or affective symptoms ( Vigo et al., 2016 ; Mihaljević-Peleš et al., 2019 ). 2 Methods 2.1 Study design, setting and participants These have been previously described ( Mwesiga et al., 2021 ). Briefly, this was a cross-sectional study undertaken at the National Psychiatric Mental referral hospital in Uganda (Butabika Hospital). The participants were in-patients aged 18–60 years with a confirmed first episode of psychosis. Additional inclusion criteria included never having been treated with antipsychotic medication, or having been on medication for less than six weeks. The six-week cut-off (as opposed to twelve weeks) was informed by prior evidence that untreated psychosis may resolve more quickly in low- and middle-income countries (LMICs) than in high-income countries ( Chiliza et al., 2012 ; Rangaswamy et al., 2012 ; Kaminga et al., 2018 ; Emsley et al., 2006 ). A cut-off of 18 years was applied to mitigate the challenges of neuropsychological assessment in adolescents versus adults. In Uganda, patients older than 60 are deemed elderly, and these individuals were excluded from participation to eliminate the potential effects of normal aging and dementia ( UBOS, 2012 ). In addition, patients with HIV/AIDS, syphilis or substance use were excluded from participation, as these are common clinical presentations in this setting and may each also be associated with cognitive impairment ( Nakasujja et al., 2012 ; Sacktor et al., 2005 ; Nakimuli-Mpungu et al., 2006 ). 
Age, sex and education matched healthy controls were recruited from the outpatient dental department at Butabika Hospital and assessed on the day of recruitment to generate normative values for cognitive function in this population. Inclusion criteria for control participants were 1) no evidence of psychosis or substance use, as assessed by the Mini International Neuropsychiatric Interview (MINI), and 2) no evidence of HIV/AIDS or syphilis. 2.2 Instruments The consent forms, sociodemographic questionnaire, Mini International Neuropsychiatric Interview (MINI) version 7.0 ( Sheehan et al., 2010 ), Positive and Negative Signs and Symptoms of Schizophrenia (PANSS) ( Kay et al., 1987 ) and Childhood trauma questionnaire (CTQ) ( Bernstein et al., 1998 ) used in this study were previously described ( Mwesiga et al., 2021 ). The MATRICS consensus cognitive battery is the gold standard for assessment of cognition in patients with psychosis ( Nuechterlein et al., 2008 ). It assesses for cognitive impairment in the seven cognitive domains of i) working memory, ii) attention/vigilance, iii) verbal learning and memory, iv) visual learning and memory, v) reasoning and problem-solving vi) information processing speed, and vii) social cognition ( Green et al., 2004 ; Nuechterlein et al., 2008 ). The complete battery can be completed in approximately 90 min, excluding the time needed to score. The neuropsychological assessment procedure with the MCCB was performed as previously described Nuechterlein et al. (2008) . 
Briefly, the MCCB comprises ten neuropsychological tests: the Trail Making Test (TMT): Part A; Brief Assessment of Cognition in Schizophrenia (BACS): symbol coding; Hopkins Verbal Learning Test-Revised (HVLT-R); Wechsler Memory Scale-Third Edition (WMS-III): Spatial Span; Letter-Number Span (LNS); Brief Visuospatial Memory Test-Revised; Category Fluency: Animal Naming; Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT): Managing Emotions (D & H); and the Continuous Performance Test-Identical Pairs (CPT-IP), MATRICS International Version 2. The ten neuropsychological tests of the MCCB are administered in the order above to ensure that 1) patients start the battery with less cognitively taxing activities that are relatively straightforward to understand, facilitating optimal test-taking performance; and 2) verbal measures alternate with nonverbal ones, aiming to alleviate processing burden and minimize interference among tests ( Nuechterlein et al., 2008 ). The MCCB was administered by two experienced clinical psychologists currently pursuing their doctoral studies. The MCCB was not translated into any Ugandan local language. However, adjustments were needed in the administration of the MCCB tests, since the study sample was drawn from a mixed population of different ethnic groups and age ranges. The education level of the population was also varied, with the majority having fewer than 9 years of formal education. The language of choice in most cases was one of the many local dialects, even among individuals with a relatively high education. Most participants had not used a computer before, which was required for administration of the CPT-IP. The participants had never undergone psychological testing, and a single reading of the MCCB instructions was often not enough for them to understand the test expectations. 
Had we used the age and educational criteria of the MCCB, we would have needed to exclude more than 75% of the desired population, giving us an unrepresentative sample and perpetuating the theoretical and ethical problem of excluding the most vulnerable individuals. We therefore administered the cognitive battery to all consenting eligible participants with first-episode psychosis but revised the administration procedures to improve the validity of test scores. The formal scored portion of each test remained unchanged, but test administrators repeated or rephrased instructions for each test up to five times until respondents understood the test expectations. If respondents were unsure, the administrators asked them to describe the test expectations before beginning the formal test. We also added a training maze before administering the NAB Mazes, and we trained respondents to use a computer mouse before starting the CPT-IP. Despite these considerations, several patients had difficulty understanding the test requirements or were otherwise unable to complete some of the tests. Standard practice when scoring the MCCB is to use the worst score (i.e., 300 s for the TMT-A and 0 points for all other tests); however, given our sample's demographic differences from participants in almost all previous studies that used the MCCB, it was important to determine whether respondents understood the assessment tasks. We therefore developed test-specific rules to ensure respondents understood the test expectations, with the distinction made by the interviewer's objective assessment of each respondent's comprehension of the required task and ability to complete it. For tests that required multiple trials (i.e., the HVLT-R, WMS-III Spatial Span, BVMT-R, and CPT-IP), each trial was preceded by an initial simplified instruction. Scoring of the subtests was not adjusted. 
2.3 Research procedure First-episode psychosis patients were enrolled into the observational study. After obtaining informed consent, the diagnosis was confirmed using the MINI. Sociodemographic information was compiled using a standard questionnaire, and illness severity was assessed using the PANSS. Patients were then followed up weekly with the PANSS until resolution of psychotic symptoms, at which point the MCCB was administered. In addition, previous traumatic experiences were assessed using the CTQ. Consenting healthy controls (matched for age and level of education) were also recruited from the hospital's dental wards and assessed using the MCCB. 2.4 Data analysis Data were analyzed using Stata version 14 ( Stata, 2018 ). Raw scores of the ten tests of the MCCB were first tested for normality. Scores of the mazes subtest of the Neuropsychological Assessment Battery (NAB Mazes) and the Trail Making Test deviated from normality and were log-transformed. The Trail Making Test was also reverse-scored, so that lower scores indicated poorer cognitive function. These raw scores were then standardized using the means and standard deviations of the healthy controls after matching for age, sex and level of education. Next, composite scores were generated for each of the seven domains by summing the standardized scores of the individual tests per domain. Standardized test scores of the Trail Making Test (TMT): Part A, Brief Assessment of Cognition in Schizophrenia (BACS): symbol coding, and Category Fluency: Animal Naming were combined to represent the speed of processing domain. Hopkins Verbal Learning Test-Revised (HVLT-R) sum scores represented the verbal learning and memory domain. Wechsler Memory Scale-Third Edition (WMS-III): Spatial Span and the Letter-Number Span (LNS) were combined for the working memory domain. Brief Visuospatial Memory Test-Revised sum scores represented the visual learning and memory domain. 
Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT): Managing Emotions (D & H) represented the social cognition domain, and the Continuous Performance Test-Identical Pairs (CPT-IP), MATRICS International Version 2, represented the attention/vigilance domain. Cognitive impairment was classified as a categorical variable to determine the prevalence and profile of impairment. For the burden of general cognitive impairment, participants with mean domain scores two standard deviations (SD) below the mean in one domain of the MCCB, or 1SD below the mean in two or more domains of the MCCB, were classified as impaired ( Revell et al., 2015 ). Standardized scores 2SD below the mean signified cognitive impairment in a specific domain. Duration of untreated psychosis was calculated by subtracting the age at which symptoms first presented from the patient's age at admission. Participants with diagnoses of schizophrenia, schizophreniform and schizoaffective disorder on the MINI were classified as having non-affective psychosis. Patients with diagnoses of bipolar disorder (irrespective of phase or type) and depression with psychotic features were classified as having affective psychosis. All other psychosis diagnoses were classified as non-affective psychoses. Descriptive statistics were employed for the prevalence and profile of both general cognitive impairment and impairment in particular cognitive domains. In determining the mean differences between cases and healthy controls and the associated factors, cognitive function was treated as a continuous variable (higher scores implying better cognitive function). Student t-tests determined whether mean cognitive domain scores differed between the FEP patients and healthy controls. Regression coefficients were calculated for the associations between various clinical variables and the seven standardized domain scores in the FEP patients while controlling for sex and level of education. 
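The control-based standardization and the impairment rule described above can be sketched as follows; the domain names and scores are illustrative, not study data.

```python
import statistics

# Minimal sketch of the classification rule described above: domain scores are
# z-scored against healthy-control norms, and a participant is classified as
# impaired if any domain falls 2 SD below the control mean, or two or more
# domains fall 1 SD below it. Names and values are illustrative.
def z_score(score, control_scores):
    mu = statistics.mean(control_scores)
    sd = statistics.stdev(control_scores)
    return (score - mu) / sd

def is_impaired(z_by_domain):
    severe = sum(1 for z in z_by_domain.values() if z <= -2.0)
    mild = sum(1 for z in z_by_domain.values() if z <= -1.0)  # includes severe
    return severe >= 1 or mild >= 2

zs = {"visual_learning": -2.3, "working_memory": -0.4, "social_cognition": 0.1}
print(is_impaired(zs))  # True: one domain is 2 SD below the control mean
```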
Because of multiple comparisons, a Bonferroni-adjusted significance level was calculated to account for the increased possibility of type-1 errors. A significance level of 0.05 was used for all analyses.

3 Results

After data cleaning, the final sample included 129 FEP patients and 52 healthy controls. The median age of the sample was 29 years (IQR 22–34). Most participants were female [108/181, 64%], single [75/181, 45%] and in non-formal employment [64/181, 38%]. Among the FEP patients, the median age at first seeking help was 26 years [IQR 21–32]. The mean time between the onset of symptoms and presentation to the hospital was 0.932 years [SD 2.798, range 0–18]. Most participants [76/120; 63%] presented with symptoms for the first time, while 95/113 (84%) presented to a hospital for the first time. Those who had previously presented to a hospital had largely used the regional referral hospitals [8/13, 61.5%]. Approximately 13% of FEP patients had previously received antipsychotic medication for less than six weeks' duration. Most participants ate diets rich in legumes (93.3%) and grains (90.8%) in the week before admission. Most participants reported no history of previous trauma in all the domains. The proportions of participants reporting no prior history of childhood trauma in the different domains were physical neglect (36.0%), emotional neglect (42.7%), sexual abuse (72.0%), physical abuse (58.7%), and emotional abuse (46.7%). Thirty-six percent of participants had scores suggestive of underreporting traumatic events ( Table 1 ).

3.1 Comparison of the burden of cognitive impairment in patients and healthy controls

We found that 80/129 (62%) of the FEP patients and 6/52 (12%) of the healthy controls had a general cognitive impairment.
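The roughly five-fold gap implied by these two proportions is a simple prevalence ratio; a sketch using the counts reported above:

```python
def prevalence_ratio(a_events, a_n, b_events, b_n):
    """Ratio of the impairment prevalence in one group to that in another."""
    return (a_events / a_n) / (b_events / b_n)

# 80/129 FEP patients vs. 6/52 healthy controls with a general impairment
pr = prevalence_ratio(80, 129, 6, 52)   # about 5.4-fold
```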
There were no statistically significant differences in the proportions of participants with a general cognitive impairment across sex [prevalence ratio (PR) = 1.15 (p = 0.79)], age [PR = 0.46 (p = 0.19)] or diagnosis (affective versus non-affective psychosis) [PR = 1.17 (p = 0.76)].

3.2 The burden of impairment in specific cognitive domains

FEP patients were most often impaired (2 SD below the mean) in the visual learning and memory domain [38% (CI 30.0–47.5)], while healthy controls were most often impaired in the working memory domain [6.1% (CI 1.9–17.8)]. Conversely, the social cognition domain was least impaired in the FEP patients [17% (CI 10.9–24.6)]. No healthy controls were classified as impaired in visual learning and memory or verbal learning and memory ( Fig. 1 ).

3.3 Profile of impairment

Statistically significant differences in mean cognitive scores between FEP patients and healthy controls were found across all domains except social cognition. The largest difference was in the reasoning and problem-solving domain, a statistically significant decrease of 1.834 (p ≤ 0.0001). By contrast, the difference in mean scores between cases and controls was smallest in the social cognition domain, a non-significant decrease of 0.189 (p = 0.62). Other comparisons are shown in Table 2 and Fig. 2 .

3.4 Clinical variables associated with impairment in different cognitive domains

For standardized z scores in the visual learning and memory domain (the most impaired domain), significant associations were found with increased age (p = 0.001), more years of education (p = 0.042), being married (p = 0.016), Hamitic ethnicity (p = 0.017), living with non-family members (p = 0.001), a cassava diet in the week before admission (p = 0.041) and a non-affective psychosis diagnosis (p = 0.026).
In addition, significant associations were found for the sociodemographic variables of living with a family member (p = 0.001) and a wage income for the household's primary income earner (p = 0.024) with the reasoning and problem-solving domain. There were no significant associations between trauma exposures and standardized z scores across the seven domains, except for a positive association between the attention/vigilance domain and the physical abuse trauma domain (p = 0.024). Complete results for the visual learning and memory domain are shown in Table 3 . Raw tables for the associations between various clinical variables and the other cognitive domains are in the supplementary files at the end of this article.

4 Discussion

The main findings were: 1) the prevalence of cognitive impairment in FEP patients was five times higher than in age-, sex- and level of education-matched healthy controls; 2) the most frequently impaired domains were visual learning and memory in FEP patients and working memory in healthy controls; 3) the strength of associations differed across the seven cognitive domains. Increased age was associated with impairment in the speed of processing and visual learning and memory domains. Having a wage income was associated with higher cognitive scores in all domains except the social cognition domain. Cassava-rich diets and previous use of alternative and complementary therapies were negatively associated with cognitive scores in the visual learning and attention/vigilance domains, respectively. There were no significant associations of sex, history of childhood trauma, or illness severity with any cognitive domain.
4.1 Comparison of cognitive impairment between patients and healthy controls

In line with prior work conducted in diverse settings, the prevalence of cognitive impairment at the first episode of psychosis in our study sample was higher than in the healthy controls ( Lystad et al., 2014 ; McCleery et al., 2014 ; Rodriguez-Jimenez et al., 2019 ; Holmén et al., 2010 ). However, unlike studies reporting the greatest burden in the attention/vigilance and working memory domains, the visual learning and memory domain was the most frequently impaired in our first-episode psychosis sample ( Braw et al., 2008 ; Rhinewine et al., 2005 ). We found the largest difference in mean scores in the reasoning and problem-solving domain, unlike studies from HIC, which reported the largest mean differences in the working memory domain ( Aas et al., 2014 ). One explanation is that, because reasoning and problem solving is a higher-order domain, performance in this sphere often depends on lower-order domains such as processing speed ( Kalkstein et al., 2010 ). Thus, the cumulative effect of impairment in lower-order domains may be responsible for this finding. In our study, the prevalence of impairment in the social cognition domain among FEP patients was relatively low at 17%. Further, no statistically significant differences were evident between the mean scores of FEP patients and healthy controls in this domain. This finding is consistent with work from other LMICs suggesting poor psychometric properties of the MSCEIT in such settings ( Kurtz et al., 2018 ; Mehta et al., 2011 ; Emsley et al., 2005 ; Lim et al., 2020 ; Stone et al., 2020 ). No healthy controls were categorized as impaired in visual learning and memory or verbal learning and memory. There are few studies on this domain in LMICs, with a limited number of validated tools for assessing patients with psychotic disorders ( Vingerhoets et al., 2013 ).
Thus, further research on neuropsychological testing of the visual learning and memory domain in first-episode psychosis patients is warranted.

4.2 Sociodemographic risk factors for cognitive impairment

A 48% increase in standardized scores in the visual learning and memory domain was observed for every unit increase in age. For the speed of processing domain, a unit increase in age was associated with a 57% increase in cognitive scores. This age-associated difference is consistent with literature from high-income countries ( Lee et al., 2020 ; Rodriguez-Jimenez et al., 2015 ; Rajji et al., 2009 ; Atake et al., 2018 ). The lack of associations between sex and impairment in any cognitive domain differs from the HIC literature, which often highlights considerable heterogeneity in the associations of cognitive impairment in men and women ( Mendrek and Mancini-Marïe, 2016 ). For example, in Hong Kong, the reasoning and problem solving and working memory domains were associated with female sex, while processing speed was associated with male sex. In addition, both sexes were associated with impairment in the attention/vigilance domain, and negative symptoms mediated the relationship ( Li et al., 2019 ; Navarra-Ventura et al., 2018 ; Zhang et al., 2017 ). These sex differences have been linked to a disturbance of typical sexual dimorphism (males having larger ventricles and smaller frontal lobes) due to hormonal and immunological factors ( Mendrek and Mancini-Marïe, 2016 ). In all domains except the social cognition domain, having a wage income was associated with higher cognitive scores, in keeping with literature from elsewhere ( Tan, 2009 ). However, the direction of this association needs further review, as it is unclear from this study design whether better cognitive function secures employment through better job opportunities and income, or whether employment and income are protective of cognitive function ( Llerena et al., 2018 ; McGurk et al., 2009 ).
4.3 Diet and cognitive impairment

There was a positive association between meat- and legume-rich diets and impairment in the working memory domain, consistent with other published studies ( Wonodi and Schwarcz, 2010 ; Cao et al., 2021 ). Among 195 Chinese patients with schizophrenia, kynurenic acid (KYNA) was associated with worse performance in the working memory domain, while 5-hydroxyindole was associated with improved performance ( Huang et al., 2021 ). Meat and legumes are rich in tryptophan, whose metabolites include KYNA and 5-hydroxyindole. Going forward, better quantification of these metabolites in Ugandan FEP patients may elucidate the underlying mechanisms of this association. Fruit-rich diets were not associated with impairment in any cognitive domain. The high flavonoid content of citrus may be associated with improved cognitive functioning in schizophrenia patients from other settings ( Bruno et al., 2017 ; Pontifex et al., 2021 ). Previous research has shown that different fruits contain different quantities of flavonoids ( Stangeland et al., 2009 ). Flavonoid content also differs depending on the size of the portion and the part of the fruit (peelings of pomegranate, for example, have more) ( Shams Ardekani et al., 2011 ). The flavonoid content of these fruits could be quantified in future studies. Cassava was associated with an increased risk of cognitive impairment in the visual learning domain. This association has been attributed to thiocyanate toxicity from poorly processed cassava ( Boivin et al., 2017 ; Rivadeneyra-Domínguez and Rodríguez-Landa, 2020 ). To our knowledge, this is the first study to highlight an association between cassava diets and a specific cognitive domain in FEP patients. Given that cassava is a staple diet in Uganda, further work examining this association is required.
4.4 Trauma and its association with cognitive impairment

This study found only one association, between the attention/vigilance cognitive domain and the physical abuse trauma domain. One possibility is underreporting of traumatic experiences ( Church et al., 2017 ; Read et al., 2001 ; Radhakrishnan et al., 2017 ; Mall et al., 2020 ). Previous studies have associated childhood trauma with cognitive impairment in the working memory and reasoning and problem-solving domains. The few studies that have highlighted an association in the attention/vigilance domain reported an association with physical neglect, not physical abuse ( Mørkved et al., 2020 ; Olivier et al., 2015 ).

4.5 Clinical risk factors for cognitive impairment

Shorter DUP was correlated with greater impairment in the attention/vigilance domain, which differs from other studies that correlated longer DUP with greater impairment in the reasoning and problem-solving and verbal learning and memory domains ( Fraguas et al., 2014 ; Bora et al., 2018 ; Lappin et al., 2007 ; Stone et al., 2020 ). This finding may support evidence that the attention/vigilance domain is impaired earliest in psychotic disorders, as shown in other settings ( Hou et al., 2016 ). Longitudinal studies and studies in the psychosis prodrome are recommended. To our knowledge, this is the first study to report a negative association between previous use of alternative and traditional therapies and cognitive impairment in the attention/vigilance domain. Previous work in Uganda showed that patients with psychotic disorders are treated with herbs and rituals ( Abbo et al., 2012 ; Abbo et al., 2019 ). However, it is unclear whether the herbs used in alternative and complementary therapies are associated with cognitive impairment. Recently, a study in China highlighted an association between oxidative damage and cognitive impairment in FEP patients treated with herbal remedies ( Xie et al., 2019 ).
The nature of the herbs used needs assessment to determine whether they cause oxidative damage and cognitive impairment. It is also vital to document previous use of alternative therapies, and the medication provided, during neuropsychological assessment. Significant associations were observed between non-affective psychoses and impairment in the visual learning and memory and working memory domains. However, among 64 Croatian patients, the association between non-affective psychoses and impairment in the working memory domain was not replicated ( Žakić Milas and Milas, 2019 ). This finding might be due to a greater burden of affective psychoses in high-income countries than in low-income countries ( Rodriguez-Jimenez et al., 2015 ; Rodriguez-Jimenez et al., 2019 ; Reichenberg et al., 2009 ; McCleery and Nuechterlein, 2019 ; Mwesiga et al., 2020 ).

4.6 Strengths and limitations

Several limitations should be borne in mind when interpreting the current study findings. First, the cross-sectional study design does not allow a determination of causality. Second, the descriptions of the duration of untreated psychosis were prone to recall bias. Future studies should use standardized instruments like the Nottingham Onset Schedule for the duration of untreated psychosis (NOS-DUP) ( Singh et al., 2005 ) and the Interview for the Retrospective Assessment of the Onset of Schizophrenia (IRAOS) ( Mwesiga et al., 2019 ; Haefner and Maurer, 2006 ). We also used the Childhood Trauma Questionnaire, which assesses trauma retrospectively and is prone to recall bias ( Aas et al., 2011 ; Charak et al., 2017 ; Kilian et al., 2018 ). Third, diet was not assessed using standardized tools like the 24-hour dietary recall or cluster analysis of dietary patterns, which may have led to misclassification bias in defining the exposure ( Hu, 2002 ; Reedy et al., 2010 ; Thompson and Subar, 2001 ).
Also, patients with psychotic disorders are at higher risk of lifestyle disorders like diabetes and hypertension, which are themselves risk factors for cognitive impairment ( Pillinger et al., 2017 ). This increased risk is thought to be due to patients with psychotic disorders often preferring starch-rich diets, and to an underlying genetic risk ( Perry et al., 2016 ). These need to be assessed in future studies. These limitations notwithstanding, this is one of the few studies in Africa to use the gold standard for neuropsychological assessment in FEP patients ( Kilian et al., 2018 ). The literature on cognitive impairment has primarily focused on schizophrenia, so the inclusion of both affective and non-affective psychoses highlights an important field for future study. This study also identified factors associated with cognitive impairment that are specific to this setting, such as dietary patterns, early childhood trauma and previous use of alternative therapies. There is an exciting basis for future studies on cognition and diet in patients with psychotic disorders from low-resource settings. Future studies determining the role of various effect modifiers on the association between diet and cognitive impairment are required ( Chen et al., 2016 ; Bioque et al., 2021 ). First, diet is one of the known exposures that can change genetic expression ( Bottero and Potashkin, 2020 ). Second, the microbiome and gut-brain axis may modify the association between diet and cognitive impairment ( Luca et al., 2020 ).

5 Conclusion

Consistent with literature from high-income countries, there is a significantly greater burden of cognitive impairment in Ugandan first-episode psychosis patients than in their healthy controls. However, in this setting, the prevalence, profile and clinical variables associated with cognitive impairment differ from those reported in the HIC literature.
Therefore, neuropsychological assessment of first-episode psychosis patients in this setting must consider the domains impaired and the exposures associated with impairment when developing interventions to reduce the burden of cognitive impairment. Interventions to improve cognitive function, such as atypical antipsychotics and cognitive remediation, should be undertaken ( Houthoofd et al., 2008 ; Meltzer and McGurk, 1999 ; Linssen et al., 2014 ; Koola et al., 2017 ; Fountoulakis, 2020 ; Bowie et al., 2020 ; Zaytseva et al., 2013 ; Revell et al., 2015 ). Dietary interventions deserve study as a cheap means of reducing cognitive impairment.

CRediT authorship contribution statement

Conceptualization: EKM, DA, NK, NN, DJS; Methodology: EKM, RR; Validation: EKM, RR; Formal analysis: EKM, RR; Investigation: EKM, JN; Data curation: EKM; Writing original draft: EKM, RR, DA, NK, JN, NN, DJS; Visualization: EKM; Supervision: EKM, DA, NK, NN, DJS; Project Administration: EKM, JN, NN; Funding acquisition: DA, DJS.

Declaration of competing interest

The authors declare no conflict of interest.

Acknowledgements

We are indebted to the participants who consented to participate in the study. Ms. Joy Louise Gumikiriza and Ms. Shubaya Kasule are the clinical psychologists who administered the MCCB.

Ethics approval and consent to participate

The study obtained ethical approval from the Human Research Ethics Committee (HREC) of the Faculty of Health Sciences, University of Cape Town (UCT) (#574/2017), the Ugandan National Council of Science and Technology (UNCST) (#HS142ES) and the School of Medicine Research and Ethics Committee (SOMREC) (#REC REF 2017-153) of the College of Health Sciences, Makerere University. Institutional permission to carry out the study was obtained from the administration of Butabika Hospital. Patients were reimbursed $3 for their time, whether they completed the entire study assessment or withdrew consent.
Data and material availability

All data generated or analyzed during this study are available from the corresponding author.

Funding

This work was supported by the Neuropsychiatric Genetics in African Populations (NeuroGAP) Study ( Stevenson et al., 2019 ). The content of the protocol is solely the responsibility of the authors, and the funder had no role in the development of the protocol.

Appendix A Supplementary data

Supplementary tables. Supplementary data to this article can be found online at https://doi.org/10.1016/j.scog.2021.100234 .
|
[
"AAS",
"AAS",
"AAS",
"ABBO",
"ABBO",
"STEVENSON",
"ATAKE",
"AYESAARRIOLA",
"BEILHARZ",
"BERNSTEIN",
"BIOQUE",
"BOIVIN",
"BORA",
"BOTTERO",
"BOWIE",
"BRAW",
"BRUNO",
"CAO",
"CHARAK",
"CHEN",
"CHILIZA",
"CHURCH",
"EMSLEY",
"EMSLEY",
"FAWZI",
"FOUNTOULAKIS",
"FRAGUAS",
"GREEN",
"GREEN",
"GREEN",
"HAEFNER",
"HECKER",
"HOLMEN",
"HOU",
"HOUTHOOFD",
"HU",
"HUANG",
"JAKOBSEN",
"KALKSTEIN",
"KAMINGA",
"KAY",
"KERN",
"KILIAN",
"KLINE",
"KOOLA",
"KURTZ",
"LAPPIN",
"LEE",
"LEZAK",
"LI",
"LIM",
"LINSSEN",
"LLERENA",
"LUCA",
"LYSTAD",
"MALL",
"MCCLEERY",
"MCCLEERY",
"MCGURK",
"MEHTA",
"MELTZER",
"MENDREK",
"MESHOLAMGATELY",
"MIHALJEVICPELES",
"MORKVED",
"MWESIGA",
"MWESIGA",
"MWESIGA",
"NAKASUJJA",
"NAKIMULIMPUNGU",
"NAVARRAVENTURA",
"NUECHTERLEIN",
"NUECHTERLEIN",
"OLIVIER",
"PERRY",
"PILLINGER",
"PONTIFEX",
"RADHAKRISHNAN",
"RAJJI",
"RANGASWAMY",
"READ",
"REEDY",
"REICHENBERG",
"REICHENBERG",
"REVELL",
"RHINEWINE",
"RIVADENEYRADOMINGUEZ",
"RODRIGUEZJIMENEZ",
"RODRIGUEZJIMENEZ",
"SACKTOR",
"SHAMSARDEKANI",
"SHEEHAN",
"SINGH",
"STANGELAND",
"STATA",
"STONE",
"STONE",
"TAN",
"THOMPSON",
"UBOS",
"VIGO",
"VINGERHOETS",
"VINOGRADOV",
"WONODI",
"WOOLHOUSE",
"XIE",
"ZAKICMILAS",
"ZAYTSEVA",
"ZHANG"
] |
d79d3fbb87054501aa42c094963edd4f_Clinical phenotypes of patients with non-valvular atrial fibrillation as defined by a cluster analys_10.1016_j.ijcha.2021.100885.xml
|
Clinical phenotypes of patients with non-valvular atrial fibrillation as defined by a cluster analysis: A report from the J-RHYTHM registry
|
[
"Watanabe, Eiichi",
"Inoue, Hiroshi",
"Atarashi, Hirotsugu",
"Okumura, Ken",
"Yamashita, Takeshi",
"Kodani, Eitaro",
"Kiyono, Ken",
"Origasa, Hideki"
] |
Background
Atrial fibrillation (AF) is a heterogeneous condition caused by various underlying disorders and comorbidities. A cluster analysis is a statistical technique that attempts to group populations by shared traits. Applied to AF, it could be useful in classifying the variables and complex presentations of AF into phenotypes of coherent, more tractable subpopulations.
Objectives
This study aimed to characterize the clinical phenotypes of AF using a national AF patient registry using a cluster analysis.
Methods
We used data of an observational cohort that included 7406 patients with non-valvular AF enrolled from 158 sites participating in a nationwide AF registry (J-RHYTHM). The endpoints analyzed were all-cause mortality, thromboembolisms, and major bleeding.
Results
The optimal number of clusters was found to be 4 based on 40 characteristics. They were those with (1) a younger age and low rate of comorbidities (n = 1876), (2) a high rate of hypertension (n = 4579), (3) high bleeding risk (n = 302), and (4) prior coronary artery disease and other atherosclerotic comorbidities (n = 649). The patients in the younger/low comorbidity cluster demonstrated the lowest risk for all 3 endpoints. The atherosclerotic comorbidity cluster had significantly higher adjusted risks of total mortality (odds ratio [OR], 3.70; 95% confidence interval [CI], 2.37–5.80) and major bleeding (OR, 5.19; 95% CI, 2.58–10.9) than the younger/low comorbidity cluster.
Conclusions
A cluster analysis identified 4 distinct groups of non-valvular AF patients with different clinical characteristics and outcomes. Awareness of these groupings may lead to a differentiated patient management for AF.
|
1 Introduction

Atrial fibrillation (AF) poses a significant public health burden and is caused by underlying processes and disorders that produce a very heterogeneous patient population [1] . A large variety of risk factors for non-valvular AF have been identified, including age, male sex, hypertension, diabetes, obesity, sleep apnea, heart failure, and coronary artery disease [1] . Racial differences have also been reported to affect the incidence of AF and the risk of bleeding from oral anticoagulants [2,3] . Currently, AF is classified based on the symptoms or duration of the AF episodes (e.g., paroxysmal, persistent, and permanent). Although this classification has several prognostic roles, we believe a more sophisticated classification of AF is highly desirable, not only to prevent strokes and bleeding events, but also to allow more individualized adjustment of rhythm- or rate-control therapy. A cluster analysis, an unsupervised data-driven approach, has been used in the cardiovascular realm [4–9] . It classifies subjects from heterogeneous populations into similar groups based on clinical information. Recent data on heart failure with a reduced (or preserved) ejection fraction indicate that clustering techniques applied to standard clinical features can classify patients into several different phenotypes (clusters) that exhibit different mortality, hospitalization rates, and responses to pharmacological therapy or exercise training [4–7] . A recent cluster analysis study using a prospective registry of AF patients in the US demonstrated an improvement in the phenotypic categorization of the disease [8] . That study, though unique and useful, lacked a significant Asian cohort. In this study, we set two objectives: (1) to perform a cluster analysis to identify clinically relevant phenotypes of AF using a prospective Japan-wide AF registry, and (2) to examine the phenotype-based clinical outcomes.
2 Methods

2.1 Data source and study population

The study design and the main outcome analysis of the J-RHYTHM Registry have been reported elsewhere [10–12] . Briefly, the J-RHYTHM Registry is an observational, prospective cohort study that enrolled patients with AF between January 2009 and July 2009 at 158 sites in Japan. Eligible patients were those ≥20 years of age who had at least one episode of AF captured on a standard 12-lead electrocardiogram, who were able to provide informed consent, and who adhered to local follow-up. This post-hoc study included 7406 patients after excluding patients with mitral stenosis or a mechanical valve replacement (n = 410). Warfarin was used as the oral anticoagulation therapy because no direct oral anticoagulant was available when this registry was carried out. The study protocol conformed to the 1975 Declaration of Helsinki and was approved by the institutional review boards of the participating institutions. All patients gave their written informed consent.

2.2 Outcomes

The primary outcome was defined as all-cause mortality, thromboembolisms, or major bleeding. Thromboembolisms included ischemic strokes, transient ischemic attacks, and systemic embolisms. Major bleeding included intracranial hemorrhages, gastrointestinal bleeding, and other causes of bleeding requiring hospitalization. We defined an ischemic stroke as a sudden neurological deficit lasting >24 h, corresponding to a vascular territory in the absence of a primary hemorrhage, that was not explained by other causes such as trauma or an infection. The diagnosis of a stroke was made with computed tomography or magnetic resonance imaging. The patients were followed for 2 years, or until an endpoint, whichever occurred first. All analyses of the rates of the endpoints were based on the first event during the follow-up. A local investigator ascertained the events.
2.3 Definitions

The components of the CHA2DS2-VASc score [13] were congestive heart failure, hypertension, age ≥75 years (2 points), diabetes, strokes (2 points), vascular disease, age 65–74 years, and the sex category (female). With regard to the CHA2DS2-VASc score, we modified the "V" criterion to include coronary artery disease only, because no data were available regarding peripheral artery disease or aortic plaque. The components of the HAS-BLED bleeding risk score for major bleeding [14] were hypertension, abnormal renal/liver function (1 point each), strokes, a bleeding history or predisposition, a labile international normalized ratio (INR) (time in therapeutic range [TTR] < 60%), elderly (>65 years), and the use of drugs (antiplatelet agents and nonsteroidal anti-inflammatory drugs) or alcohol > 8 U/week (1 point each). Abnormal renal function was defined as chronic dialysis, renal transplantation, or a serum creatinine > 200 μmol/L. Abnormal liver function was defined as biochemical evidence of significant hepatic derangement (e.g., bilirubin > 2× the upper limit of normal, in association with aspartate aminotransferase/alanine aminotransferase > 3× the upper limit of normal). The TTR was determined by the method of Rosendaal et al. [15] . For this determination, the target INR level was set at 1.6–2.6 for patients aged 70 years or older and at 2.0–3.0 for patients aged younger than 70 years, according to the Japanese guidelines [16] .

2.4 Statistical analysis

The baseline variables of the patients are presented as numbers and frequencies or mean ± standard deviation (SD) values. Several variables had missing data, including the height (13.8%), body weight (13.1%), hemoglobin (11.5%), platelets (11.6%), creatinine (11.1%), creatinine clearance (11.1%), aspartate aminotransferase (11.1%), and alanine aminotransferase (11.1%).
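The Rosendaal method interpolates the INR linearly between consecutive measurements and counts the fraction of person-days spent inside the target range. A stdlib-only Python sketch of the idea, using the age-dependent Japanese target ranges stated above; the function names and the example INR series are illustrative:

```python
def rosendaal_ttr(days, inrs, low, high):
    """Time in therapeutic range: linearly interpolate the INR between
    consecutive measurements and count the fraction of person-days with
    low <= INR <= high (Rosendaal et al.)."""
    in_range = total = 0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        for step in range(span):
            inr = i0 + (i1 - i0) * step / span   # interpolated INR on this day
            total += 1
            in_range += low <= inr <= high
    return in_range / total

def target_range(age):
    """Target INR range per the Japanese guidelines cited above."""
    return (1.6, 2.6) if age >= 70 else (2.0, 3.0)

# Example: the INR rises from 1.0 to 3.0 over 10 days in a 60-year-old;
# the interpolated INR is within 2.0-3.0 from day 5 onward.
ttr = rosendaal_ttr([0, 10], [1.0, 3.0], *target_range(60))   # 0.5
```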
These numerical missing data were imputed with sequential regression multivariate imputation [17] . In this study, we used a hierarchical cluster analysis (Ward's method) based on the 40 data items recorded for each patient in the J-RHYTHM Registry, as listed in the Supplementary file (Appendix S1). We show the dendrogram, cubic clustering criterion, and constellation tree diagram used to estimate the number of likely clusters within our population (Supplementary file, Figs. S1–S3). Between-cluster comparisons were performed using analysis of variance or the χ2 test. To compare the outcomes between the clusters, Kaplan-Meier estimates with log-rank testing were applied to assess the equality of the survival distributions for each endpoint. A logistic regression model was used to test the association between the clusters and outcomes, and whether the type of AF (paroxysmal vs. non-paroxysmal [persistent or permanent]) was associated with the outcomes for each cluster and all patients. The models were adjusted for age and sex for all-cause death, for the CHA2DS2-VASc score for thromboembolisms, and for the HAS-BLED score for major bleeding. The odds ratios (ORs) for each cluster are presented with 95% confidence intervals (CIs). We used JMP 15 software (SAS Institute, USA) and R (R Foundation, Vienna, Austria) for the analyses, including the cluster analysis. A two-tailed p-value of <0.05 was considered significant.

3 Results

3.1 Clinical characteristics of the identified phenotypes

In the overall study population at baseline (n = 7406), the mean age was 70 ± 10 years, with 29.2% women and 100% Asian participants. A total of 6382 (86.1%) patients were taking warfarin, the mean CHA2DS2-VASc score was 2.8 ± 1.6, and the mean HAS-BLED score was 2.7 ± 1.2. The cluster analysis identified 4 clinical phenotypes, and Table 1 shows the clinical characteristics across them.
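Ward's criterion merges, at each step, the pair of clusters whose union least increases the within-cluster sum of squares. A greedy stdlib-only sketch of that idea follows; the registry analysis itself was run in JMP/R on 40 standardized variables, and the toy 2-D points here are purely illustrative:

```python
def ward_clusters(points, k):
    """Greedy agglomerative clustering under Ward's criterion: repeatedly
    merge the pair of clusters whose union least increases the
    within-cluster sum of squares, until k clusters remain."""
    clusters = [[p] for p in points]

    def centroid(c):
        return [sum(xs) / len(c) for xs in zip(*c)]

    def merge_cost(a, b):
        # Ward ESS increase: n_a * n_b / (n_a + n_b) * ||c_a - c_b||^2
        ca, cb = centroid(a), centroid(b)
        d2 = sum((x - y) ** 2 for x, y in zip(ca, cb))
        return len(a) * len(b) / (len(a) + len(b)) * d2

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: merge_cost(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)   # j > i, so index i stays valid
    return clusters

# Two well-separated 2-D groups fall into two clusters
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
groups = ward_clusters(pts, 2)
```

The optimal number of clusters (4 in the registry) is not chosen by this routine; as described above, it was judged from the dendrogram and the cubic clustering criterion.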
3.2 Younger/low comorbidity cluster

This cluster (n = 1876) was composed of younger patients (mean age 67 ± 10 years) with a relatively lower body weight (mean 59 ± 13 kg) and a higher rate of paroxysmal AF (45%). They had considerably lower rates of risk factors and comorbidities, including the lowest rates of heart failure (14%), hypertension (12%), diabetes (13%), a prior stroke or transient ischemic attack (TIA) (13%), coronary artery disease (1%), cardiomyopathy (4%), and malignancy (7%). A key characteristic of this cluster was the highest rate of alcohol use (>8 U/week). Notably, they had a relatively preserved renal function (mean creatinine clearance 67 ± 32 ml/min) and the highest total cholesterol level (193 ± 30 mg/dL). The rate of class I antiarrhythmic drug use (22%) was the highest, while antiplatelet agent (19%) and statin (16%) use was the lowest. Reflecting the younger age of the patients in this cluster, they had the lowest CHA2DS2-VASc score (1.8 ± 1.4) and HAS-BLED score (2.2 ± 1.1).

3.3 Hypertensive cluster

This was the largest cluster (n = 4579). Its distinguishing characteristic was the highest proportion of hypertension (79%) and of angiotensin-converting enzyme inhibitor or angiotensin II type 1 receptor blocker use (73%). However, the mean office systolic blood pressure (127 ± 17 mmHg), though statistically higher than in the other clusters, was only 4 mmHg greater than the mean value of the lowest cluster. This cluster had the highest percentage of females (32%). It had the second lowest rates of diabetes, coronary artery disease, and chronic obstructive pulmonary disease after the younger/low comorbidity cluster. It also had the second lowest CHA2DS2-VASc and HAS-BLED scores.

3.4 High bleeding risk cluster

This was the smallest cluster (n = 302) and exhibited an intermediate age (mean age 71 ± 9 years).
The key characteristic of this cluster was that 100% of the patients had a history of bleeding, compared to 1% or less in the other 3 clusters. Reflecting this bleeding history, they had the highest HAS-BLED score (3.6 ± 1.1). Ninety percent of the patients were on warfarin, and this cluster had the highest time in therapeutic range (TTR). Of the four clusters, this cluster also had the highest percentages of permanent AF (56%), a history of stroke or TIA (22%), malignancy (15%), hepatitis (10%), abnormal renal function (5.3%), and abnormal liver function (5.0%). 3.5 Atherosclerotic comorbid cluster This cluster (n = 649) had the oldest patients (mean age 73 ± 8 years) and the highest proportion of male patients (84%). A major feature of this cluster was that 99.2% of the patients had coronary artery disease, as compared to less than 2% for the younger/low comorbidity and hypertensive clusters and 15% for the high bleeding risk cluster. They also had the highest rates of congestive heart failure (45%) and diabetes (36%) and the lowest creatinine clearance (55 ± 27 ml/min), together with the highest rates of antiplatelet agent (70%) and statin (49%) use. Reflecting the presence of multiple comorbidities, this cluster demonstrated the highest CHA2DS2-VASc score (4.3 ± 1.5). 3.6 Association with conventional grouping We examined the relationship between the four clusters and conventional AF classifications, including the AF subtype, CHA2DS2-VASc score, and HAS-BLED score (Supplementary file, Fig. S4). The distributions of the AF subtype, CHA2DS2-VASc score, and HAS-BLED score varied significantly across the clusters. These results suggest that the cluster analysis incorporated and integrated information on the AF subtype, CHA2DS2-VASc score, and HAS-BLED score. 3.7 Prognostic relationship between AF clusters and the outcomes The Kaplan-Meier curves of the 3 outcomes across the 4 clusters are shown in Fig. 1 .
For all-cause death (Panel A) and thromboembolism (Panel B), the patients in the younger/low comorbidity cluster had the lowest risk, followed by the hypertensive cluster, high bleeding risk cluster, and atherosclerotic comorbid cluster, in that order. For major bleeding (Panel C), the pattern was the same except that the order of the high bleeding risk cluster and atherosclerotic comorbid cluster was reversed. In other words, the patients in the younger/low comorbidity cluster demonstrated the lowest risk, followed by the hypertensive cluster, for all 3 endpoints. Logistic regression analyses showed differences in outcomes across the clusters after adjustment for covariates ( Fig. 2 ). Compared to the younger/low comorbidity cluster, the adjusted risk of all-cause mortality was significantly higher in the atherosclerotic comorbid cluster (OR, 3.70; 95% CI, 2.37–5.80). While there was no significant difference in the risk of thromboembolism among the 4 clusters, the risk of major bleeding was significantly higher in the 3 other clusters than in the younger/low comorbidity cluster: hypertensive cluster (OR, 2.79; 95% CI, 1.58–5.40), high bleeding risk cluster (OR, 14.6; 95% CI, 7.45–30.3), and atherosclerotic comorbid cluster (OR, 5.19; 95% CI, 2.58–10.9). A comparison of the C-indices among the models is shown in the Supplementary file (Table S1). Combining the existing risk scores with the cluster analysis improved the prediction accuracy for the three endpoints. 3.8 Prognostic relationship between AF types and the outcomes We further examined whether the type of AF has an impact on outcomes in all patients and in each cluster ( Fig. 3 ). Multivariate logistic regression analyses showed that, among all patients, those with non-paroxysmal AF had a worse prognosis than those with paroxysmal AF regarding the risks of all-cause mortality (OR, 1.38; 95% CI, 1.01–1.91), thromboembolism (OR, 1.61; 95% CI, 1.07–2.40), and major bleeding (OR, 1.50; 95% CI, 1.03–2.17).
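The odds ratios and confidence intervals reported above come from exponentiating logistic-regression coefficients. A minimal sketch of that conversion is shown below; the `beta` and `se` values in the example are hypothetical illustrations, not coefficients taken from this study.

```python
# Hedged sketch: convert a logistic-regression coefficient (log odds
# ratio) and its standard error into an OR with a 95% Wald CI.
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Return (OR, lower 95% bound, upper 95% bound) for a log-odds coefficient."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient and standard error, for illustration only.
or_, lo, hi = odds_ratio_ci(beta=1.31, se=0.23)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The same transformation applies to every cluster contrast in Fig. 2: a CI whose lower bound exceeds 1.0 corresponds to a significantly increased adjusted risk.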
Non-paroxysmal AF remained prognostic for major bleeding only in the atherosclerotic comorbid cluster (OR, 3.62; 95% CI, 1.05–12.4). 4 Discussion 4.1 Major findings We performed a cluster analysis on a nationwide cohort of AF patients. The major findings were as follows: the cluster analysis identified four clinically distinct phenotypes, and those four clusters were associated with significantly different risks for the outcomes. Physicians following large numbers of AF patients have long been aware of discrepancies between the type of AF, the presence or absence of heart failure, and patient outcomes. This reflects the currently crude phenotyping of such a highly heterogeneous disease as AF and the effects of comorbidities. Cluster analysis has been used to define specific subtypes of various diseases with homogeneous clinical characteristics. In a recent study, Inohara et al. reported a cluster analysis of 9749 AF patients enrolled in the Outcomes Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT-AF) registry [8] . They identified 4 clinically distinct AF phenotypes, each of which was significantly associated with the clinical outcome. In their study, the largest cluster, which they named the low comorbidity cluster (n = 4673), had a considerably lower burden of risk factors and comorbidities than the other three clusters and experienced the lowest mortality. Second, the younger/behavioral disorder cluster (n = 963) included the youngest AF patients (median 69 years), who were most likely to be male. The distinguishing behavioral features included a higher prevalence of liver disease, alcohol abuse, drug abuse, and current smoking. These patients exhibited the second lowest rates of mortality. Third, the device implantation cluster (n = 1651) included patients receiving cardiac electrical devices because of sinus node dysfunction or atrioventricular node ablation.
They had the highest median age (77 years) and a considerably higher burden of risk factors and comorbidities. Fourth, the atherosclerotic comorbid cluster (n = 2462) was the second largest group and included predominantly elderly men with ischemic cardiomyopathy. They also had multiple risk factors and comorbidities, which resulted in the highest mortality rate. Another cluster analysis using a Japanese AF cohort identified 3 AF phenotypes, each of which was significantly associated with adverse events including all-cause death, myocardial infarction, and stroke. The 3 AF phenotypes were younger/paroxysmal AF (n = 1190), persistent/permanent AF with mild atrial enlargement (n = 1143), and atherosclerotic comorbid AF in elderly patients (n = 125). The authors found that conventional risk factors, such as those included in the CHA2DS2-VASc score, together with the AF type and left atrial size, rather than behavioral risk factors, contributed to cluster formation. In our study, we identified 4 specific clusters, namely, the younger/low comorbidity cluster, hypertensive cluster, high bleeding risk cluster, and atherosclerotic comorbid cluster. The younger/low comorbidity cluster was equivalent to parts of the low comorbidity cluster and the younger/behavioral disorder cluster in the ORBIT-AF registry [8] . The younger/low comorbidity cluster had a considerably lower burden of risk factors but higher alcohol consumption. A previous meta-analysis showed that alcohol was associated with a dose-related increase in the risk of incident AF [18] , and a recent randomized study confirmed that abstinence from alcohol reduced the AF burden [19] . This cluster was likely to receive rhythm control therapy, as was also shown in the ORBIT-AF registry, perhaps because physicians believe younger patients are more likely to benefit from maintaining sinus rhythm with class I or class III antiarrhythmic therapies.
Accumulating evidence suggests that lifestyle modification (weight loss and sleep apnea treatment) plays a significant role in mitigating the AF burden and maintaining sinus rhythm after catheter ablation [20,21] . The hypertensive cluster was the largest cluster and was characterized by a higher prevalence of female patients and of both systolic and diastolic hypertension. Much previous research has identified hypertension as a highly prevalent and modifiable risk factor in AF patients [22] . A previous randomized controlled study showed that new-onset AF occurred less often in patients assigned to a target systolic blood pressure of less than 130 mmHg than in those assigned to less than 140 mmHg [23] . Further, a recent meta-analysis suggests that blood-pressure-lowering treatment reduces the risk of major cardiovascular events similarly in individuals with and without AF [24] . We identified a high bleeding risk cluster that was not defined in the previous AF cluster studies [8,9] . The distinguishing feature of this cluster was that all patients had a history of bleeding, and they had the highest rate of major bleeding events during follow-up despite a relatively well-controlled TTR. A history of previous bleeding is a well-validated risk factor considered in many bleeding scores [25,26] . Higher prevalences of renal or liver dysfunction, hepatitis, and malignancy were also associated with this cluster. Assessing the bleeding history and minimizing modifiable risk factors, together with correct dosing of anticoagulants based on a patient's characteristics and concomitant medications, help reduce the risk of bleeding and mortality. We also identified an atherosclerotic comorbid cluster that had the highest mortality rate. This cluster was likewise identified in the previous studies [8,9] , in which it was characterized by older male patients and high rates of comorbidities including hypertension, diabetes, reduced renal function, and heart failure.
The atherosclerotic comorbid cluster thus appears to be a high-risk group across several AF registries covering different races. Given that this cluster had the highest mortality rate despite appropriate use of antithrombotic drugs and statins, an interdisciplinary team approach would be the optimal clinical strategy. Studies of the relationship between AF type and outcomes have yielded conflicting results [27–29] . Therefore, the current risk scoring schemes do not include the type of AF, and current practice guidelines provide the same recommendations for anticoagulant therapy regardless of the type of AF. We showed that patients with non-paroxysmal AF were at higher risk of all three outcomes than those with paroxysmal AF, which could be explained by older age and comorbidities. Within the 4 clusters, however, the type of AF was no longer prognostic for outcomes, except for a significantly increased risk of major bleeding in patients with non-paroxysmal AF in the atherosclerotic comorbid cluster compared to patients with paroxysmal AF. In recent subgroup analyses of the ENSURE-AF cardioversion trial [30] and the ENTRUST-AF PCI trial [31] , Goette et al. showed that patients with paroxysmal AF had a higher incidence of myocardial infarction than those with non-paroxysmal AF. Future research is required to test whether the type of AF or cluster analysis can improve risk assessment in various clinical settings and provide optimal treatment for patients with AF. 4.2 Study limitations This study was a prospective cohort study in the warfarin era and may represent a selected population within the larger group of AF patients. It was conducted in patients of Asian origin only, and therefore our results are less generalizable to the overall population. We did not collect any data on symptoms, physical activity, caffeine intake, biomarkers, echocardiography, device implantations, catheter ablation, sleep apnea, or genetic information.
We used data imputed by sequential regression multivariate imputation. The distinctive phenotypes identified in this study need further validation in an external AF cohort. The selection of the 4 clusters and of the 40 variables used for the cluster analysis was somewhat arbitrary. 5 Conclusions Our study highlighted the significant heterogeneity present in AF patients in Japan and the need to improve the identification of the phenotypes of this disorder. A cluster analysis can take advantage of the various clinical variables in an AF cohort to find relevant patterns that enable new groupings of AF patients. Given the heterogeneity of risk factors and outcomes in patients with AF, future trials should focus on different interventions in the distinct phenotypes of patients with AF. Declaration of Competing Interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Dr. Watanabe received a lecture fee from Daiichi-Sankyo; Dr. Inoue reports receiving research funds from Boehringer Ingelheim and Daiichi-Sankyo and remuneration from Daiichi-Sankyo, Bayer Healthcare, and Boehringer Ingelheim; Dr. Atarashi, receiving lecture fees from Daiichi-Sankyo; Dr. Okumura, receiving research funds from Boehringer Ingelheim and Daiichi-Sankyo and remuneration from Boehringer Ingelheim, Bayer Healthcare, Daiichi-Sankyo, and Pfizer; Dr. Yamashita, receiving research funds from Daiichi-Sankyo, Bayer Healthcare, and Bristol-Myers Squibb, and remuneration from Boehringer Ingelheim, Daiichi-Sankyo, Bayer Healthcare, Pfizer, Bristol-Myers Squibb, Ono Pharmaceutical, and Toa Eiyo; Dr. Kodani received a lecture fee from Daiichi-Sankyo and Ono Pharmaceutical; and Dr. Origasa, receiving lecture fees from Daiichi-Sankyo. Acknowledgment The authors thank the J-RHYTHM Registry staff and participants for their important contribution to this work.
Appendix A Supplementary material Supplementary data to this article can be found online at https://doi.org/10.1016/j.ijcha.2021.100885 .
|
[
"HINDRICKS",
"OLDGREN",
"SHEN",
"AHMAD",
"SHAH",
"KAO",
"COHEN",
"INOHARA",
"INOHARA",
"ATARASHI",
"ATARASHI",
"INOUE",
"LIP",
"PISTERS",
"ROSENDAAL",
"JCSJOINTWORKINGGROUP",
"RAGHUNATHAN",
"LARSSON",
"VOSKOBOINIK",
"MIDDELDORP",
"WINGERTER",
"MORIN",
"VERDECCHIA",
"PINHOGOMES",
"FANG",
"OBRIEN",
"ZHANG",
"ATAR",
"GANESAN",
"GOETTE",
"GOETTE"
] |
2bb080d8e1c34a1d97dd8ff44e84b686_Assessing the impact of forest conversion to plantations on soil degradation and forest water conser_10.1016_j.geoderma.2023.116712.xml
|
Assessing the impact of forest conversion to plantations on soil degradation and forest water conservation in the humid tropical region of Southeast Asia: Implications for forest restoration
|
[
"Jiang, Xiao-Jin",
"Wang, Haofei",
"Zakari, Sissou",
"Zhu, Xiai",
"Kumar Singh, Ashutosh",
"Lin, Youxing",
"Liu, Wenjie",
"Liu, Jiaqing",
"Chen, Chunfeng"
] |
The deterioration of soil and water resources resulting from tropical rainforest (TR) conversion to monoculture plantations (e.g., rubber monoculture, RM) could be restrained and reversed through intercropping. However, the response of soil properties and the forest water conservation function to further forest conversion, i.e., the conversion of RM to rubber rainforest (RR, derived from the invasion of wild native plants into abandoned RM), remains unclear. We considered four forest types, TR, RM, rubber-tea agroforestry (RTA), and RR, as transitional steps of forest conversion, using a space-for-time substitution approach to examine the dynamics of soil physical, hydrological, and chemical properties during forest conversion (from TR to RM, RM to RTA, RTA to RR, and retransformation into TR). The results show that SOC, TN, TP, and TK decreased in the order TR > RR > RTA > RM, a trend followed by the soil hydrological and physical properties of these forest types. The interrelation between soil physical and chemical properties was mediated by water flow behaviours. High macroporosity and the related low Ks
in TR favoured the occurrence of water flow behaviours. Water flow behaviours not only influenced the distribution of soil chemical elements but also played a crucial role in forming appropriate conditions for nutrient turnover. The co-occurrence of preferential and matrix flow was more prevalent in the rainy season than in the dry season because of the higher frequency and greater amount of rainfall. Preferential flow promoted soil water movement along the water flow paths and enhanced water storage in the soil pores. In short, the soil properties and soil water supply decreased in the order TR > RR > RTA > RM, suggesting that the severe soil degradation that occurs after TR conversion to RM can be restored to the level of TR after a period of succession. The results provide new insights for understanding the forest water conservation function and soil properties in response to forest conversion and highlight that RR appears as a transitional stage in the course of forest restoration from RM to TR under low rubber demand. These findings improve the current knowledge of the relationships among soil physical, hydrological, and chemical properties in the rubber-growing humid tropical region of Southeast Asia.
|
1 Introduction As centers of global biodiversity, TRs are important sources of renewable energy ( Zambelli et al., 2012 ) and regulate the global carbon cycle ( Heiskanen et al., 2019 ). They have been exposed to several pressures, such as deforestation, overexploitation, fragmentation, farmland reclamation, and forest conversion ( Tilman et al., 2017 ). Indeed, large areas of TR have been converted into rubber plantations in Southeast Asia for natural rubber production ( Ahrends et al., 2015 ). This land conversion has degraded the soil's hydrological, physical, and chemical properties and its water conservation functions ( Brinck et al., 2017; Rahman et al., 2019 ). Soil degradation is the deterioration of the soil's hydrological, physical, and chemical properties, leading to loss of soil organic matter, decreased fertility, unbalanced elements, reduced aggregate stability, structural abnormality, acidification, salinization, high soil erosion, and declines in regional biodiversity ( Chaudhary et al., 2009; Mann, 2009 ). The water conservation function refers to the process and ability of soil to retain, intercept, redistribute, and store water ( Xu et al., 2022 ). Soil properties and the water conservation function are facing severe challenges in the rubber-growing region of Xishuangbanna (Southwest China), part of a global biodiversity hotspot ( De Bruyn et al., 2014 ). As a result, the local government and scientists promoted intercropping in rubber plantations to improve biodiversity and ecosystem services ( van Noordwijk et al., 2012 ). However, these policies have brought only limited improvement in soil restoration and the water conservation function, as the planted area of rubber-based agroforestry is sensitive to factors such as field management and the economic benefits of the intercrop ( Wu et al., 2016 ).
Due to their limited economic benefits, most rubber plantations located at high elevations and on steep slopes have been abandoned and converted to RR, with the current tendency to return to the natural ecosystem ( Fang et al., 2020 ). In the near future, RR could naturally become TR, which is better adapted to the local soil and climatic conditions. Therefore, it is necessary to understand soil restoration and the water conservation function during the conversion from RM to RR. Several parameters, such as the canopy interception rate, evapotranspiration, soil saturated hydraulic conductivity (Ks), non-capillary porosity, and maximum water holding capacity, are used to evaluate the water conservation function ( Li et al., 2021 ). However, these parameters fail to express partial processes of the water conservation function, such as the dynamic features of soil water, preferential flow, matrix flow, and water interaction ( Lin et al., 2018; Jiang et al., 2019 and 2020 ). Preferential flow is the process by which water moves unevenly through soils via preferred paths rather than as uniform flow ( Bundt et al., 2001 ). Matrix flow in forest soil is the process of water infiltrating through the soil matrix, controlled solely by capillary action ( Zhang et al., 2017 ). Water interaction refers to the water drawn from macropores into the surrounding soil matrix by capillary forces during infiltration ( Weiler and Flühler, 2004 ). High and low levels of water interaction through the macropore walls can be traced and identified as strong and weak water interaction, respectively, based on different dye concentrations ( Weiler and Flühler, 2004; Jiang et al., 2015 ). Soil hydrological properties, i.e., soil Ks and water flow behaviors, are closely related to soil chemical properties ( Bundt et al., 2001; Jarvis et al., 2012 ). For instance, preferential flow can substantially transport soil nutrients into deep soil under high water infiltration rates ( Shen et al., 2019 ).
Thus, the partial fluctuation of soil chemical properties depends on the soil water content and water flow behaviors ( Wang et al., 2019 ). Moreover, soil chemical properties, such as organic matter content, can affect soil water content and water flow behaviors. For instance, decreased organic matter content can reduce soil porosity ( Li et al., 2007 ), lowering soil water infiltration and storage capacity ( Celik, 2005 ). Soil hydrological, physical, and chemical properties usually show conflicting responses to the conversion of TR to RM ( Zakari et al., 2020 ). For instance, the C content can increase ( Maggiotto et al., 2014 ), decrease ( Chiti et al., 2014 ), or remain unchanged ( Frazão et al., 2013 ) after forest conversion, while soil water content can either increase ( Liu et al., 2019 ) or decrease ( Tan et al., 2011 ) in RM. These inconsistencies may arise from the limited number of forest types studied (i.e., from TR to RM and mixed rubber plantations, and RR), as the inherent mechanisms of forest conversion could influence the soil hydrological, physical, and chemical properties ( Rahman et al., 2019 ). To date, few studies provide useful guidance on how to restore RM or mixed rubber plantations to RR. Therefore, the interactions among soil physical, hydrological, and chemical properties could be more complex over the full course of forest conversion, from TR to RM, RM to a mixed rubber plantation, mixed rubber plantation to RR, and retransformation into TR over time through natural succession. This study aims to explore the effect of forest conversion on soil degradation and the water conservation function during forest conversion in the humid tropical region of Xishuangbanna (Southwestern China).
Thus, soil temperature, soil volumetric water content, soil organic carbon (SOC), total nitrogen (TN), total phosphorus (TP), total potassium (TK), bulk density (BD), macroporosity, soil water flow behaviors, and Ks were measured in the four forest ecosystems, TR, RM, RTA, and RR (RM and RTA were developed and converted from TR, and RR was converted from abandoned RM). Applying a space-for-time substitution approach, these forest ecosystems were regarded as stages of forward and backward forest conversion that began with TR to RM, RM to RTA, and RTA to RR, with the expectation that RR would be retransformed into TR over time through natural succession ( Fig. 1 E). Our specific objectives were to examine i) the dynamics of soil volumetric water content; ii) the soil chemical properties (i.e., SOC, TN, TP, TK); iii) the soil physical and hydrological properties (i.e., macroporosity, Ks, water flow behaviors); and iv) the relationships among soil physical, hydrological, and chemical properties during forest conversion. The findings of this study should be useful in the sustainable management of RM and mixed rubber plantations under low rubber demand scenarios and can serve as a reference for forest restoration from plantation to natural forest in the rubber-growing humid tropical region of Southeast Asia. 2 Materials and methods 2.1 Experimental site The experimental site is located in the Xishuangbanna Tropical Botanical Garden (21°55′39″ N, 101°15′55″ E; 594 m asl), Mengla County, SW China. This region is characterized by a tropical monsoon climate, with a rainy season from May to October and a dry season from November to April. The mean annual precipitation is approximately 1500 mm (from 2005 to 2018), and the monthly mean air temperature is 22.5 °C ( Lin et al., 2018 ).
The soil is classified as an Oxisol (USDA-SCS, 1994) or Acric Ferralsol ( IUSS Working Group WRB, 2014 ), derived from Cretaceous yellow sandstone ( Zhou et al., 2019 ). The soil is approximately 2 m deep and rich in iron (Fe) and aluminum (Al) oxides due to intensive weathering and leaching ( Zhao et al., 2017 ). Four plots (30 m × 18 m) were established in each studied forest type (i.e., TR, RM, RTA, RR). Field sampling was conducted in the four plots in both the rainy and dry seasons ( Fig. 1 A). The dominant trees in TR were Pometia tomentosa , Terminalia myriocarpa , Gironniera subaequalis , and Garcinia cowa , forming a climax community after decades of succession ( Fig. 1 B). The RM site was established after the clearance of native vegetation in 1989 ( Liu et al., 2015 ). Rubber trees ( Hevea brasiliensis ) in RM are arranged in double rows planted at a spacing of 3 m × 4 m, and each set of double rows is separated by a gap (width) of 20 m. Tea was intercropped in the gaps at a spacing of 4 m × 4 m in 2010 to form the RTA. RM and RTA were subjected to regular land management practices, such as latex tapping (approximately 120 times year −1 ), herbicide application (at least once a year) to remove weeds, and fertilizer application to promote rubber tree growth. Following local farming practices, 0.5 kg of mineral fertilizer containing 15 % N as (NH 2 ) 2 CO, 15 % P as NH 4 H 2 PO 4 , and 15 % K as KCl was applied to a trench of 100 cm (long) × 20 cm (wide) × 10 cm (deep) between two rubber trees at the end of March and at the beginning of September. Some abandoned RMs at high elevation were recently invaded by native species (i.e., Indosasa hispida , Callicarpa bodinieri , and Millettia pulchra ) to form the RR. Detailed information about the stand characteristics of the four plots is shown in Table 1 . Tree heights were determined using a portable laser distance meter (T60G+; THINRAD, China).
Canopy cover and leaf area index were measured using an LAI-2200 plant canopy analyzer (Li-Cor Inc., USA). The particle size of soil samples was determined by the laser granulometry method (Mastersizer E, Malvern) ( Pieri et al., 2006 ). Roots were collected at 0.1 m intervals down to a depth of 0.5 m with a root sampler (inner diameter, 25 mm; height, 150 mm) at fifteen random points in each plot. The root samples from the same plot were pooled to form one composite sample per plot. 2.2 Measurement of rainfall, soil volumetric water content and soil temperature The rainfall record was obtained from the Xishuangbanna Meteorological Station, located near the study site ( Fig. 1 A). Soil volumetric water content and soil temperature at a depth of 20 cm were measured using 5TE sensors (Decagon Devices, Pullman, WA, USA). The 5TE sensors (10.0 cm length, 3.7 cm width, 0.7 cm thickness) have three prongs that enable simultaneous measurement of soil temperature and soil volumetric water content ( Rosenbaum et al., 2010 ). The sensor data were collected at 5 min intervals by the ECH2O Utility software (Decagon Devices, Inc., Pullman, WA) through an Em50 data logger. Given that slope could affect subsurface soil water flow, the four 5TE sensors were installed on the downhill slope of the four plots ( Fig. 1 C and D). The impact of sensor installation on water flow and soil properties was almost negligible after four months, during which several rainfall events saturated the soil; afterward, we started to collect data from the 5TE sensors. The accuracy (0.01 cm 3 cm −3 ) of the 5TE sensor for measuring soil volumetric water content was calibrated using repacked core samples in the laboratory. 2.3 Measurement of soil properties Bulk density (BD) and macroporosity are important predictors of soil thermal, hydraulic, and mechanical properties ( Assouline, 2006 ).
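As a rough illustration of handling such a 5-min sensor stream (this is not the authors' processing code; the timestamps and readings below are invented), the logged volumetric water content can be aggregated to daily means with pandas:

```python
# Hypothetical example: aggregate 5-min 5TE readings (volumetric water
# content, cm^3 cm^-3) to daily means. All values are synthetic.
import pandas as pd

idx = pd.date_range("2015-06-01", periods=576, freq="5min")  # two full days
vwc = pd.Series([0.30] * 288 + [0.34] * 288, index=idx)      # 288 readings/day

daily = vwc.resample("D").mean()
print(daily.round(3).tolist())  # → [0.3, 0.34]
```

The same resampling pattern applies to the soil temperature channel of the sensors.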
At our study site, many cracks and fissures emerge in the dry season and disappear in the rainy season. Because porosity is linked to the presence of cracks and fissures, we used BD and macroporosity to express the size of cracks and fissures quantitatively. The BD and macroporosity at a soil depth of 20 cm in each plot (nine replicates per plot) were measured using cutting rings (inner diameter, 70.00 mm; height, 52.00 mm; volume, 200 cm 3 ) in the dry and rainy seasons ( Fig. 1 C and D). The porosity (i.e., total porosity, capillary porosity, and macroporosity) was calculated from the bulk density data. Soil samples were collected from nine quadrats in each plot during the rainy and dry seasons to measure the soil chemical properties (SOC, TN, TP, TK) ( Wang et al., 2007 ). The soil samples were mixed, stored, and shipped the same day to the Biogeochemistry Laboratory of the Xishuangbanna Tropical Botanical Garden. Soil clods were manually broken into smaller pieces and air-dried at room temperature (about 25 °C) for 30 days ( Wang et al., 2007 ). The air-dried soil was separated into two parts: the first part was sieved through a 0.25 mm mesh for SOC, TN, and TK analysis, and the rest was passed through a 0.15 mm mesh for TP analysis ( Wang et al., 2007 ). The soils of the study area were free of inorganic carbon, as the soil samples did not react with HCl; therefore, all the measured carbon was considered SOC ( Chen et al., 2017 ). The elemental carbon concentration was measured by dry combustion with a Vario MAX CN elemental analyzer (Langenselbold, Germany). TN was measured using a carbon–nitrogen analyzer (Vario MAX CN, Elementar Analysensysteme, Germany), and TK was measured by inductively coupled plasma spectrometry after melting with sodium hydroxide (ICP; SPECTRO ARCOS EOP, Germany). The TP concentration in soil was determined using the molybdate colorimetric method following perchloric acid digestion and reduction by ascorbic acid ( O'Halloran and Cade-Menun, 2006 ).
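The porosity-from-bulk-density calculation mentioned above usually relies on the standard relation, total porosity = 1 − BD/PD. A minimal sketch is given below; the particle density of 2.65 g cm⁻³ is a conventional assumption for mineral soils, not a value reported by the authors.

```python
# Sketch of the standard bulk-density-to-porosity relation.
# PD = 2.65 g cm^-3 is an assumed particle density, not from the paper.
def total_porosity(bulk_density: float, particle_density: float = 2.65) -> float:
    """Total porosity (fraction) from core bulk density (g cm^-3)."""
    return 1.0 - bulk_density / particle_density

print(round(total_porosity(1.20), 3))  # → 0.547 for a hypothetical BD of 1.20
```

Capillary porosity and macroporosity are then obtained by partitioning the total porosity with the water-retention data from the cutting-ring cores.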
2.4 Interpretation of soil water flow behaviors Three dye-tracer experiments were conducted in each studied plot to visualize the infiltration pattern and water flow paths in the dry season ( Fig. 1 C and D). Water flow paths include root channels, soil cracks, macropores, and earthworm burrows ( Flury and Wai, 2003 ). In this experiment, a four-sided metal frame of 2 m × 0.5 m × 0.3 m (length × width × height) was carefully inserted into the soil. Minor fissures between the frame and soil were sealed with wet soil to prevent leakage of the dye solution while keeping the field surface intact. The surface vegetation and a thin layer of soil (less than 2.0 cm) in the plot were carefully removed to ensure a horizontal surface. Each metal frame was filled with Brilliant Blue FCF dye solution at a concentration of 4.0 g L −1 ( Flury and Wai, 2003 ). A filter screen (1.95 m × 0.45 m, mesh diameter 0.2 cm) was placed in the plots beforehand to ensure similar flooding conditions in all subplots. Water was added successively to maintain a constant head of 2.0 cm during the infiltration period. The total infiltrated dye volume was 117 L for each subplot. Three vertical soil sections were dug in each plot 24 h after the end of the dye infiltration experiment. Each soil section and its dye-stained patches were photographed using a digital camera (Canon EOS Rebel T3, Japan) ( Weiler and Flühler, 2004 ). The obtained images were separated into two parts: nonstained areas, interpreted as matrix flow ( Bundt et al., 2001 ), and dye-stained areas. The dye-stained areas were further separated into three water flow behaviors, preferential flow, strong water interaction, and weak water interaction, corresponding to concentrations of > 2.0, 0.5–2.0, and 0.05–0.5 g L −1 , respectively, based on dye staining intensity using ERDAS IMAGINE version 9.0.
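The threshold-based classification described above can be sketched as a simple per-pixel mapping. This is an illustrative reimplementation with NumPy (the authors used ERDAS IMAGINE); the thresholds follow the text, while the example pixel values are invented.

```python
# Illustrative classification of per-pixel apparent Brilliant Blue
# concentration (g/L) into the paper's flow categories:
#   > 2.0       preferential flow
#   0.5 - 2.0   strong water interaction
#   0.05 - 0.5  weak water interaction
#   <= 0.05     unstained matrix flow
import numpy as np

def classify_flow(conc: np.ndarray) -> np.ndarray:
    """Map dye concentrations to flow-behavior labels."""
    labels = np.full(conc.shape, "matrix", dtype=object)
    labels[conc > 0.05] = "weak_interaction"
    labels[conc > 0.5] = "strong_interaction"
    labels[conc > 2.0] = "preferential"
    return labels

pixels = np.array([0.01, 0.1, 1.0, 3.5])  # hypothetical pixel values
print(classify_flow(pixels).tolist())
# → ['matrix', 'weak_interaction', 'strong_interaction', 'preferential']
```

Summing the pixels in each class over a vertical section gives the stained-area fractions compared across forest types.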
Detailed information on the image processing of the vertical soil sections is provided by Cey and Rudolph (2009) and Forrer et al. (2000), and additional information on the field measurements and image processing is given by Jiang et al. (2015). 2.5 Measurement of saturated hydraulic conductivity (Ks) Nine single-ring infiltrometer experiments were conducted in each plot at a depth of 20 cm to estimate the soil saturated hydraulic conductivity (Ks) in the dry and rainy seasons (Fig. 1C and D). Stainless-steel cylinders were carefully inserted 5.0 cm into the soil one month before the Ks experiments to minimize the disturbance of infiltrometer installation on soil pores. The sidewalls and edges of the cylinders were kept water-tight. Each cylinder was initially filled to a water head of 10.0 cm (using a reference ruler). The time taken for the water level to decrease to 1.0 cm in the cylinder was recorded. Thereafter, a volume equivalent to 1.0 cm depth was successively added to the cylinder until the infiltration time was constant for three consecutive measurements at 5-minute intervals, at which point we assumed steady-state flow (Zhu et al., 2019). The refilling procedure took approximately 1.2 h. The steady-state infiltration rate (Is) and Ks (cm s⁻¹) were calculated from the last three consecutive measured values (Reynolds and Elrick, 1990; Bodhinayake et al., 2004) (Eq. (1)): (1) Ks = Is / {π r² [H / (C1 d + C2 r) + 1 / (α (C1 d + C2 r)) + 1]} where Is (cm³ s⁻¹) is the quasi-steady-state infiltration rate, r (cm) is the radius of the ring, H (cm) is the average ponding depth, d (cm) is the insertion depth of the cylinder into the soil, C1 and C2 are dimensionless quasi-empirical constants, and α (cm⁻¹) is the sorptive number, the reciprocal of the soil macroscopic capillary length. For this work, both r and d are 5 cm. The sorptive number (α = 0.12 cm⁻¹) was estimated based on textural classes (using laboratory particle size analysis).
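The steady-state criterion above (a constant infiltration time for three consecutive refills) can be sketched as a small helper. The function name, the relative tolerance, and the demo series are illustrative assumptions, not part of the original protocol.

```python
# Sketch: decide whether quasi-steady flow has been reached, following the
# criterion above -- a constant infiltration time for three consecutive
# 1.0 cm refills. The 5% relative tolerance is an assumed value.

def is_quasi_steady(refill_times_s, tol=0.05):
    """True if the last three refill times agree within a relative tolerance."""
    if len(refill_times_s) < 3:
        return False
    last3 = refill_times_s[-3:]
    mean_t = sum(last3) / 3.0
    return all(abs(t - mean_t) <= tol * mean_t for t in last3)

if __name__ == "__main__":
    # hypothetical refill times (s): infiltration slows toward a plateau
    times = [410.0, 355.0, 330.0, 321.0, 318.0, 317.0]
    print(is_quasi_steady(times))
```

Once this returns True, the last three readings give the quasi-steady infiltration rate used in Eq. (1).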
The constants C1 and C2 were 0.316π and 0.184π, respectively, for d ≥ 3 cm and H ≥ 5 cm (Reynolds and Lewis, 2012). 2.6 Statistical analysis First, soil physical, hydrological, and chemical properties were tested for normality, and a log or square-root transformation was applied to variables with non-normally distributed data. Then, one-way analysis of variance was applied to evaluate the effects of the four plots (RM, TR, RTA, and RR) on the dye-stained area. Simultaneously, two-way analysis of variance was performed to assess the effects of season and plot on the soil properties (BD, macroporosity, Ks, SOC, TN, TP, TK). Significant differences between means were detected with the least significant difference test at P < 0.05. ANOVA was successfully performed to detect differences among the four plots, assuming that ANOVA would not introduce any systematic error into the statistical analysis given the similar edaphic-climatic conditions of the studied sites. Spearman's correlation was employed to analyze the relationships between soil physical or hydrological properties and soil chemical properties. These analyses were performed with SPSS 20.0 (Statistical Package for the Social Sciences, USA). In addition, we applied linear redundancy analysis (RDA) to explore the relationships between soil physical, hydrological, and chemical properties. For this RDA, we conducted a Monte Carlo permutation test based on 499 random permutations to evaluate the significance of the eigenvalues of the canonical axes (ter Braak and Smilauer, 2002). This ordination analysis was performed with CANOCO 4.5. Moreover, we conducted structural equation modelling to analyze the best possible explanatory relationships among soil physical, hydrological, and chemical properties. Data were fitted to the model using the maximum likelihood estimation method.
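The single-ring calculation of Eq. (1) can be sketched numerically. The formula, the constants C1 = 0.316π and C2 = 0.184π, r = d = 5 cm, and α = 0.12 cm⁻¹ come from the text; the demo inputs (steady infiltration rate and ponding depth) are hypothetical.

```python
import math

# Sketch of the single-ring Ks calculation (Reynolds and Elrick, 1990)
# as given in Eq. (1). Geometry and constants follow the text; the demo
# inputs below are hypothetical, not measured values from the study.

C1 = 0.316 * math.pi  # dimensionless constant, valid for d >= 3 cm, H >= 5 cm
C2 = 0.184 * math.pi

def ks_single_ring(i_s, r=5.0, d=5.0, h=5.0, alpha=0.12):
    """Saturated hydraulic conductivity Ks (cm s-1).

    i_s   : quasi-steady infiltration rate (cm3 s-1)
    r, d  : ring radius and insertion depth (cm)
    h     : average ponding depth (cm)
    alpha : sorptive number (cm-1)
    """
    g = C1 * d + C2 * r
    bracket = h / g + 1.0 / (alpha * g) + 1.0
    return i_s / (math.pi * r ** 2 * bracket)

if __name__ == "__main__":
    print(ks_single_ring(i_s=0.8))  # hypothetical steady rate of 0.8 cm3 s-1
```

Because the bracketed term exceeds 1, Ks is always smaller than the raw steady flux Is/(π r²), reflecting the corrections for ponding and capillarity.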
The adequacy of the model was determined using the χ² test, the root mean square error of approximation (RMSEA), and the Akaike information criterion (AIC). Adequate model fits are indicated by a non-significant χ² test (P > 0.05), a low RMSEA (<0.08), and a low AIC (Grace, 2006). The structural equation model was computed using AMOS 22.0 (SPSS Software, Chicago, IL). 3 Results 3.1 Rainfall and soil volumetric water content In 2018, a total of 75 and 50 rainfall events were recorded in the rainy and dry seasons, respectively. Rainfall events were dominated by lower intensities in the dry season and higher intensities in the rainy season. The total rainfall amount was 630.8 mm in the rainy season and 198.4 mm in the dry season (Fig. 2A). The rainfall amount in the four sampling plots was not significantly correlated with soil temperature (P > 0.05), but was significantly correlated with soil volumetric water content (P < 0.05) (Fig. 2B and C). Both season and plot type significantly affected the mean soil volumetric water content (Table 2). Over the whole year (2018), the average soil volumetric water content was highest in TR, followed by RR, RTA, and RM (Fig. 2C). The lowest soil volumetric water contents occurred in RM and RTA in both the rainy and dry seasons, ranging from 0.16 to 0.25 cm³ cm⁻³ in the rainy season and 0.11 to 0.20 cm³ cm⁻³ in the dry season. The highest soil volumetric water contents were recorded in TR and RR, ranging from 0.26 to 0.35 cm³ cm⁻³ in the rainy season and 0.21 to 0.25 cm³ cm⁻³ in the dry season (Fig. 3C and D). In short, the highest values and widest range of soil volumetric water content occurred in TR, and the lowest and narrowest occurred in RM in both the rainy and dry seasons (Fig. 3 and Table 2).
In addition, correlation analysis showed that the relationship between soil temperature and soil volumetric water content was significantly negative in the rainy season and significantly positive in the dry season (P < 0.05) (Fig. 2D). Both season and plot type significantly affected the mean soil temperature (Table 2), which followed the decreasing order RM > RTA > RR > TR in the rainy season and TR > RR > RTA > RM in the dry season (Table 2). The soil temperature ranged between 24.1 °C and 26.0 °C in the rainy season, and between 18.1 °C and 24.0 °C in the dry season (Fig. 2B). The highest frequencies of soil temperature (24.1 °C to 26.0 °C) were in the decreasing order RM > RTA > TR > RR in the rainy season (Fig. 3A). In the dry season, the lowest soil temperature class (18.1 °C to 20.0 °C) was dominated by RR, followed by RM and RTA (Fig. 3B), the medium class (20.1 °C to 24.0 °C) by TR, and the highest class (22.1 °C to 24.0 °C) ranged in the decreasing order RR > RTA > RM > TR. 3.2 Soil chemical properties The SOC and TN were higher in TR than in RM and RTA (Table 3). The SOC was 38.0 % (rainy season) and 37.6 % (dry season) lower in RM than in TR. Neither SOC nor TN changed significantly (P > 0.05) in either season after RM conversion to RTA. However, SOC increased by 40.1 % (rainy season) and 41.8 % (dry season), and TN by 63.1 % (rainy season) and 64.7 % (dry season), after RM conversion to RR. The TP significantly decreased by 44.9 % in RM, 34.7 % in RTA, and 12.2 % in RR compared to TR in the rainy season. Moreover, significant differences in TP occurred after TR conversion to the other forest systems in the dry season, with TP ranging in the order TR > RR > RTA > RM. The TK was higher in TR than in the other forests during both seasons, but TK did not change significantly (P > 0.05) after RM conversion to RTA. Finally, TK increased by 101.1 % (rainy season) and 103.4 % (dry season) after RM conversion to RR.
3.3 Soil physical properties and hydrological properties The soil BD and macroporosity also differed significantly with plot type and season (Table 3). The lowest BD occurred in TR, and the highest BD occurred in RM. The soil macroporosity decreased by 15.4 % (rainy season) and 17.2 % (dry season) after TR conversion to RM, but increased by 14.8 % (rainy season) and 14.7 % (dry season) after RM conversion to RR. Neither macroporosity nor Ks differed significantly (P > 0.05) between RM and RTA, irrespective of the rainy or dry season. The Ks, however, was 40.5 % (rainy season) and 28.6 % (dry season) higher in TR than in RM, and 48 % (rainy season) and 30 % (dry season) lower in RM than in RR. Overall, Ks followed the decreasing order TR ≅ RR > RM ≅ RTA in the two seasons. Water infiltration patterns show that various water flow behaviors emerged in the soils of the four forest ecosystems (Fig. 4A). Preferential flow occurred as a patchy distribution pattern in all plots across the four forest types. Strong and weak water interactions (i.e., the intensity of water flow from the macropores into the surrounding soil) occur when water moves through the water flow paths (e.g., macropores and cracks). Preferential flow bypassed the soil matrix of low permeability to reach the deep soil, whereas matrix flow moved slowly downward into the soil matrix. In the top soil layers (0–20 cm), preferential flow followed the decreasing order TR > RR > RTA > RM. The strong water interaction ranged in the decreasing order RM > RTA > RR > TR, and the weak water interaction followed the order RR > TR > RM ≅ RTA. The matrix flow ranged in the order RM ≅ RTA > RR ≅ TR (Fig. 4B). The water flow behaviors and soil volumetric water content were correlated at different levels (Fig. 5).
The preferential flow had significant positive correlations with S(1/4) (the 1/4 sum of soil volumetric water content reordered from high to low) (R = 0.996, P < 0.01), S(1/4–2/4) (R = 0.957, P < 0.05), S(2/4–3/4) (R = 0.926, P < 0.05), and S(3/4–4/4) (R = 0.973, P < 0.05). The strong water interaction had significant negative correlations with S(1/4–2/4) (R = -0.941, P < 0.05), S(2/4–3/4) (R = -0.912, P < 0.05), and S(3/4–4/4) (R = -0.961, P < 0.05). 3.4 Relationships among soil physical, hydrological and chemical properties Redundancy analysis shows that the first and second axes explained 76.5 % and 0.2 % of the variation in soil chemical properties, respectively; most selected variables were highly linked to the first axis (P = 0.002) (Fig. 6A). The first axis was mainly related to macroporosity, Ks, and BD; the second axis was linked only to soil volumetric water content. The eigenvalues followed the order macroporosity (eigenvalue = 0.721, V = 38.88) > Ks (eigenvalue = 0.556, V = 29.97) > bulk density (eigenvalue = 0.390, V = 21.02) > soil volumetric water content (eigenvalue = 0.187, V = 10.08) > soil temperature (eigenvalue = 0.001, V = 0.05). This reflects that the soil chemical properties (SOC, TN, TP, and TK) were affected by these factors in decreasing order of influence strength: macroporosity > Ks > bulk density > soil volumetric water content > soil temperature (Tables 4 and 5). Furthermore, the structural equation model supports that macroporosity positively influenced the soil volumetric water content and Ks, and these soil hydrological factors positively influenced SOC, TN, TP, and TK. However, SOC and TN negatively affected BD, which negatively affected macroporosity. In short, the relationship between soil physical and chemical properties was mediated by soil water features (Fig. 6B).
4 Discussion 4.1 Soil hydrological properties mediate the relationship between soil physical and soil chemical properties The redundancy analysis and Spearman's correlation (Fig. 6A and Tables 4 and 5) show that the most important variables affecting soil chemical properties were related to macroporosity, followed by Ks, BD, and soil volumetric water content, and that the interactions among soil physical and chemical properties were mediated by soil hydrological functions (Fig. 6B). The negative correlation we found between BD and soil chemical properties corresponds to the results of Joshi and Garkoti (2023). Litterfall is intercepted by soil cracks and tree trunks during runoff (Sayer and Tanner, 2010) and decomposed into soil chemical elements and fine organic compounds, which are easily mobilized and carried into the subsurface soil by preferential flow (Dibbern et al., 2014). As a result, the high accumulation of SOC, TN, TP, and TK was beneficial to soil aggregate formation, leading to higher porosity and lower BD. Among the soil chemical properties, SOC and TN negatively affected BD, which further negatively influenced macroporosity (Fig. 6B); in other words, lower BD promoted higher macroporosity. High macroporosity and the associated Ks favored the occurrence of different water flow behaviors that mediate the distribution of soil chemical properties. As a result, some soil chemical elements reach the deep soil layers during water infiltration, accumulate there, and improve soil pores, which further enhances the soil infiltration capacity (Bundt et al., 2001); in particular, higher SOC can promote soil water storage in micropores (Li et al., 2007). Based on these findings, we conclude that the relationship between soil physical properties and soil chemical properties is mediated by water flow behaviors.
4.2 Effects of forest conversion on soil degradation Water flow behaviors not only mediated the distribution of soil chemical elements but also played a key role in forming appropriate conditions for nutrient turnover. In the present humid tropical zone, soil temperature (20.79 °C to 25.75 °C) and soil volumetric water content (0.18 to 0.29 cm³ cm⁻³) were suitable (Table 2); at the same time, aeration (oxygen) can be replenished during the appearance-disappearance process of water flow behaviors (Fig. 4), and part of the substrate (or organic matter) was transported into the deep soil by preferential flow (Fig. 6). Moreover, the alternating occurrence of preferential flow and matrix flow led to high variation in soil moisture, oxygen, and substrate, leading to fluctuation of soil chemical properties. This fluctuation in soil chemical properties depended on forest structure (or type), with the decreasing order TR > RR > RTA > RM corresponding to the order of soil hydrological and physical properties in these four forest types. Our findings corroborate a previous study reporting that soils suffering severe degradation after TR conversion to RM reach better chemical levels after 10 years of agroforestry (or intercropping) practices (Chen et al., 2019). We found that soil chemical properties increased with increasing soil water content, which is opposite to previous findings in which soil moisture increases up to a threshold lowered the accumulation of soil elements (e.g., soil carbon) (Maggiotto et al., 2014). In fact, high soil water content or soil water saturation without fluctuation usually inhibits aerobic microbial activity and organic matter mineralization. However, if the availability of water and oxygen fluctuates alternately due to preferential flow, this favors nutrient turnover and microbial activities (Mu et al., 2021). 4.3 Effects of forest conversion on water conservation function The water infiltration patterns (Fig.
4) and related parameters (macroporosity and Ks) varied significantly with forest ecosystem (Table 3). Preferential flow was the dominant water flow behavior from the beginning of rainwater infiltration, and later on both preferential flow and matrix flow occurred once the soil became saturated with infiltrated rainwater (Fig. 4 and Table 2). This co-occurrence of preferential flow and matrix flow was more prevalent during the rainy season due to the high frequency and high amount of rainwater (Fig. 3), although macroporosity and Ks were higher during the dry season than the rainy season (Table 3). Further, the positive correlations between preferential flow and S(1/4), S(1/4–2/4), S(2/4–3/4), and S(3/4–4/4) indicate that preferential flow not only channeled water along the soil water flow paths (e.g., soil macropores) but also enhanced soil water storage (Fig. 5). Simultaneously, the negative correlations between strong water interaction and S(1/4–2/4), S(2/4–3/4), and S(3/4–4/4) imply that this water flow behavior did not contribute to soil water storage (Fig. 5). Namely, the soil volumetric water content in the unsaturated soil matrix depended strongly on preferential flow and the associated characteristics or parameters, especially soil macroporosity but not Ks (Fig. 6). This finding may contradict a previous study in which the soil volumetric water content was found to be mainly affected by water infiltration capacity (Ks) (Zhu et al., 2019). When the soil was unsaturated, rainwater flowed into the soil pores only via preferential flow paths. Once the soil had become saturated with rainwater, both preferential flow and matrix flow determined soil water transport, and the infiltrated water was simultaneously redistributed and stored in the adjacent soil through water interactions (Figs. 4 and 5).
The stored water can therefore be transported to the root zone by capillary force for plant uptake during the dry season (Jiang et al., 2020). These water transport and storage mechanisms were more pronounced in the old-growth and undisturbed primary forest (i.e., TR). Further, a substantial amount of the soil volumetric water content in TR and RR during the dry season comes from underground sources through capillary rise (Jiang et al., 2020). In our case, the synergistic effect of the various water flow behaviors (preferential flow, matrix flow, water interaction, and upward capillary water) on soil water supply was in the order TR > RR > RTA > RM. As a result, the soil volumetric water content varied significantly with forest ecosystem (Fig. 2). Based on these findings, we conclude that the low water supply efficiency in RM has resulted in water stress during the long-lasting dry season in this region. However, water stress has been markedly alleviated in RTA and RR, and especially in TR because of its high water supply efficiency. We also found that the forest water conservation function has moderated temperature on the forest floor. The high soil temperature occurred in the rainy season (Fig. 2B), and the correlation analysis suggests that the relationship between soil temperature and soil volumetric water content was significantly negative in the rainy season (P < 0.05) (Fig. 2D). This relationship between soil temperature and soil volumetric water content likely arises from three processes: (1) water evaporation from soil carries heat away from the soil (Zhang et al., 2022); (2) water supply characteristics, especially high-frequency water flow, can disperse extreme soil temperature into the surrounding soil; and (3) air and soil temperature are regulated by the forest crown, which intercepts sunlight (Lin et al., 2018), and this depends on forest type and forest structure (Li et al., 2015; Lin et al., 2020).
As a result, the temperature regulation processes were more pronounced in RR, and especially in TR. In short, extreme soil temperatures could occur in RM in both the rainy and dry seasons, and this could be improved by the robust temperature regulation mechanisms in RTA and RR. 5 Conclusions Variations in soil properties and forest water conservation function were investigated during forest conversion (from tropical rainforest to rubber monoculture, rubber monoculture to rubber-tea agroforestry, rubber-tea agroforestry to rubber rainforest, and rubber rainforest retransformation into tropical rainforest) in the humid tropical region of Southeast Asia. The soil's physical and chemical properties were interdependent, and many of these interrelationships were mediated by water flow behaviors. The accumulation of SOC, TN, TP, and TK was beneficial to the formation of soil aggregates, which exhibited different levels of porosity and BD. Macroporosity and Ks favored the occurrence of different water flow behaviors that controlled the distribution of soil chemical elements and played a vital role in forming appropriate conditions for nutrient turnover. Thus, soil physical and chemical properties followed the decreasing order tropical rainforest > rubber rainforest > rubber-tea agroforestry > rubber monoculture. These findings revealed that degraded soil under rubber monoculture converted from tropical rainforest could be restored greatly by converting it into rubber rainforest and then to a climax stage of tropical rainforest after several years of natural succession. At the start of the rainwater infiltration experiment, preferential flow was the primary water flow behavior. However, both preferential and matrix flow occurred after the soil was saturated with infiltrated water. This co-occurrence of preferential and matrix flow was more prevalent in the rainy season due to the high frequency and amount of rainwater.
The preferential flow promoted soil water transport along the water flow paths (e.g., soil macropores) and enhanced soil water storage in soil pores. Therefore, the water supply capacity declined in the following order throughout the year: tropical rainforest > rubber rainforest > rubber-tea agroforestry > rubber monoculture. As a result, water stress occurred in RM but was alleviated in RTA and RR. Thus, the potential effects of water flow behaviors on soil properties and forest water conservation function should be considered when converting forests. These results are important for forest management decisions to recover tropical landscapes from degraded rubber plantations (pure and mixed) under a low rubber-latex demand scenario. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments The meteorological data were provided by Xishuangbanna Station for Tropical Rainforest Ecosystem Studies. Content determination of available nutrients in soil was accomplished by the Institutional Center for Shared Technologies and Facilities of XTBG, CAS. We also thank Mr. Liu MN for his help in the field and laboratory experiments. This research was supported by the National Natural Science Foundation of China (32271648, 32371608, 32101380, 32001221, 32001168, 32360367), the Yunnan Fundamental Research Projects (grant No. 202201AT070216, 202101AT070056, 202101AS070010, 202101AT070572, 202001AU070131), the 'Yunnan Revitalization Talent Support Program' in Yunnan Province, the Youth Innovation Promotion Association CAS (2018430), and the Chinese Academy of Sciences (CAS) "Light of West China" program.
|
[
"AHRENDS",
"ASSOULINE",
"BODHINAYAKE",
"BRINCK",
"BUNDT",
"CELIK",
"CEY",
"CHAUDHARY",
"CHEN",
"CHEN",
"CHITI",
"DEBRUYN",
"DIBBERN",
"FANG",
"FLURY",
"FORRER",
"FRAZAO",
"HEISKANEN",
"JARVIS",
"JIANG",
"JIANG",
"JIANG",
"JOSHI",
"LI",
"LI",
"LI",
"LIN",
"LIN",
"LIU",
"LIU",
"MAGGIOTTO",
"MANN",
"MU",
"OHALLORAN",
"PIERI",
"RAHMAN",
"REYNOLDS",
"REYNOLDS",
"ROSENBAUM",
"SAYER",
"SHEN",
"TAN",
"TERBRAAK",
"TILMAN",
"WANG",
"WANG",
"WEILER",
"WU",
"XU",
"ZAKARI",
"ZAMBELLI",
"ZHANG",
"ZHANG",
"ZHAO",
"ZHOU",
"ZHU"
] |
cf2c96f5a1254fe4bbb054a0185b1633_Engaging high and low burden countries in the TB end game_10.1016_j.ijid.2016.03.015.xml
|
Engaging high and low burden countries in the “TB end game”
|
[
"Marais, B.J.",
"Outhred, A.C.",
"Zumla, A."
] | null |
Tuberculosis (TB) is now the single biggest infectious disease killer in the world, surpassing malaria and HIV/AIDS. In 2014, there were an estimated 9.6 million incident TB cases and 1.5 million deaths. It is not widely appreciated that TB is also a major cause of disease and death in young children.1 New estimates from the World Health Organization (WHO) are that 1 million children developed TB during 2014.2,3 This is disconcerting because children have poor access to TB services in most resource-limited settings and paediatric cases provide an accurate reflection of uncontrolled TB transmission within communities. Although the cost-effective DOTS strategy helped to bring the global tuberculosis (TB) epidemic under control in many parts of the world, progress has been limited in areas affected by poverty, war and rising rates of drug resistant TB.1 The emergence and spread of multi-drug resistant (MDR)-TB pose a major threat to recent gains.4,5 It is estimated that nearly half a million (480 000) MDR-TB cases occurred in 2014, accounting for 3.3% of new and 20% of re-treatment TB cases. The highest MDR-TB case-loads exist in the Indian subcontinent, China, the Russian Federation and Southern Africa.1 For many years the epidemic potential of transmitted MDR-TB was ignored and the dogma that most MDR-TB cases acquire drug-resistance because of poor treatment adherence became firmly entrenched. The perception that drug resistant strains have reduced "fitness" and are unlikely to be transmitted had a major influence on TB control policy. It motivated a renewed focus on basic DOTS to stop the generation of MDR-TB cases; "turning off the tap" was considered an adequate public health response.1 The relative over-representation of MDR-TB among re-treatment cases is often used to support this dogma, although the majority of MDR-TB cases are now diagnosed among new cases.
Recent modelling data suggest that even among MDR-TB cases diagnosed at re-treatment, the majority represent transmitted (not acquired) MDR-TB disease.1 The description of multiple well-defined clonal MDR-TB outbreaks provides genotypic evidence of epidemic spread,6 as does the fact that ∼60% of Mongolian TB patients in whom first-line treatment failed were resistant to streptomycin, a drug to which they had never been exposed before.7,8 The high number of children with MDR-TB and the fact that child MDR-TB cases are consistently co-located with adult cases provide epidemiological proof of MDR-TB transmission within households and communities.9 A recent analysis of 100 paediatric specimens held in the strain library of the Chinese Centre for Disease Control and Prevention demonstrated high rates of drug-resistance: any drug resistance in 55% and MDR in 22%.2,10 It is important to ensure optimal basic TB program performance and to limit the generation of newly acquired drug resistance. However, if TB treatment and prevention programs focus exclusively on drug susceptible disease, uncontrolled MDR-TB transmission could lead to future epidemic replacement, where MDR-TB strains become more prevalent than drug-susceptible strains. The possibility of epidemic replacement is illustrated by parts of the Russian Federation where over 30% of newly diagnosed cases have MDR-TB.11 Sub-Saharan Africa represents the epicentre of human immunodeficiency virus (HIV) and TB co-infection. Swaziland reports TB/HIV co-infection rates exceeding 80%, with high rates of MDR-TB among co-infected patients.3 Since delayed MDR-TB diagnosis might facilitate transmission among immune-compromised patients, the occurrence of an rpoB Ile491Phe mutation that is not detected by the Xpert MTB/RIF® assay is particularly problematic.12 High and rising rates of MDR-TB have relevance beyond the worst affected areas, since TB does not respect national borders.
People are highly mobile and their mobility underpins global economic activity. Large scale population movements are also triggered by war and famine, with appeals for safe refuge. Interventions to screen for active TB and latent M. tuberculosis infection are compromised if prophylactic treatment options are ineffective in those harbouring MDR-TB strains. Current diagnostic tests are unable to identify latent infection with an MDR strain, or to detect a re-infection event after previous preventive therapy or TB treatment. There is an urgent need for improved epidemiological understanding of MDR-TB spread, guided by a better description of the evolution and transmission dynamics of drug-resistant M. tuberculosis strains. 1 The new “End TB strategy” The World Health Assembly approved the new End TB Strategy in May 2014. The End TB Strategy includes ambitious targets to reduce TB deaths by 95% and cut new cases by 90% from 2015 to 2035, and to ensure that no family is burdened with catastrophic expenses due to TB. It calls on all governments to demonstrate high-level political commitment by prioritizing efforts to end TB, backed by adequate resource allocation and inclusion of the most vulnerable sections of society. The main focus of the “End TB strategy” is to reduce global disease burdens, with the greatest gains to be made in high burden countries. The strategy does not include specific targets for low burden countries apart from encouragement to aim for TB pre-elimination, defined as an annual TB incidence of less than 1 case/100 000 population. The reality in most low burden countries is that TB is essentially an imported disease with minimal local transmission. Given its limited health impact, compared to things like obesity, diabetes and cardiovascular disease or cancer, it is difficult to maintain high-level engagement and justify continued domestic investment in TB control efforts. 
A new paradigm is required to engage low TB burden countries and add momentum to global TB control efforts.13 2 Engaging low burden countries A potential mechanism to encourage continued TB investment in low-burden countries is to create a pathway for formal recognition as being "TB transmission free". Achieving and maintaining a "TB transmission free" status could provide strong impetus for regional action in low burden areas, similar to the focus provided by the "Roll back Polio" campaign. Challenging low burden countries to aspire to this goal may galvanize national action and encourage the incorporation of cutting-edge molecular tools into routine TB control activities, together with the development of active response systems. Benefits of rapid advances in pathogen genomics and whole genome sequencing include simultaneous detection of drug-resistance mutations (allowing for earlier initiation of effective medications, thereby cutting transmission) and accurate identification of transmission clusters to guide outbreak investigation. It will allow TB control efforts to be at the forefront of the "genomic revolution", linking sophisticated strain and drug-resistance mutation analysis to enhanced patient care and better targeted public health responses.14 A policy of TB elimination that focuses exclusively on absolute case numbers, as defined in the WHO "Framework for TB elimination in low-incidence countries", raises practical and ethical challenges.15,16 Increasing the intensity and scope of screening programs for latent TB infection (LTBI) is clearly important as part of an overall TB elimination strategy, given the long latency periods experienced by some TB patients.17 However, careful consideration should be given to the strategies required to ensure safe and efficient implementation.18 Managing LTBI in vulnerable and disadvantaged groups will require new ways of working with local communities, social welfare organisations, and government departments.
No comprehensive analysis has been undertaken to explore the ethical, economic and social impacts of a policy shift towards TB elimination, intending to eliminate the “pool of latent infection” from which future cases may arise. Given high population mobility and significant re-infection risk, eradicating the “pool of latent infection” is not a feasible aim. Careful consideration should be given to the risk:benefit ratio of preventive therapy in individual patients, with clear benefit in young children and immune compromised patients. 18,19 However, there is a difficult ethical tension between the interests of low risk individuals with LTBI, who stand to benefit very little from preventive therapy, and potential societal benefits if the “pool of latent infection” is reduced. In “TB transmission free” settings, where local transmission is limited to an absolute minimum (<1 case of locally transmitted TB/100 000 population), the societal benefit derived from the LTBI treatment is minimal and ethically the patient's best interest becomes the sole determining factor. This provides additional motivation for countries to strive towards “TB transmission free” status. 20 3 Engaging high burden settings The stigma associated with TB, at the individual and community levels, is well characterized and presents a major hurdle to TB control activities in many high burden settings. However, an issue that is less often discussed or studied is the political stigma associated with TB. 20 Politicians in countries with rapidly growing economies aspire to be seen as progressive and making a contribution to rid their country of the “shackles of poverty”. Given TB's intimate association with poverty and deprivation there is reluctance to acknowledge the full extent of the TB disease burden, especially in settings where this remains stubbornly high. This may explain some of the discrepancies observed between notified cases, disease burden estimates and actual prevalence surveys. 
Re-assessment of Indonesia's estimated TB incidence, after a recent prevalence survey detected double the number of cases expected, now places Indonesia ahead of China as the country with the second highest number of TB cases, surpassed only by India. 21 Issues related to political stigma are compounded by rising rates of MDR-TB in many Asian countries, with pressure on TB control programmes to “solve the problem”, despite inadequate resource allocation. 1 Due to rapid economic growth many countries that were previously supported by the Global Fund no longer qualify. It is imperative that the Global Fund establishes a clear transition pathway to secure domestic funding streams (or other support mechanisms) that can sustain MDR-TB treatment programmes and prevent a recurrence of the setbacks suffered by MDR-TB treatment programmes in China when Global Fund support ended in 2015. Increased domestic resources could be secured through innovative health financing mechanisms, such as universal health insurance and social protection schemes. However, low income countries will continue to require external donor support. Major funding shortfalls demonstrate the need for greatly increased advocacy and strong regional political commitment. Innovative regional funding mechanisms should be explored that are dynamic and responsive to local circumstances, especially in the Asia-Pacific where economic growth has been strong and contributions to traditional funding mechanisms limited. 21 Dr. Margaret Chan, Director General of the WHO, made the following call when announcing the ambitious End TB Strategy: “Everyone with TB should have access to the innovative tools and services they need for rapid diagnosis, treatment and care. This is a matter of social justice, fundamental to our goal of universal health coverage. Given the prevalence of drug-resistant tuberculosis, ensuring high quality and complete care will also benefit global health security. 
I call for intensified global solidarity and action to ensure the success of this transformative End TB Strategy.” The real challenge is identifying the international “levers” that can translate these worthy ambitions into concerted action with strong contributions from high and low burden countries. 13 Conflicts of interests: Authors declare no conflicts of interest.
|
[
"JENKINS",
"GRAHAM",
"RAVIGLIONE",
"ABUBAKAR",
"KENDALL",
"MARAIS",
"CASALI",
"DOBLER",
"SCHAAF",
"JIAO",
"SANCHEZPADILLA",
"MARAIS",
"OUTHRED",
"OUTHRED",
"RANGAKA",
"HILL",
"MURRAY",
"ISLAM"
] |
782c6b0e149c447d9b27517555c0f3df_Mediation of the association between sleep disorders and cardiovascular disease by depressive sympto_10.1016_j.pmedr.2023.102183.xml
|
Mediation of the association between sleep disorders and cardiovascular disease by depressive symptoms: An analysis of the National Health and Nutrition Examination Survey (NHANES) 2017–2020
|
[
"Zhou, Wen",
"Sun, Lu",
"Zeng, Liang",
"Wan, Laisiqi"
] |
We aimed to investigate the role of depressive symptoms between sleep disorders and cardiovascular disease (CVD). Data used in this cross-sectional study were collected from the National Health and Nutrition Examination Survey (NHANES) database in the United States between 2017 and 2020. Univariate and multivariate logistic regression analyses were performed. Causal mediation analysis was conducted to investigate the role of depressive symptoms between sleep disorders and CVD. Subgroup analyses were performed in populations with diabetes, hypercholesteremia, and hypertension. A total of 5,173 participants were included, and 652 (12.6%) participants had CVD. Sleep disorders [odds ratio (OR) = 1.66; 95% confidence interval (CI), 1.35–2.03] and depressive symptoms (OR = 1.92; 95 %CI, 1.44–2.56) were associated with greater odds of CVD, and sleep disorders (OR = 3.87; 95 %CI, 3.09–4.84) were also related to greater odds of depressive symptoms after adjusting for confounders. Causal mediation analysis showed that the average direct effect (ADE) was 0.041 (95 %CI, 0.021–0.061; P < 0.001), the average causal mediation effect (ACME) was 0.007 (95 %CI, 0.003–0.012; P = 0.002), and 15.0% (0.150, 95 %CI, 0.055–0.316; P = 0.002) of the association of sleep disorders with CVD appeared to be mediated through depressive symptoms. Subgroup analyses indicated that the mediating effect of depressive symptoms on sleep disorders and CVD was also observed in populations with hypercholesterolemia or hypertension (all P < 0.05). Depressive symptoms may be a mediator in the relationship between sleep disorders and CVD. Improving depressive symptoms in patients may reduce the odds of CVD due to sleep disorders.
|
1 Introduction Cardiovascular disease (CVD) is the leading cause of death worldwide, with an estimated 55 million deaths in 2017, of which 17.8 million died from CVD ( GBD 2017 Causes of Death Collaborators, 2018, Yusuf et al., 2020 ). Known CVD risk factors include smoking, obesity, diabetes, hypertension, dyslipidemia, physical inactivity, and unhealthy diet ( Krist et al., 2020 ). Poor sleep quality has also been reported to play an important role in the development and progression of CVD ( Jackson et al., 2015, Tobaldini et al., 2019 ). A comprehensive understanding of these easily modifiable risk factors, such as sleep, is important for the prevention and control of CVD. Sleep disorder is an umbrella term for a series of sleep problems, mainly comprising insomnia, sleep-disordered breathing disorders, central disorders of hypersomnolence, circadian rhythm sleep-wake disorders, parasomnias, and sleep-related movement disorders ( Sateia, 2014 ). Previous studies have reported that sleep disorders are associated with an increased risk of CVD ( Sofi et al., 2014, Wang et al., 2021 ). Physiological mechanisms identified as linking sleep disorders to CVD include inflammation, autonomic nervous system dysfunction, and metabolic dysfunction ( Javaheri and Redline, 2017, Hall et al., 2018 ). However, psychological mechanisms between sleep disorders and CVD have rarely been reported. There is a bidirectional association between sleep disorders and depression, that is, sleep disorders are risk factors and are also symptoms of depression ( Steiger and Pawlowski 2019, Li et al., 2016 ). Depressive symptoms are also related to a higher risk of CVD development and progression ( Harshfield et al., 2020, Tobaldini et al., 2020 ). Given the relationship between sleep disorders, depressive symptoms, and CVD, depressive symptoms may play a role in the association between sleep disorders and CVD. 
However, the role of depressive symptoms in the relationship between sleep disorders and CVD has not been reported. (Abbreviations: CVD: Cardiovascular disease; NHANES: National Health and Nutrition Examination Survey; MEC: Mobile Examination Center; MCQ: Medical Conditions Questionnaires; SLQ: Sleep Questionnaire; PHQ-9: Patient Health Questionnaire 9; DIQ: Diabetes Questionnaire; BPQ: Blood Pressure & Cholesterol Questionnaire; PIR: poverty income ratio; BMI: body mass index; SD: standard deviation.) Herein, we hypothesized that depressive symptoms played a mediating role in the relationship between sleep disorders and CVD. The associations of sleep disorders with CVD, depressive symptoms with CVD, and sleep disorders with depressive symptoms were first analyzed. Then the role of depressive symptoms in the relationship between sleep disorders and CVD was explored. 2 Methods 2.1 Data source and study populations Data used in this cross-sectional study were extracted from the National Health and Nutrition Examination Survey (NHANES) database between 2017 and 2020. The NHANES database is a program to assess the health and nutritional status of adults and children in the United States ( Latib et al., 2012 ). Through stratified multistage probability sampling, NHANES recruits a nationally representative sample of approximately 5,000 civilians annually. The survey was completed by the trained study team consisting of a physician, medical and health technicians, and dietary and health interviewers. The selected participants first underwent a health interview at home. One to two weeks after the home interview, participants were asked to visit a Mobile Examination Center (MEC) to complete other interviews, examinations, and laboratory assessments. The inclusion criteria for participants in this study were as follows: (1) aged ≥18 years old; (2) with complete information on CVD, sleep disorders, and depressive symptoms. 
NHANES was conducted according to the Helsinki Declaration, and the protocols of NHANES were approved by the National Center for Health Statistics Research Ethics Review Board ( Latib et al., 2015 ). Because of the retrospective study design and de-identified data from the NHANES database, this study was exempted from ethical review by the Institutional Review Board of The Second Affiliated Hospital of Guangzhou University of Chinese Medicine. 2.2 Definition 2.2.1 CVD CVD was determined according to the Medical Conditions Questionnaires (MCQ), and included congestive heart failure, coronary heart disease, angina, heart attack, and stroke. Participants who answered “yes” to any of the following questions on the MCQ were considered to have CVD. (1) Congestive heart failure (MCQ160b): “Has a doctor or other health professional ever told you that you had congestive heart failure?”; (2) Coronary heart disease (MCQ160c): “Has a doctor or other health professional ever told you that you had coronary heart disease?” (3) Angina (MCQ160d): “Has a doctor or other health professional ever told you that you had angina, also called angina pectoris?” (4) Heart attack (MCQ160e): “Has a doctor or other health professional ever told you that you had a heart attack (also called myocardial infarction)?” (5) Stroke (MCQ160f): “Has a doctor or other health professional ever told you that you had a stroke?” 2.2.2 Sleep disorders Sleep disorders were determined based on the Sleep Questionnaire (SLQ) question SLQ060 “Have you ever told a doctor or other health professional that you have a sleep disorder”. Participants who answered “yes” were identified as having sleep disorders. 2.2.3 Depressive symptoms Depressive symptoms were assessed by the Patient Health Questionnaire 9 (PHQ-9) ( Kung et al., 2013, Kroenke et al., 2001 ), which was conducted during face-to-face MEC interviews. In this study, participants with PHQ-9 scores ≥10 were considered to have depressive symptoms. 
2.2.4 Diabetes Diabetes Questionnaire (DIQ) question DIQ010 “Have you ever been told by a doctor or health professional that you have diabetes or sugar diabetes?” was used to assess the diabetes status. Participants who answered “yes” to the DIQ010 question or had a glycated hemoglobin (HbA1c) ≥ 6.5% were considered as having diabetes ( Leong and Wheeler, 2018 ). 2.2.5 Hypercholesteremia Blood Pressure & Cholesterol Questionnaire (BPQ) question BPQ080 “Have you ever been told by a doctor or other health professional that your blood cholesterol level was high?” was utilized to evaluate the hypercholesteremia status. Participants who answered “yes” to the BPQ080 question or had a total cholesterol ≥240 mg/dL were considered to have hypercholesteremia ( Lee et al., 2019 ). 2.2.6 Hypertension BPQ question BPQ020 “Have you ever been told by a doctor or other health professional that you had hypertension, also called high blood pressure?” was used to assess the hypertension status. Participants who answered “yes” to the BPQ020 question or had a blood pressure ≥ 130/80 mmHg were considered as having hypertension ( Whelton et al., 2018 ). 2.3 Variable extraction The outcome variable was the CVD cases. Data including age, poverty income ratio (PIR), metabolic rate during sitting, PHQ-9 score, sleep duration on weekdays, sleep duration on weekend, average sleep duration, body mass index (BMI), gender (male, female), race/ethnicity (Mexican-American, other Hispanic, non-Hispanic white, non-Hispanic black, non-Hispanic Asian, others), education level (<9th grade, 9-11th grade, high school, some college, college or above, unknown), marital status (married, divorced, never married, others), smoking (yes, no, unknown), CVD in relatives (yes, no, unknown), depressive symptoms (yes, no), sleep disorders (yes, no), diabetes (yes, no, unknown), hypercholesteremia (yes, no, unknown), hypertension (yes, no, unknown), sleep-related medication, and CVD (yes, no) were collected. 
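The composite case definitions above (a self-reported diagnosis OR a laboratory/examination cutoff) can be sketched as simple classification rules. The function and argument names below are illustrative, not actual NHANES variable names:

```python
# Sketch of the study's composite case definitions: a participant is a case
# if they self-report a doctor's diagnosis OR exceed the stated cutoff.
# Function and argument names are illustrative, not NHANES variable names.

def has_diabetes(self_report: bool, hba1c_pct: float) -> bool:
    # DIQ010 answered "yes", or glycated hemoglobin (HbA1c) >= 6.5%
    return self_report or hba1c_pct >= 6.5

def has_hypercholesteremia(self_report: bool, total_chol_mg_dl: float) -> bool:
    # BPQ080 answered "yes", or total cholesterol >= 240 mg/dL
    return self_report or total_chol_mg_dl >= 240

def has_hypertension(self_report: bool, systolic: float, diastolic: float) -> bool:
    # BPQ020 answered "yes", or blood pressure >= 130/80 mmHg
    return self_report or systolic >= 130 or diastolic >= 80

def has_depressive_symptoms(phq9_total: int) -> bool:
    # PHQ-9 total score >= 10 defines depressive symptoms in this study
    return phq9_total >= 10
```

Note that each rule is an inclusive OR, so a participant with a normal laboratory value but a self-reported diagnosis still counts as a case.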
The sleep-related medication in this study included barbiturates, benzodiazepines, miscellaneous anxiolytics, sedatives, and hypnotics. 2.4 Statistical analysis Continuous variables with normal distribution were expressed as mean ± standard deviation (SD), continuous variables with non-normal distribution were described as median and interquartile range [M (Q1, Q3)], and the comparison between groups was analyzed by t -test or Kruskal-Wallis H test. Categorical variables were represented by number and percentage [n (%)], and the chi-square test (χ 2 ) or Fisher’s exact test was used for comparison between groups. Missing data were filled using multiple imputation, and the sensitivity analysis indicated that no statistical difference was observed between before and after imputation. Multivariate logistic regression analysis was used to assess the association between depressive symptoms, sleep disorders, and CVD. Causal mediation analysis can divide the total effect of outcomes into direct effect and indirect effect, and the indirect effect on the result is mediated by mediator variables ( Zhang et al., 2016 ). The analysis reported the average causal mediation effect (ACME), the average direct effect (ADE), and the total effect. In this study, depressive symptoms were used as a mediator variable, and the mediation effect of depressive symptoms between sleep disorders and CVD was analyzed. The bootstrapping method was used to calculate confidence intervals for the causal mediation effects. In addition, we further investigated the causal mediation effects of each PHQ-9 item between sleep disorders and CVD. Each item on the PHQ-9 was scored as 0 and ≥1, where 0 represents no symptoms and ≥1 represents symptoms. All statistical analyses were completed by R 4.0.3 software (Institute for Statistics and Mathematics, Vienna, Austria), and P < 0.05 was considered statistically significant. 
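As a numerical illustration of how this decomposition works (using the point estimates reported in the abstract), the total effect is the sum of ACME and ADE, and the proportion mediated is ACME divided by the total:

```python
# Effect decomposition in causal mediation analysis: total = ACME + ADE,
# proportion mediated = ACME / total. Point estimates from the abstract.
acme = 0.007          # average causal mediation effect (indirect, via depressive symptoms)
ade = 0.041           # average direct effect of sleep disorders on CVD
total = acme + ade    # matches the reported total effect of 0.048
prop_mediated = acme / total

print(round(total, 3))          # 0.048
print(round(prop_mediated, 2))  # 0.15, i.e. roughly 15% of the effect is mediated
```

The ratio computed from these rounded components (about 0.146) differs slightly from the reported 0.150, which comes from the unrounded estimates.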
3 Results 3.1 Characteristics of participants A total of 21,093 participants’ data were extracted from the NHANES database, and after excluding 11,400 participants aged <18 years, 8 participants with missing sleep disorders data, 4,468 participants with missing PHQ-9 data, and 44 participants with missing CVD data, 5,173 participants who met the criteria were included in this study ( Fig. 1 ). Table 1 shows the characteristics of all included participants. The median age and BMI of participants were 49.0 (32.0, 63.0) years and 29.1 (24.8, 34.5) kg/m 2 , respectively. The median PHQ-9 scores were 3.0 (2.0, 6.0), 2,324 (44.9%) participants were males, 2,714 (52.5%) participants were married. Of these participants, 652 (12.6%) participants had CVD, 548 (10.6%) participants had depressive symptoms, 1,844 (35.6%) participants had sleep disorders, 1,342 (25.9%) had diabetes, 1,852 (35.8%) participants had hypercholesteremia, and 1,999 (38.6%) participants had hypertension. The difference analysis between participants with or without sleep disorders indicated that except for PIR and sleep duration on weekdays, significant differences were observed between the two groups in all characteristics (all P < 0.05, Table 1 ). 3.2 Association between depressive symptoms, sleep disorders, and CVD Table 2 demonstrates the association of depressive symptoms with CVD, sleep disorders with CVD, and sleep disorders with depressive symptoms. When CVD was the outcome variable, the results indicated that sleep disorders (OR = 2.33; 95 %CI, 1.97–2.75) and depressive symptoms (OR = 2.21; 95 %CI, 1.77–2.76) were associated with greater odds of CVD. After adjusting for age, gender, race/ethnicity, metabolic rate during sitting, CVD in relatives, BMI, and sleep-related medication, sleep disorders (OR = 1.66; 95 %CI, 1.35–2.03) and depressive symptoms (OR = 1.92; 95 %CI, 1.44–2.56) were still related to higher odds of CVD. 
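Odds ratios of this kind come from exponentiating logistic-regression coefficients. A minimal sketch follows; the coefficient and standard error are hypothetical values back-calculated from the reported 1.66 (1.35–2.03) interval, not taken from the study's actual model output:

```python
import math

# Convert a logistic-regression coefficient (a log odds ratio) and its
# standard error into an OR with a 95% Wald confidence interval.
# beta and se here are hypothetical, chosen to roughly reproduce the
# reported adjusted OR for sleep disorders and CVD (1.66; 1.35-2.03).
def odds_ratio_with_ci(beta: float, se: float, z: float = 1.96):
    return (math.exp(beta),          # point estimate of the OR
            math.exp(beta - z * se), # lower 95% bound
            math.exp(beta + z * se)) # upper 95% bound

or_est, ci_low, ci_high = odds_ratio_with_ci(beta=0.507, se=0.104)
print(round(or_est, 2), round(ci_low, 2), round(ci_high, 2))  # 1.66 1.35 2.04
```

The small mismatch in the upper bound (2.04 vs the reported 2.03) reflects the hypothetical standard error; the point is the exp() transformation, not the exact digits.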
When depressive symptoms were the outcome variable, sleep disorders (OR = 4.24; 95 %CI, 3.52–5.12) were associated with higher odds of depressive symptoms. After adjusting for confounders, sleep disorders (OR = 3.87; 95 %CI, 3.09–4.84) were still related to greater odds of depressive symptoms. In addition, we further investigated the association between sleep disorders, depressive symptoms, and CVD in participants with and without sleep-related medication. In participants with sleep-related medication, sleep disorders (OR = 2.91; 95 %CI, 0.52–16.26) and depressive symptoms (OR = 1.35; 95 %CI, 0.45–4.08) may not be associated with odds of CVD after adjusting for confounders. Sleep disorders (OR = 4.72; 95 %CI, 0.91–24.48) may also not be associated with odds of depressive symptoms ( Supplement Table 1 ). The wider confidence intervals suggested that these results may have power issues and the results should be interpreted with caution. In participants without sleep-related medication, sleep disorders (OR = 1.54; 95 %CI, 1.17–2.02) and depressive symptoms (OR = 1.95; 95 %CI, 1.34–2.84) were related to greater odds of CVD after adjusting for confounders. Moreover, sleep disorders (OR = 3.60; 95 %CI, 2.69–4.83) were also related to higher odds of depressive symptoms ( Supplement Table 2 ). 3.3 Causal mediation analysis Table 3 and Fig. 2 show the detailed results of the causal mediation analysis. The results suggested that both direct and indirect effects played an important role in increasing the odds of CVD due to sleep disorders. The total effect was 0.048 (95 %CI, 0.027–0.067; P < 0.001), the ACME was 0.007 (95 %CI, 0.003–0.012; P = 0.002), the ADE was 0.041 (95 %CI, 0.021–0.061; P < 0.001), and the proportion of the effect mediated was 0.150 (95 %CI, 0.055–0.316; P = 0.002). 3.4 Subgroup analyses based on different items of the PHQ score We further analyzed the association between each item of the PHQ score and sleep disorders and CVD. 
The results demonstrated that sleep disorders were associated with greater odds of depressive symptoms in each item of the PHQ score (OR ranged from 1.35 to 3.43, all P < 0.05), and each item of the PHQ score was also associated with greater odds of CVD (OR ranged from 1.33 to 1.83, all P < 0.05) ( Table 4 ). The causal mediation analysis of each PHQ item between sleep disorders and CVD is shown in Table 5 . Causal mediating effects were observed in all items except items 3 and 6. Items 2 and 8 both had the highest ACME value of 0.004, and the proportions of their effects mediated were 0.089 (95 %CI, 0.029–0.202) and 0.094 (95 %CI, 0.038–0.205), respectively. 3.5 Subgroup analyses based on different populations Diabetes, hypercholesterolemia, and hypertension are commonly comorbid with CVD ( Einarson et al., 2018, Chapman and Sposito 2008 ). Subgroup analyses were conducted in populations with diabetes, hypercholesteremia, and hypertension, respectively. Among populations with or without diabetes, sleep disorders and depressive symptoms were associated with greater odds of CVD, and sleep disorders were also related to higher odds of depressive symptoms ( Supplement Tables 3 and 4 ). The causal mediation analysis demonstrated that in populations with diabetes, the total effect was 0.085 (95 %CI, 0.038–0.134; P < 0.001), and the ACME was 0.005 (95 %CI, −0.002 to 0.014; P = 0.158) ( Supplement Table 5 ). In populations without diabetes, the total effect was 0.029 (95 %CI, 0.008–0.050; P = 0.008), and the ACME was 0.007 (95 %CI, 0.002–0.013; P = 0.002) ( Supplement Table 6 ). In populations with hypercholesterolemia, sleep disorders and depressive symptoms were also related to higher odds of CVD, and sleep disorders were also associated with greater odds of depressive symptoms ( Supplement Table 7 ). In contrast, no associations were observed in populations without hypercholesterolemia ( Supplement Table 8 ). 
The causal mediation analysis displayed that in populations with hypercholesterolemia, the total effect was 0.079 (95 %CI, 0.044–0.116; P < 0.001), and the ACME was 0.011 (95 %CI, 0.005–0.020; P = 0.002) ( Supplement Table 9 ). The associations between sleep disorders, depressive symptoms, and CVD were statistically significant in populations with hypertension ( Supplement Table 10 ), whereas no association was observed between sleep disorders and CVD in populations without hypertension ( Supplement Table 11 ). The causal mediation analysis showed that in populations with hypertension, the total effect was 0.070 (95 %CI, 0.040–0.099; P < 0.001), the ACME was 0.008 (95 %CI, 0.001–0.015; P = 0.012), and the proportion of the effect mediated was 0.114 (95 %CI, 0.021–0.265; P = 0.012) ( Supplement Table 12 ). 4 Discussion This study analyzed the association between depressive symptoms, sleep disorders, and CVD. The results showed that depressive symptoms and sleep disorders were independently associated with CVD, and sleep disorders were associated with greater odds of depressive symptoms. Depressive symptoms may play a mediating role in CVD caused by sleep disorders, and 15.0% of the association of sleep disorders with CVD appeared to be mediated through depressive symptoms. In addition, the mediating effect of depressive symptoms on sleep disorders and CVD was also observed in populations with hypercholesterolemia and hypertension, but not in populations without hypercholesterolemia or hypertension, and with or without diabetes. Sleep disorders include insomnia, sleep-disordered breathing disorders, central disorders of hypersomnolence, etc. ( Sateia. 2014 ), and are associated with an increased risk of CVD ( Wang et al., 2021, Parati et al., 2016 ). Our study indicated that populations with sleep disorders had 1.60 times higher CVD risk than those without sleep disorders. 
Several physiological mechanisms have been identified linking sleep disorders to CVD, including inflammation, autonomic nervous system dysfunction, and metabolic dysfunction ( Hall et al., 2018, Javaheri and Redline, 2017 ). Inflammation plays an important role in the development and progression of CVD ( Sorriento and Iaccarino, 2019 ), and sleep disorders are related to elevated levels of inflammatory cytokines interleukin-6 (IL-6) and C-reactive protein ( Morris et al., 2016, Fernandez-Mendoza et al., 2017 ). Autonomic nervous system activity in the form of reduced parasympathetic activity and increased sympathetic activity has been demonstrated to be a risk factor for CVD ( Hillebrand et al., 2013 ), while sleep disorders can reduce parasympathetic activity and increase sympathetic activity ( Tamisier et al., 2018 ). Chronic metabolic dysfunction in the form of insulin resistance and impaired glucose tolerance is a major risk factor for CVD ( Cai et al., 2020 ). Sleep disorders may lead to impaired glucose tolerance and reduced insulin sensitivity ( Buxton et al., 2012, Scheer et al., 2009 ). Our study demonstrated that depressive symptoms may play a mediating role in the relationship between sleep disorders and CVD. A systematic review of the link between sleep and CVD proposed a hypothesis that psychosocial factors may act as mediators or moderators of the relationship between sleep and CVD, or may influence CVD through their upstream effects on sleep ( Hall et al., 2018 ). Our results may confirm the hypothesis that depressive symptoms may be a mediator in the relationship between sleep disorders and CVD. In our results, sleep disorders and depressive symptoms were related to an increased risk of CVD, which was also consistent with previous studies ( Wang et al., 2021, Parati et al., 2016, Harshfield et al., 2020, Tobaldini et al., 2020 ). We also showed that sleep disorders were associated with a higher risk of depressive symptoms. 
Then the causal mediation analysis indicated that depressive symptoms may be a mediator in the relationship between sleep disorders and CVD, and 15.0% of the relationship of sleep disorders with CVD appeared to be mediated by depressive symptoms. Subgroup analyses revealed differences in the mediating effects of different items of the PHQ-9 score on sleep disorders and CVD. In addition, the mediating effect of depressive symptoms on sleep disorders and CVD was only observed in populations with hypercholesterolemia or hypertension. These results suggested that improving depressive symptoms in patients may reduce the odds of CVD due to sleep disorders. Furthermore, sleep disorders are an umbrella term for a range of sleep problems, and the relationship between specific types of sleep disorders, depression, and CVD needs to be further explored to derive specific clinical improvements. Recognizing sleep disorders and their association with patient-related outcomes remains a major challenge, and educating clinical healthcare staff and providers about the potential impact of sleep disorders on disease could improve clinical practice. In this study, we provide the first evidence for the mediating effect of depressive symptoms between sleep disorders and CVD. Furthermore, we conducted subgroup analyses in populations with diabetes, hypercholesterolemia, and hypertension, respectively. However, several limitations should be considered. First, this was a cross-sectional study and could not confirm the causal nature of the mediating effect of depressive symptoms between sleep disorders and CVD; stronger evidence may require prospective cohort studies. Second, CVD and sleep disorder outcomes in NHANES were based on participants’ self-reported data, which reflected only past disease status and lacked a clinical diagnosis at the time of the interview; this may have influenced the results. 
Third, both sleep disorders and CVD are heterogeneous, and different types of sleep disorders and different types of CVD may differ in their effects, mechanisms, and treatments. Therefore, the use of composite variables for sleep disorders and CVD precluded more definitive conclusions. Future studies may require a more fine-grained analysis of the relationship between sleep disorders and CVD. Fourth, some factors that have an impact on CVD, such as alcohol consumption, were not included in the analysis due to a large number of missing values of relevant variables in the NHANES database, which may affect the results. In conclusion, depressive symptoms and sleep disorders were independently associated with CVD, and sleep disorders were also related to greater risk of depressive symptoms. Depressive symptoms may play a mediating role in the relationship between sleep disorders and CVD. The mediating effect of depressive symptoms on sleep disorders and CVD was also observed in populations with hypercholesterolemia or hypertension. Improving depressive symptoms in patients may reduce the odds of CVD due to sleep disorders. CRediT authorship contribution statement Wen Zhou: Conceptualization, Methodology, Writing – original draft. Lu Sun: Supervision, Writing – review & editing. Liang Zeng: Data curation, Formal analysis, Visualization, Software. Laisiqi Wan: Software, Data curation, Investigation, Validation. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements None. Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.pmedr.2023.102183 . 
Appendix A Supplementary data The following are the Supplementary data to this article: Supplementary data 1
|
[
"BUXTON",
"CAI",
"CHAPMAN",
"EINARSON",
"FERNANDEZMENDOZA",
"GBDCAUSESOFDEATHCOLLABORATORS",
"HALL",
"HARSHFIELD",
"HILLEBRAND",
"JACKSON",
"JAVAHERI",
"KRIST",
"KROENKE",
"KUNG",
"LATIB",
"LATIB",
"LEE",
"LEONG",
"LI",
"MORRIS",
"PARATI",
"SATEIA",
"SCHEER",
"SOFI",
"SORRIENTO",
"STEIGER",
"TAMISIER",
"TOBALDINI",
"TOBALDINI",
"WANG",
"WHELTON",
"YUSUF",
"ZHANG"
] |
7aab0912a91d48d29d09efc4301d0542_Achieving grid resilience through energy storage and model reference adaptive control for effective _10.1016_j.ecmx.2024.100533.xml
|
Achieving grid resilience through energy storage and model reference adaptive control for effective active power voltage regulation
|
[
"Jarosz, Anna"
] |
This article presents a comprehensive examination of the utilization of energy storage units for voltage regulation in grids. Specifically, the focus is on the practical implementation of active power control using a Model Reference Adaptive Control (MRAC) algorithm. The article provides a detailed description of the algorithm, considering grid parameters and showcasing the practical application of voltage regulation through energy storage active power control using MRAC. The results of implementing an energy storage unit for global voltage regulation are discussed, highlighting the advantages and superiority of this method.
|
Introduction Voltage regulation in the distribution grid becomes increasingly complex and challenging as the grid evolves into a more decentralized and dynamic structure [1] . The integration of renewable energy sources and the fluctuating nature of power generation pose significant challenges in maintaining voltage stability [28] . Energy storage technologies and sophisticated control methods have emerged as viable solutions to address these challenges. This article delves into the investigation of how energy storage systems, specifically those utilizing Model Reference Adaptive Control (MRAC), can effectively regulate voltage. MRAC presents a flexible and adaptive control technique that dynamically adjusts controller parameters to ensure grid stability, minimize voltage deviations, and facilitate efficient power distribution [52] . This article explores the potential of deploying energy storage systems within distribution grids to enhance voltage regulation and mitigate voltage fluctuations. The study also aims to evaluate the effectiveness of different aspects such as storage system capacity, optimal location, and control strategies in achieving voltage regulation objectives. By shedding light on the benefits and challenges associated with employing Model Reference Adaptive Control in energy storage systems for voltage regulation, this article provides an extensive understanding of its effectiveness and potential applications. Addressing this research gap is crucial as it can pave the way for the development and implementation of more efficient and reliable voltage regulation strategies using energy storage units. This research hypothesizes that an energy storage system integrated with MRAC can effectively regulate voltage in distribution grids, resulting in reduced voltage deviations and improved grid stability. 
Through comprehensive analysis and experimentation, this article aims to validate this hypothesis and provide valuable insights into the potential of employing MRAC in grids for voltage regulation. Literature review Control mechanisms in smart grids enable the optimization of energy consumption. Through real-time monitoring and control, smart grid systems can adjust energy generation, distribution, and consumption to match demand and minimize wastage. Control systems play a crucial role in maintaining grid stability and reliability. They help regulate voltage levels, frequency, and power flow to ensure a stable and balanced operation of the grid [7] . By continuously monitoring and adjusting grid parameters, control mechanisms can prevent disruptions and blackouts. The article [6] focuses on the development of an Energy Management System for a stand-alone droop-controlled microgrid, optimizing generator outputs in real-time using MRAC. The study [71] addresses the challenges of integrating distributed photovoltaic plants into distribution systems and proposes a coordinated control method for distributed energy storage systems to regulate voltage. The article [46] focuses on finding the optimal location and size of battery energy storage systems (BESSs) for voltage regulation in distribution networks, using a multi-objective optimization approach. The study [70] proposes using battery energy storage systems to mitigate voltage rise and drop in low-voltage distribution networks with high penetration of photovoltaic resources, employing a coordinated control approach. The paper [65] investigates the impact of integrating storage devices with PV sources on feeder voltages and proposes a coordinated control method for energy storage systems to regulate voltages within required limits. The article [37] presents an enhanced resilient control strategy that formulates load feeder voltage regulation and power balancing as a quadratic optimization problem. 
The proposed control strategy includes an impedance estimator and an optimal controller, simplifying communication requirements and reducing computational burden in micro-networks. The study [8] highlights the significant growth in the deployment of large-scale energy storage systems and emphasizes the need for energy management systems and optimization methods. It provides an overview of EMS architectures and applications for storage, serving as a foundation for understanding voltage regulation through active power from energy storage. The paper [55] proposes a smart microgrid energy management system that integrates time-of-use pricing principles, real-time electricity consumption comparison, and load distribution optimization. The system aims to reduce energy consumption and costs while ensuring optimal comfort levels, considering constraints related to battery charging/discharging and decentralized power generation. The article [2] addresses the challenges faced by low-voltage distribution grids due to the increased adoption of decentralized renewable energy generation. It introduces a multi-agent system that incorporates the energy management systems of smart buildings, a central grid controller, and a local transformer controller. The system enables the coordination of ancillary services provision in both centralized and decentralized ways, ensuring resilience against electricity outages and communication failures. The study [50] suggests the use of Model Reference Adaptive Control (MRAC) for improving reactive power flow in electrical grids that integrate wind-turbine-driven self-excited induction generators. It focuses on enhancing low-voltage ride-through efficiency by addressing atypical operating conditions of wind energy conversion systems. In contrast, the paper [33] introduces a Direct Model Reference Adaptive Control (DMRAC) algorithm in a boost converter utilized in islanded grids with a photovoltaic system to stabilize output voltage changes.
It emphasizes the various conditions these current-mode controllers must satisfy, highlighting the complexities involved in regulating output voltage. Additionally, the paper [57] presents a novel robust model reference adaptive maximum power point tracking controller for PV systems under various cases, focusing on maximizing transient response by considering the dynamics of voltage changes between the photovoltaic elements. The research [68] describes the design and implementation of a controller for a three-phase boost rectifier using the MRAC method to regulate the rectifier's output voltage while preserving a unity power factor. In contrast, the article [63] discusses the use of fixed and adaptive pole placement control techniques for a boost converter's output voltage and the implementation of maximum power point tracking using the MIT rule. Furthermore, the findings of the study [31] support the use of a composite MRAC scheme to reduce model mismatch caused by parametric uncertainty in DC-DC boost converters. It also introduces a cascade PI model predictive controller to enhance the system's dynamic response. The literature review emphasizes the potential of MRAC-based voltage control solutions using active power from energy storage devices and highlights the need for more research explicitly focusing on applying model reference control for voltage regulation using energy storage's active power. It also emphasizes the importance of further analysis and testing to investigate the applicability and scalability of these methods in actual grid systems. Model Reference Adaptive Control is a control strategy that uses a reference model to estimate the desired active power output and adjusts control parameters to minimize the error between the reference and actual power values. Several variables, including system features, control objectives, and performance criteria, influence the selection of a control strategy [29] .
While MRAC-based control benefits voltage regulation, other techniques such as PI control [23] , neural networks [18,62] , model predictive control [64] , or fuzzy logic control [34] may also be appropriate depending on the application and system needs. MRAC is simple to tune and customize to meet specific system objectives. The control gains and reference model can be changed to obtain the desired voltage regulation performance [36] . With this flexibility, the control system can be adjusted and improved to match the voltage needs of the power system. MRAC also handles parameter uncertainties and disturbances [21] . Variable demand conditions, the incorporation of renewable energy sources, and grid disturbances can make voltage regulation in power grids difficult [51,60] . MRAC's capacity to adapt and change control parameters online increases the system's robustness and sustains accurate voltage regulation. Based on the discrepancy between the reference and actual power values, MRAC is an adaptive control system that continuously modifies control settings [13] . This adaptability makes the control system suitable for voltage regulation in dynamic power systems because it can react to changing system conditions and uncertainties [27,10] . MRAC calculates the desired active power output using a reference model. This model-based technique, which precisely predicts system behavior, is essential for efficient voltage management [32] . MRAC can take system dynamics into account to improve control performance and stability. The method described in article [43] utilizes a power semiconductor device (PSD)-based bidirectional three-phase inverter module and an energy storage unit for power system management and compensation. It integrates control functions and algorithms through a modularized all-digital control scheme to improve system cost, reliability, efficiency, and flexibility.
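As a concrete illustration of the MRAC adaptation principle described above, the following minimal sketch simulates the classic MIT rule on an assumed first-order plant. The plant, reference model, and all numeric values are illustrative stand-ins, not parameters from this study.

```python
# Minimal MRAC sketch using the MIT rule (all values illustrative).
# Plant:      dy/dt  = -a*y   + b*u        (a, b unknown to the controller)
# Reference:  dym/dt = -am*ym + am*r       (desired first-order response)
# Control:    u = theta * r                (feedforward gain adapted online)
# MIT rule:   dtheta/dt = -gamma * e * ym, where e = y - ym

dt, gamma = 0.001, 5.0   # integration step [s], adaptation gain
a, b = 1.0, 2.0          # true (unknown) plant parameters
am = 3.0                 # reference-model pole
r = 1.0                  # constant reference, e.g. 1.0 p.u.
y, ym, theta = 0.0, 0.0, 0.0

for _ in range(20000):   # 20 s of simulated time
    u = theta * r                      # control action
    e = y - ym                         # model-following error
    y += dt * (-a * y + b * u)         # plant step (forward Euler)
    ym += dt * (-am * ym + am * r)     # reference-model step
    theta += dt * (-gamma * e * ym)    # MIT-rule gain update

tracking_error = abs(y - ym)           # should be small after adaptation
```

For this plant the ideal gain is theta = a/b = 0.5, and the simulated loop settles close to it, showing how online adaptation alone drives the plant onto the reference trajectory.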
In contrast, MRAC adjusts control parameters based on a reference model, resulting in enhanced performance and stability. While both methods address power management and compensation, they differ in their approaches, with the method in [43] introducing complexity in system design and implementation, requiring specialized components and algorithms. Conversely, the approach described in the paper [48] focuses on the control approaches in DC microgrids, particularly in the context of multiple sources and voltage support. It reviews primary and secondary control methods, emphasizing the adaptability of MRAC to account for system uncertainties and changes in operating conditions, enabling improved performance and stability. Furthermore, in the article [17] , Particle Swarm Optimization is discussed as a method to optimize the regulating factors of energy storage systems for voltage regulation in the grid. It is observed that PSO may experience slower convergence compared to MRAC, which continuously adapts control parameters based on a reference model, providing faster response and adaptation to changing system conditions. The implementation of AI control in the proposed energy storage system further enhances its economic viability. By utilizing advanced control algorithms and artificial neural network algorithms, the system can optimize its round-trip efficiency and total cost rate, leading to improved energy and economic performance. This integration of AI control not only enhances the system's technical capabilities but also contributes to its overall economic sustainability. The article [3] proposes an energy storage system that combines compressed air energy storage with solar heliostat and multi-effect thermal vapor compression desalination units, aiming to produce both power and potable water. It utilizes low-price electricity during off-peak times for compressed air storage and employs waste heat recovery for freshwater production. 
Energy, exergy, and exergoeconomic analyses are performed, and a genetic algorithm is used for multi-objective optimization, showing economic viability with a calculated payback period of 2.65 years. In contrast, the paper [40] discusses the integration of deep learning models in energy storage systems to automate operational tasks, reduce manual intervention, and improve system reliability. Deep learning models can identify faults, optimize energy storage operations, improve grid stability, and enhance overall system efficiency, leading to cost savings by avoiding peak demand charges, reducing energy waste, and optimizing energy usage. Additionally, the article [53] proposes a strategy that uses a genetic algorithm for multi-objective optimization to minimize the cost of battery capacity loss by optimizing power allocation and considering the state of charge of the supercapacitor. Deep reinforcement learning allows the system to adapt to and learn from real-world driving conditions, optimizing power allocation based on vehicle acceleration and supercapacitor state of charge, resulting in improved energy efficiency and reduced operational costs. Based on the literature review, Model Reference Adaptive Control faces specific limitations in maintaining stable grid parameter regulation:

1. The integration of renewable energy sources, such as solar and wind, can introduce grid instability due to their variable and intermittent nature [46] . This variability poses challenges for MRAC in maintaining stable voltage regulation, as the control system must adapt to changing power generation conditions.
2. MRAC may have limitations in the control range it can effectively handle [37] . Where there are significant fluctuations in power generation or demand, the adaptive control algorithm may struggle to maintain precise voltage regulation within the desired range.
3. MRAC is a sophisticated control algorithm that requires accurate modeling and parameter tuning [55] . The complexity of implementing and fine-tuning the control algorithm may pose challenges in practical applications, especially when integrating multiple energy storage systems and renewable energy sources.
4. In a distributed grid environment, effective communication and coordination among various control systems are crucial [20] . Ensuring seamless integration and coordination between the MRAC system and other control systems, such as those managing renewable energy sources or other grid devices, can be challenging and may affect the overall performance of voltage regulation.
5. MRAC relies on adaptive algorithms to adjust control parameters based on system dynamics [33] . Adaptation speed can be a limitation, as the algorithm must respond quickly to changes in power generation or demand to keep the voltage within acceptable limits.

Methodology

Voltage regulation plays a crucial role in maintaining the stability and reliability of power grids. One approach to voltage regulation uses an energy storage unit that can inject or absorb active power to balance the grid voltage [24,69] . Model Reference Adaptive Control is a powerful control strategy that can be applied to energy storage systems performing voltage regulation [48] . Fig. 1 and the following text outline the methodology of voltage regulation by energy storage based on MRAC. The first step in implementing MRAC-based voltage regulation is to develop an accurate power system model. This model should capture the dynamics and characteristics of the grid, including the energy storage unit, power sources, and loads [35,19] . The accuracy of the model directly impacts the performance of the MRAC control strategy.
The power system model can be developed using mathematical equations and simulation tools, considering factors such as the impedance of the grid, the characteristics of the energy storage unit, and the behavior of the photovoltaic installations. In MRAC, the reference model is designed to represent the desired behavior of the power system. It provides a target for the control system to track and regulate the voltage. The design of the reference model is based on the required voltage profile and system requirements [45] . The reference model can be designed using mathematical equations that describe the desired voltage behavior, taking into account factors such as voltage limits, response time, and stability criteria. MRAC utilizes adaptive control techniques to continuously adjust the control parameters based on the error between the reference model and the actual system response [67] . These regulating factors are updated online to ensure accurate and efficient voltage regulation. Adaptive control algorithms, such as the Model Reference Adaptive Control algorithm, use feedback from the system to estimate and adapt the control parameters [66] . The control parameters are adjusted based on the error between the measured voltage and the reference model, allowing the control system to adapt to changes in the system dynamics and maintain voltage regulation. Energy storage units, such as batteries or capacitors, play a crucial role in controlling the active power injected into or absorbed from the grid. By adjusting the active power output, the energy storage unit can regulate the voltage within the desired range [26] . The MRAC control strategy ensures that the output of the energy storage system tracks the reference model, achieving the desired voltage regulation. 
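The article does not fix a functional form for the reference model, so as a hedged illustration the sketch below uses a first-order lag toward a 1.0 p.u. setpoint, with the time constant derived from an assumed settling-time requirement. All numbers are illustrative assumptions.

```python
# Assumed first-order reference model for the desired node voltage:
# dv_ref/dt = (V_SET - v_ref) / TAU, discretized with forward Euler.
# The time constant follows from a desired 2% settling time (ts ~ 4*tau).

V_SET = 1.0              # voltage setpoint, p.u.
TS = 2.0                 # assumed desired settling time, s
TAU = TS / 4.0           # resulting first-order time constant, s
DT = 0.01                # simulation step, s

def reference_step(v_ref, setpoint=V_SET):
    """Advance the reference model one time step toward the setpoint."""
    return v_ref + DT * (setpoint - v_ref) / TAU

# Desired recovery trajectory starting from a sagged voltage of 0.92 p.u.
v = 0.92
trajectory = []
for _ in range(int(5.0 / DT)):       # 5 s horizon
    v = reference_step(v)
    trajectory.append(v)

settled = trajectory[int(TS / DT) - 1]   # reference value at the settling time
```

The controller then only has to track this trajectory; the shape of the recovery (no overshoot, chosen settling time) is encoded entirely in the reference model rather than in the plant-side tuning.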
The control algorithm calculates the appropriate active power output of the energy storage unit based on the error between the measured voltage and the reference model, allowing for precise control of the voltage. MRAC operates in a closed-loop control system, where measurements of the grid voltage are fed back to the control algorithm. Based on the error between the measured voltage and the reference model, the control algorithm calculates the appropriate control actions [5] . This feedback loop enables continuous monitoring and adjustment of the active power output of the energy storage unit to maintain the desired voltage level. The control algorithm compares the measured voltage with the reference model and adjusts the control parameters to minimize the error, ensuring accurate voltage regulation. After implementing the MRAC-based voltage regulation, the performance of the control system is evaluated. Metrics such as voltage deviation, response time, and control effort are used to assess the effectiveness of voltage regulation [47] . The regulating factors are fine-tuned based on the evaluation results to optimize the performance of the MRAC control strategy. Fine-tuning involves adjusting the control parameters to improve the system's response and minimize voltage deviations [59] . Techniques such as optimization algorithms and system identification methods can be used to fine-tune the regulating factors. In this case study, we highlighted the use of MRAC for voltage regulation in a microgrid with fluctuating photovoltaic generation. By utilizing an energy storage unit and an accurate power system model, MRAC effectively regulates voltage levels within the desired range [42,30] . The adaptive control techniques and closed-loop control system ensure accurate and efficient voltage regulation. Through performance evaluation and parameter fine-tuning, the MRAC control strategy can be optimized for enhanced voltage regulation in similar microgrid scenarios. 
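The evaluation step can be illustrated as follows. The metric definitions (maximum deviation from nominal, a 2% settling band) are common conventions rather than the article's own, and the voltage trace is synthetic, not a result from this study.

```python
# Illustrative post-run evaluation: voltage deviation and settling time
# computed from a recorded node-voltage trace (synthetic data below).

def max_deviation(trace, nominal=1.0):
    """Largest absolute deviation from the nominal voltage, in p.u."""
    return max(abs(v - nominal) for v in trace)

def settling_time(trace, dt, nominal=1.0, band=0.02):
    """Time after which the voltage stays within +/- band of nominal."""
    for k in range(len(trace)):
        if all(abs(v - nominal) <= band for v in trace[k:]):
            return k * dt
    return float("inf")

# Synthetic trace: a sag to 0.9 p.u. recovering exponentially.
dt = 0.1
trace = [1.0 - 0.1 * (0.5 ** (k * dt / 0.5)) for k in range(100)]

dev = max_deviation(trace)      # worst-case deviation over the run
ts = settling_time(trace, dt)   # time to re-enter the 2% band for good
```

Metrics like these make the fine-tuning loop concrete: a parameter change is kept only if it reduces the deviation or shortens the settling time without destabilizing the response.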
Step-by-step control methodology

One way to use the Model Reference Adaptive Control algorithm to determine the amount of active power required to compensate for a voltage deviation is to use an adaptive voltage regulator [58] . Such a device may adjust the active power output of the energy storage unit based on the difference between the reference voltage and the measured voltage [14] . The selection of parameter values in the Model Reference Adaptive Control algorithm for voltage regulation in grids using active power from energy storage depends on various factors, including the specific system dynamics, control objectives, and performance requirements [11,44] . While the optimal values may vary with the application, the proposed initial values for parameters such as the learning rate (α), adaptation gain (β), and reference model parameters (θ) are all 1. The learning rate determines the step size at which the algorithm updates the parameters based on the gradient information. A moderate learning rate can be considered initially, allowing a balance between convergence speed and stability. The adaptation gain controls the rate at which the algorithm adapts the parameters based on the error between the desired and actual voltage values [22] . A value of 1 can be a starting point, but it may need to be adjusted based on the specific system dynamics and control objectives. Higher adaptation gains lead to faster parameter updates, but there is a trade-off with stability. The reference model represents the desired behavior of the voltage regulation system. The parameters of the reference model, such as response time, overshoot, and settling time, can be adjusted based on the specific requirements of the grid. These parameters should be chosen to achieve the desired voltage regulation performance while considering the system dynamics and control objectives.
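A minimal skeleton of such an adaptive voltage regulator, initialized with the proposed values α = β = 1 and reference-model parameter θ = 1, is sketched below. The gradient-style update law is an assumed stand-in chosen for illustration, not the article's exact law.

```python
# Skeleton of an adaptive voltage regulator with the proposed initial
# parameter values (alpha = beta = theta = 1). The update rule is a
# simple gradient-style law, shown only to illustrate how the learning
# rate and adaptation gain enter the adaptation.

class AdaptiveVoltageRegulator:
    def __init__(self, alpha=1.0, beta=1.0, theta=1.0, u_ref=1.0):
        self.alpha = alpha      # learning rate (step size)
        self.beta = beta        # adaptation gain (update speed)
        self.theta = theta      # adapted reference-model parameter
        self.u_ref = u_ref      # reference voltage, p.u.

    def step(self, u_node):
        """Return an active-power command and adapt theta from the error."""
        error = self.u_ref - u_node                   # voltage error
        self.theta += self.alpha * self.beta * error  # gradient-style update
        return self.theta * error                     # active-power command

reg = AdaptiveVoltageRegulator()
p1 = reg.step(0.95)   # undervoltage -> positive command (inject power)
p2 = reg.step(1.05)   # overvoltage  -> negative command (absorb power)
```

The sign convention follows the text: an undervoltage produces a positive injection command, an overvoltage a negative (absorbing) one, and raising beta speeds up the theta updates at the cost of stability margin.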
We can use an adaptive control mechanism based on an online learning algorithm that incorporates the voltage at the considered node and other grid parameters to calculate K_adapt(t) [15] . Let U_node(t) represent the voltage at the considered node. The control input (active power) is calculated with formula (1):

(1) P_control(t) = α × K_adapt(t) × ΔU(t) × U_node(t) × X(t)

Here, P_control(t) is the control input representing the active power injected into the grid at time t, and e(t) = U_ref(t) − U_node(t) is the error signal representing the difference between the reference voltage U_ref(t) and the actual voltage U_node(t) at the considered node.

Calculation of K_adapt

The adaptive gain K_adapt in the Model Reference Adaptive Control [66] for regulating node voltages in the grid with energy storage active power is calculated with Recursive Least Squares (RLS) [41] . The RLS algorithm is an online learning approach used to adaptively estimate the parameters of a linear model [25] . The procedure involves the recursive updating of a weight vector that maps the inputs to the system's outputs. The steps to calculate K_adapt using RLS are:

1. Set K_adapt(0) to an initial value, usually a small positive scalar [61] .
2. Generate an observation vector z(t) at time t by concatenating the error signal e(t), the voltage U_node(t) at the considered grid node, and the vector of state variables X(t) [27] . The observation vector z(t) is calculated by formula (2):

(2) z(t) = [ΔU(t), U_node(t), X(t)]^T

3. Calculate the filter gain with formula (3), where the adaptation gain β = P_demand(t−1) × z(t) is incorporated into the equation:

(3) K_adapt(t) = P_demand(t−1) × z(t) / z(t)^(P_demand(t−1))

Considering that z is a three-dimensional vector, and that the voltage difference, the voltage, and X(t) (the vector of relevant grid parameters) all have values greater than one, the value of z will be a positive number greater than one. Multiplying this value by the power required by the grid over time t yields a positive result. Furthermore, the denominator, in which the positive observation vector is raised to the positive power of the required power, will be greater than this product. This implies that K_adapt(t), which represents a fraction, decreases the value of the active power needed to regulate the voltage relative to the product of the voltage, the difference between the reference voltage and the voltage currently occurring at the node, and the vector of relevant grid parameters.

This analysis indicates that the adaptive control mechanism, represented by K_adapt(t), plays a crucial role in reducing the active power required for voltage regulation. By considering the voltage, the differences between reference and actual voltage values, and the vector of relevant grid parameters, the adaptive control mechanism contributes to more efficient and effective regulation of the microgrid's voltage.

During operation, the value of K_adapt(t) may fluctuate depending on the dynamics of the power system and the power demand [38,17] . For example, if there is a sudden increase in power demand, the value of K_adapt(t) may need to be increased or decreased to maintain the desired voltage level. Similarly, if there is a sudden drop in power demand [59] , the value of K_adapt(t) may need to be decreased to prevent overcompensation. Overall, the possible K_adapt(t) values depend on the specific application of voltage regulation and the desired performance [56] .
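Formulas (1)-(3), together with the X(t) magnitude defined later in formula (5), can be sketched as below. The extracted text leaves the vector handling in formula (3) ambiguous, so this sketch scalarizes z(t) with its Euclidean norm, and all numeric inputs are illustrative rather than measured values.

```python
import math

# Sketch of formulas (1)-(5). The vector z(t) is reduced to its Euclidean
# norm so that the gain stays scalar (an interpretation, not stated in
# the article). All numeric inputs below are illustrative.

ALPHA = 1.0  # learning rate, initial value 1 as proposed in the text

def x_magnitude(u_node, i_line, p_demand, q_demand):
    """Formula (5): magnitude of the grid-parameter vector X(t)."""
    return math.sqrt(u_node**2 + i_line**2 + p_demand**2 + q_demand**2)

def k_adapt(p_demand_prev, z_norm):
    """Formula (3), scalar reading: P(t-1)*|z| / |z|**P(t-1)."""
    return p_demand_prev * z_norm / z_norm ** p_demand_prev

def p_control(u_ref, u_node, i_line, p_demand, q_demand, p_demand_prev):
    """Formula (1): active-power command for one node at time t."""
    du = u_ref - u_node                                   # error signal
    x = x_magnitude(u_node, i_line, p_demand, q_demand)   # formula (5)
    z_norm = math.sqrt(du**2 + u_node**2 + x**2)          # |z(t)|, formula (2)
    return ALPHA * k_adapt(p_demand_prev, z_norm) * du * u_node * x

# Undervoltage node (0.93 p.u.): positive command, storage injects power.
p_under = p_control(1.0, 0.93, 1.2, 3.0, 1.1, 3.0)
# Overvoltage node (1.06 p.u.): negative command, storage absorbs power.
p_over = p_control(1.0, 1.06, 1.2, 3.0, 1.1, 3.0)
```

For these illustrative values the gain k_adapt comes out as a fraction below one, matching the analysis above: it scales down the raw product in formula (1) rather than amplifying it.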
It is essential to carefully tune the weight of K_adapt(t) in formulas (1) and (3) to ensure stable and effective regulation of the grid voltage using energy storage active power.

Calculation of X(t)

X(t) represents a vector of relevant grid parameters or control inputs at time t. These parameters or inputs can include various variables that describe the state of the power system and affect its behavior [54,49] . The vector X(t) captures information about the grid's operating conditions, system parameters, and any external factors that influence the voltage regulation process. The specific components of X(t) depend on the application and the complexity of the control system [12,39] . Here, X(t) is the vector of relevant grid parameters and control inputs at time t for voltage regulation in the grid using energy storage active power. We calculate it by formula (4):

(4) X(t) = [U_node(t), I_line(t), P_demand(t), Q_demand(t), θ]

where:
U_node(t) is the actual voltage at the considered node at the previous time step t−1;
I_line(t) is the current flowing through a specific distribution line at the previous time step t−1;
P_demand(t) is the active power demand from loads connected to the grid at time t;
Q_demand(t) is the reactive power demand from loads connected to the grid at time t.

We calculate the magnitude of the 4-dimensional vector as a mathematical function that combines the vector components into a single value [4] , as formula (5) presents:

(5) X(t) = √( U_node(t)² + I_line(t)² + P_demand(t)² + Q_demand(t)² )

The remaining parameters present in the formula for active power regulation, denoted as P_control, stem directly from the voltage value.

Simulation model for test

In the field of grid structures, it is crucial to have a comprehensive understanding of the critical components and underlying assumptions [16] .
The examined system consists of various elements, including a transformer, nodes, lines, loads (GL), and photovoltaic installations (PV). Meticulous planning is required to create a realistic and accurate grid model. A single transformer is used in conjunction with 49 nodes (W), connected by 47 lines, accommodating 40 loads, and 18 photovoltaic installations. More detailed load profiles were adopted based on actual usage patterns and consumption data to model the load profiles. These profiles provide a more accurate representation of the energy consumption and demand within the grid. The exemplary waveforms were updated to reflect more recent time characteristics provided by Tauron, the energy infrastructure operator in southern Poland. The discretization time is precisely calibrated to correspond with the whole seconds of the longest sunshine-filled day of the year—the first day of summer, June 21—to minimize errors. Each node in the grid is uniquely identified, with the designation W1 assigned to the initial node, and subsequent nodes marked accordingly. The spatial distribution of the grid is critical for understanding the interconnections between the nodes, lines, and transformer. Fig. 2 provides a detailed schematic diagram that portrays the intricate composition of the grid fragment under study. The diagram reveals the configuration of the transformer, power lines, and nodes, allowing for a comprehensive understanding of the interconnections and spatial distribution within this fragment. By visually representing these elements, Fig. 2 serves as a crucial reference point for analyzing the operational characteristics of the examined grid section. The detailed grid model, incorporating more nodes, lines, and loads, provides a more accurate representation of the real-world grid system. The simulation results offer valuable insights into the performance of the grid under different conditions and can inform the design of future grids. 
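The bookkeeping for the modeled fragment (49 nodes W1-W49, 47 lines, 40 loads, 18 PV installations) can be sketched in plain Python. The actual topology and ratings live in the PowerFactory model, so the connectivity and power values below are placeholders that only demonstrate the data layout.

```python
# Plain-Python sketch of the studied grid fragment's bookkeeping:
# 49 nodes (W1..W49), 47 lines, 40 loads, 18 PV installations.
# Node assignments and power values are placeholders, not the real model.

nodes = [f"W{i}" for i in range(1, 50)]          # W1 .. W49

# Placeholder radial-style connectivity: 47 lines among the 49 nodes.
lines = [(nodes[i], nodes[i + 1]) for i in range(47)]

# Placeholder assignment: loads on the first 40 nodes, PV on 18 of them.
loads = {n: 4.0 for n in nodes[:40]}             # kW per load (illustrative)
pv_units = {n: 5.0 for n in nodes[:18]}          # kWp per PV (illustrative)

# Consistency checks mirroring the counts given in the text.
n_nodes, n_lines = len(nodes), len(lines)
n_loads, n_pv = len(loads), len(pv_units)
```

Keeping the element counts as explicit invariants makes it easy to verify that any exported or regenerated model still matches the description of the studied fragment.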
Application algorithm in practice

The study specifically focuses on analyzing the dynamics within an area where multiple prosumers actively generate electricity using photovoltaic installations. These prosumers are interconnected with the external grid through a transformer. Fig. 2 presents a comprehensive schematic diagram that visually depicts a specific section of the grid, highlighting critical components such as transformers, power lines, and nodes, along with their respective locations within the examined area. This diagram serves as an invaluable tool for understanding the overall structure and connectivity of the grid, facilitating a meticulous assessment of operational dynamics and potential points of interest in the analyzed fragment. In this investigation, the system under consideration is based on a fragment of the grid located in southern Poland. This region is characterized by a significant concentration of prosumers who generate electricity through photovoltaic installations. These prosumers are connected to the external grid through a transformer. As a result, a substantial number of prosumers are encompassed within the analyzed area, as depicted in Fig. 2 . Fig. 3 presents the active power values generated by photovoltaic sources. The active power generation from PV sources is derived from data obtained from the World Radiation Data Center and incorporated into the PowerFactory simulation software, using coordinates 50 N and 20 E. Furthermore, Fig. 4 illustrates the active and reactive power values, offering a comprehensive overview of the power dynamics within the system. Lastly, Fig. 5 provides a comprehensive summary of the active and reactive power measurements obtained from the external grid, while Fig. 6 depicts the corresponding influence of these power values on the voltage levels within the examined grid. Fig. 7 .
The analysis of the simulation highlights the need for effective grid control mechanisms in the presence of distributed energy sources, such as the PV installations in the microgrid. The observed voltage fluctuations, exceeding normal levels during periods of high PV generation and falling below the reference level during peak consumption hours, indicate the challenges associated with maintaining voltage stability in such scenarios. To address these voltage fluctuations, grid control strategies leveraging energy storage systems can be implemented. Energy storage units, such as batteries or capacitors, can play a crucial role in regulating the grid voltage by absorbing or injecting active power as needed. By utilizing energy storage, the excess active power generated by the PV systems during times of high generation can be stored for later use, effectively balancing the grid and mitigating voltage fluctuations. In addition to voltage regulation, energy storage systems can also help in managing reactive power requirements. As shown in Fig. 3 , the inverters used in PV systems absorb reactive power when active power is being generated. This exchange of active and reactive power with the grid can lead to power quality issues. Energy storage systems can be employed to provide reactive power support, ensuring a balance between reactive power absorption and generation, and thus improving power quality and system stability. The exchange of active power with the external grid, as depicted in Fig. 6 , highlights the potential role of energy storage systems in reducing grid dependence. By absorbing additional active power and providing reactive power locally, the reliance on the external grid can be minimized, leading to increased self-sufficiency and reduced dependence on centralized power generation. 
Overloaded and underloaded nodes

A comprehensive simulation model was developed using PowerFactory software to evaluate the voltage regulation capabilities of energy storage systems employing Model Reference Adaptive Control (MRAC). The simulation model incorporated a realistic power system, comprising various nodes, lines, and transformers, to accurately represent the distribution grid. This model enabled a detailed analysis of the power system's performance and the effectiveness of voltage regulation achieved by the energy storage (ES) system with MRAC. Voltage measurements recorded at critical nodes within the grid provided values for detailed analysis. These measurements served as the basis for calculating performance metrics such as voltage deviation and settling time. The analysis of these metrics provided valuable insights into the technical aspects of the power grid, including the grid's stability and the efficiency of voltage regulation. The article presents a detailed analysis of the power system's performance through several tables. These tables provide insights into various technical aspects of the power grid, including voltage deviations at different nodes, settling times for voltage regulation, and the overall effectiveness of the ES system with MRAC. This technical analysis helps in understanding the grid's behavior under different operating conditions and provides valuable information for further improvements in voltage regulation strategies. Table 1 provides valuable information regarding the voltage values at different nodes in the grid. It is observed that the voltage values at 13 nodes differ from the nominal values, which may vary by 10% up and down, i.e., between 0.9 p.u. and 1.1 p.u. These nodes are crucial when considering the placement of energy storage units for voltage regulation (Tables 2-4).
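The screening criterion behind Table 1, keeping only nodes whose voltage leaves the permitted 0.9-1.1 p.u. band, can be sketched as follows. The voltage values are illustrative, not the measured ones; W35, W42, and W49 are among the nodes discussed later in the text.

```python
# Screening nodes whose voltage leaves the 0.9-1.1 p.u. band, as used to
# shortlist candidate locations for energy storage placement.
# The per-node voltages below are illustrative, not data from Table 1.

V_MIN, V_MAX = 0.9, 1.1   # permitted voltage band, p.u.

voltages = {
    "W10": 1.02, "W20": 0.97,   # within the band
    "W35": 0.88,                # sags below the band
    "W42": 1.12,                # rises above the band
    "W49": 0.89,                # sags below the band
}

def out_of_band(volts, vmin=V_MIN, vmax=V_MAX):
    """Return the sorted list of nodes violating the permitted band."""
    return sorted(n for n, v in volts.items() if not vmin <= v <= vmax)

candidates = out_of_band(voltages)   # nodes to consider for storage
```

Applied to the full set of 49 node measurements, the same filter would yield the 13 out-of-band nodes identified in the study.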
By strategically placing energy storage units at these nodes, the active power received or sent to the node can have a positive impact on voltage regulation. This means that the energy storage units can actively contribute to regulating the voltage and maintaining it within the desired range. Considering these nodes for the placement of energy storage units is important because they represent areas in the grid where voltage deviations are significant. By targeting these nodes, the energy storage units can effectively mitigate voltage fluctuations and ensure a more stable and reliable grid operation. The simulation results reveal that thirteen nodes within the power distribution grid require voltage regulation measures to optimize grid performance. These nodes experience both low and high-voltage challenges, which can impact the reliability and efficiency of the power distribution system. Implementing energy storage technologies tailored to the specific voltage regulation requirements of each node is crucial for achieving optimal grid performance. To illustrate the active power calculation procedure for voltage regulation, Table 5 presents the smallest and greatest voltage levels observed across nodes W35, W42, and W49 within the examined grid segment. Additionally, Table 6 includes the calculated active power values for the remaining nodes, ranging from W34 to W50. The proximity of a node to the transformer influences its electricity demand, as nodes closer to the transformer typically exhibit more stable voltage levels. This analysis emphasizes the importance of considering the specific characteristics and requirements of each node when designing and implementing voltage regulation strategies. By selectively calculating active power for nodes experiencing voltage deviations, the energy storage system can efficiently allocate resources and respond to voltage fluctuations in a targeted manner. 
This approach optimizes the utilization of energy storage capacity and minimizes unnecessary energy consumption, ultimately enhancing the stability and reliability of the power distribution grid. The need to calculate active power from the energy storage system arises specifically for nodes experiencing increased or decreased voltage values; this targeted approach ensures that power is allocated where voltage regulation is necessary, rather than wasting resources on nodes with normal voltage levels. To achieve this, the energy storage system uses advanced control algorithms, in particular the Model Reference Adaptive Control (MRAC) algorithm, which continuously monitors the voltage levels at different nodes and dynamically adjusts the active power injection or absorption of the storage units to keep the voltage within the desired range. The system also takes into account grid parameters such as load demand, generation capacity, and network topology to optimize the allocation of active power resources, so that the storage units are effectively utilized to address voltage deviations in different parts of the grid. By responding to voltage fluctuations in this targeted manner, the energy storage system reduces operational costs and environmental impact while improving the resilience and reliability of the power distribution grid.
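The adaptation loop described here can be sketched for a single node. The first-order voltage-response model, the reference-model coefficients, the adaptation gain, and the time step below are all illustrative assumptions, not the study's actual grid model:

```python
# Plant: node voltage response to injected active power p (all values p.u.).
# dv/dt = a_p * v + b_p * p, with a_p, b_p assumed and treated as uncertain.
a_p, b_p = -1.0, 0.5
# Reference model defining the desired voltage dynamics toward 1.0 p.u.
a_m, b_m = -2.0, 2.0
r = 1.0                      # reference (nominal) voltage

gamma = 5.0                  # adaptation gain (tuning assumption)
theta_r, theta_v = 0.0, 0.0  # adaptive controller parameters
v = vm = 0.93                # initial plant / reference-model voltages
dt, steps = 0.01, 10000

for _ in range(steps):
    p = theta_r * r + theta_v * v   # active power command for the ES unit
    e = v - vm                      # tracking error w.r.t. reference model
    # Lyapunov-based update laws (sign of b_p assumed positive):
    theta_r -= gamma * e * r * dt
    theta_v -= gamma * e * v * dt
    v += (a_p * v + b_p * p) * dt
    vm += (a_m * vm + b_m * r) * dt

print(round(v, 2))  # the node voltage settles near the 1.0 p.u. reference
```

The controller needs no exact knowledge of `a_p` and `b_p`; the update laws drive the tracking error toward zero, which mirrors the article's point that MRAC compensates for model uncertainty in real time.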
This utilization of energy storage units for voltage regulation involves selectively calculating active power for nodes experiencing voltage deviations. The targeted approach optimizes resource allocation, minimizes energy consumption, and enhances grid stability, and the implementation of advanced control algorithms such as MRAC, together with consideration of grid parameters, further contributes to the effectiveness and efficiency of the energy storage system. The results obtained from calculating the active power values used simultaneously for voltage regulation with energy storage controlled by the model reference adaptive scheme are highly promising. The study revealed that as the distance from the transformer increased, the power required to stabilize the voltage levels decreased significantly, which indicates the effectiveness of the control model in regulating voltage within the grid. The nominal approximate active power values for energy storage that ensure steady operation of the MRAC-based algorithm are therefore shown in Table 7. The energy storage units were programmed to deliver energy to the grid during peak hours, from 9 am to 6 pm, and to draw power from the grid (recharging) during off-peak hours, from 12 am to 9 am and from 6 pm to 12 am. This scheduling of energy flow ensured optimal use of the storage for voltage regulation while also helping to balance the supply and demand of electricity on the grid. Fig. 8 depicts the active power characteristics of the energy storage system, illustrating its capability to absorb or inject power into the grid as needed for voltage regulation. Table 8 shows the voltage effects when the voltage was at its minimum, while Table 9 shows them when the voltage values were at their maximum.

Summary of the results of the algorithm

The results of the MRAC-based algorithm demonstrate the effectiveness of controlling grids solely with active power.
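The peak/off-peak schedule described above reduces to a simple hour-of-day rule, sketched here (the mode names are illustrative):

```python
def storage_mode(hour):
    """Operating mode of the energy storage for a given hour (0-23):
    discharge into the grid during peak hours (09:00-18:00), recharge
    from the grid during the surrounding off-peak hours."""
    return "discharge" if 9 <= hour < 18 else "charge"

daily_schedule = [storage_mode(h) for h in range(24)]
print(daily_schedule.count("discharge"))  # 9 peak hours of discharging
```

In a full implementation this rule would be combined with the voltage-driven MRAC power commands, so that scheduling sets the energy-flow direction while the controller sets its magnitude.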
The algorithm successfully determined independent power values for thirteen nodes within the grid. The application of the algorithm had a significant impact on voltage regulation throughout the entire grid, as evidenced by the data presented in Table 5 . This highlights the algorithm's ability to ensure stable and regulated voltage levels, enhancing the overall performance and reliability of the grid. One key advantage of MRAC is its ability to adapt to changing system conditions and uncertainties. In voltage regulation applications, the availability of active power from energy storage can vary due to factors like renewable energy generation fluctuations or changing load demands [9] . MRAC can adjust its control parameters to achieve optimal voltage regulation in dynamic and uncertain environments. This adaptability allows MRAC to maintain voltage stability and minimize the deviation from the reference value. Another benefit of MRAC is its ability to provide a fast and accurate response to changes in active power requirements. By continuously monitoring the system response, MRAC can quickly detect deviations from the desired voltage level and adjust the control action accordingly. MRAC's ability to rapidly respond to sudden changes in active power demands ensures that the voltage remains within the desired range. The fast response time of MRAC contributes to the overall stability and reliability of the grid. Additionally, MRAC can provide robust performance even in the presence of model uncertainties and disturbances. Traditional control models may rely on accurate system models, which can be challenging to obtain in practice. MRAC, on the other hand, leverages adaptive mechanisms to estimate and compensate for uncertainties in real time. This capability allows MRAC to maintain reliable voltage regulation even in the face of varying system conditions and disturbances, contributing to the overall stability and resilience of the grid. 
It is worth noting that the effectiveness of MRAC in regulating voltage using only active power depends on the specific characteristics and requirements of the grid. In some cases it may be necessary to combine active and reactive power control for optimal voltage regulation. When focusing solely on active power regulation, however, MRAC offers advantages due to its adaptability, fast response, and robust performance. In summary, Model Reference Adaptive Control can be advantageous for regulating grid voltage using only active power from energy storage [62]. Its adaptability to changing system conditions, fast response to active power variations, and robustness in the presence of uncertainties make it a suitable choice in certain voltage regulation scenarios. The specific control model should be selected with the unique characteristics and requirements of the grid in mind to ensure optimal performance.

Conclusion and future work

In this article, we present the findings of our study on voltage regulation using the MRAC algorithm. Through simulations, we demonstrate how the MRAC algorithm effectively stabilizes voltage levels in the grid. Additionally, we explore the relationship between the distance from the transformer and the power required from energy storage for voltage regulation. The significant advantage observed in this study is the reduction in power requirements as the distance from the transformer increased, implying that voltage levels in remote areas of the grid can be regulated efficiently with lower power demands. This finding has important implications for optimizing the deployment and utilization of energy storage resources in power distribution systems. The initial disturbances caused by high consumer load and the fluctuating energy generation from photovoltaic installations are also discussed.
The insights gained from this research contribute to the advancement of voltage regulation strategies, ultimately enhancing grid stability and reliability. Incorporating renewable energy sources significantly affects the operation and dependability of distribution grids. Future research should focus on developing sophisticated control strategies and grid management methods to integrate renewable energy sources efficiently, ensuring optimal use and reducing system disturbances; this will enable greater penetration of renewable energy sources, better electricity quality, and improved distribution grid performance. The detailed analysis presented in this article provides valuable insights for researchers and practitioners in the fields of power systems and energy storage. It highlights the importance of accurate modeling and analysis in evaluating the performance of voltage regulation strategies and paves the way for further advancements in grid resilience and stability. This article presents a comprehensive examination of the utilization of energy storage units for voltage regulation in grids, highlighting its contributions in several key areas and the novel aspects demonstrated in the study, while also suggesting future research directions to further enhance grid resilience and effective voltage regulation. 1. This article contributes a comprehensive examination of energy storage units for voltage regulation in grids, including a detailed description of the MRAC algorithm. It also showcases the practical application of voltage regulation through energy storage and active power control, emphasizing the advantages of the proposed method for grid resilience and effective voltage regulation. 2. The article provides a comprehensive examination of the utilization of energy storage units for voltage regulation in grids.
It explores the practical implementation of active power control using a Model Reference Adaptive Control algorithm. 3. The article offers a detailed description of the MRAC algorithm used for active power control. It considers grid parameters and showcases the practical application of voltage regulation through energy storage active power control using MRAC. 4. The article showcases the practical application of voltage regulation through energy storage active power control using the MRAC algorithm. It demonstrates how energy storage units can effectively regulate grid voltage in real-world scenarios. 5. The article discusses the results of implementing an energy storage unit for global voltage regulation. It highlights the advantages and superiority of this method compared to other voltage regulation techniques. 6. The article emphasizes the importance of achieving grid resilience through effective voltage regulation. It provides insights into how energy storage units and the MRAC algorithm enhance grid stability and performance. The work presents a novel approach to voltage regulation through active power energy storage using model reference adaptive control. It offers a practical implementation of active power control using MRAC, taking into account grid parameters for improved performance. The research contributes to the field of grid resilience by showcasing the application of the MRAC algorithm in voltage regulation and highlighting the superiority of the proposed method in achieving optimal control and reliability. 1. The article focuses on the practical implementation of active power control using the MRAC algorithm, which adds a practical dimension to the existing body of knowledge. This practical approach enhances the understanding of how energy storage units can effectively regulate voltage in real-world grid scenarios. 2. The article considers grid parameters in implementing voltage regulation through energy storage active power control.
This consideration of grid parameters adds novelty to the work by addressing the specific requirements and characteristics of different grids. 3. The article showcases the application of the MRAC algorithm for voltage regulation, which adds novelty by demonstrating the effectiveness of this algorithm in real-world scenarios. This practical demonstration contributes to the understanding of how MRAC can be utilized for voltage regulation in grids. 4. The article discusses the results of implementing an energy storage unit for global voltage regulation and highlights the advantages of this method. This discussion of results and advantages adds novelty by providing evidence and insights into the effectiveness and superiority of the proposed approach. 5. The article contributes to the field of grid resilience by emphasizing the importance of effective voltage regulation and showcasing the practical implementation of energy storage units for this purpose. This contribution adds novelty by addressing the specific challenges and requirements of achieving grid resilience through voltage regulation. 6. The article focuses on the practical application of energy storage units for voltage regulation, which adds novelty by providing insights into the implementation aspects of this technology. This practical perspective enhances the understanding of how energy storage units can be effectively utilized for voltage regulation in grids. 7. The article highlights the advantages and superiority of the proposed method compared to other voltage regulation techniques. This emphasis on the superiority of the method adds novelty by providing evidence and insights into the unique benefits offered by the combination of energy storage units and the MRAC algorithm for voltage regulation. 
Future studies in voltage regulation using model reference adaptive control for appropriate active power operation could contribute to the development of more robust and resilient energy systems by leveraging advanced control algorithms, optimization techniques, and AI-based approaches. 1. Future studies can focus on developing robust control schemes for voltage and reactive power regulation in multi-feeder microgrid systems, considering accurate power sharing and voltage regulation at each load feeder. These studies can explore advanced control techniques that enhance the resilience and stability of microgrids, ensuring reliable voltage regulation in the presence of distributed energy sources. 2. Further research can investigate the integration of automatic voltage regulators and load frequency control in distributed energy storage control systems. These studies can explore the synergistic benefits of combining these control strategies to achieve effective voltage regulation and load balancing in smart electric power delivery systems. 3. Future studies can focus on developing novel resilience assessment methods for active distribution networks, considering the integration of renewable energy sources and energy storage systems. These studies can explore innovative voltage regulation schemes that enhance the resilience of distribution networks, ensuring reliable and stable operation in the face of disturbances and uncertainties. 4. Future studies can investigate the development of hierarchical scheduling frameworks for the resilience enhancement of decentralized renewable-based microgrids. These studies can consider proactive actions and mobile units to optimize the scheduling of renewable energy sources, energy storage systems, and demand response programs, ensuring efficient and resilient operation of microgrids. 
To sum up, this article not only provides a comprehensive examination of energy storage units for voltage regulation, but also offers a detailed description of the MRAC algorithm, showcases its practical application, discusses its advantages, emphasizes the importance of grid resilience, and introduces novel aspects such as considering grid parameters, demonstrating practical implementation, and highlighting the superiority of the method. Furthermore, the article suggests future studies in robust control schemes, integration of control strategies, resilience assessment, and hierarchical scheduling frameworks to enhance grid resilience and voltage regulation.

Declaration of competing interest

The author declares no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
[
"AGGARWAL",
"AHRENS",
"ALIRAHMI",
"ASIJA",
"BARKLUND",
"BOJOI",
"BYRNE",
"CAI",
"CONEJO",
"DEEPA",
"DUARTE",
"ESPINAR",
"FALLAH",
"FEI",
"GELLINGS",
"HAN",
"HAQ",
"JADHAV",
"JAFFAR",
"JAHAN",
"KAHANI",
"KAMBALIMATH",
"KHAN",
"KOHZADI",
"LEALFILHO",
"LI",
"LOMBARDI",
"MASSAOUDI",
"MCILWAINE",
"MEHMOOD",
"MOSSAD",
"PARRA",
"QUINONES",
"RAO",
"RUSSELL",
"SAEED",
"SAIBALMANNA",
"SAXENA",
"SWAROOP",
"TORRICO",
"ULUTAS",
"WU",
"VANDOORN",
"YACOUBI",
"ZERAATI",
"ZHANG"
] |
443c42e63955408b9bb3d7e856a06741_Mucocele apendicular a propósito de un caso_10.1016_j.rmclc.2023.09.001.xml
|
Mucocele apendicular: a propósito de un caso
|
[
"Boui, Meriem",
"Abide, Zakaria",
"Boujida, Nadia",
"Salaheddine, Tarik",
"Fenni, Jamal El",
"Lahkim, Mohamed"
] |
El mucocele apendicular es una entidad patológica rara, pero potencialmente peligrosa, se presenta en diferentes formas clínicas. Presentamos el caso de un paciente de 47 años, sin antecedentes particulares, que consulta por dolor crónico en fosa ilíaca derecha y cuya tomografía computarizada abdomino-pélvica muestra una masa quística del apéndice que evoca un mucocele apendicular. El paciente se sometió a una apendicectomía. El análisis histológico de este confirmó el diagnóstico de mucocele apendicular sin células malignas. El seguimiento postoperatorio fue sencillo.
Appendiceal mucocele is a rare but potentially dangerous pathological entity that presents in different clinical forms. We present the case of a 47-year-old patient, with no particular medical history, who consulted for chronic pain in the right iliac fossa and whose abdominopelvic computed tomography showed a cystic mass of the appendix suggestive of an appendiceal mucocele. The patient underwent an appendectomy. Histological analysis of the specimen confirmed the diagnosis of appendiceal mucocele without malignant cells. The postoperative course was uneventful.
|
Introduction

Appendiceal mucocele, or mucus-secreting tumor of the appendix, is a rare condition defined as distension of the appendiceal lumen by accumulated mucus, which may or may not be of tumoral origin, benign or malignant. It accounts for 0.15% to 0.6% of appendectomy specimens 1. Imaging currently plays an important role in diagnosis, particularly abdominal computed tomography (CT), which can establish the diagnosis and prevent complications, although the definitive diagnosis rests on histological study, which should be routine for every appendectomy specimen. Treatment is surgical. Appendiceal mucocele must be diagnosed early because of its potential malignancy and the significant risk of gelatinous disease of the peritoneum (pseudomyxoma peritonei). We present a case of appendiceal mucocele in a 47-year-old patient, highlighting the role of CT in the diagnosis of this pathology 2–5.

Case report

A 47-year-old male patient with no relevant medical history consulted for a two-month history of dull pain of moderate intensity in the right iliac fossa, with no other associated clinical signs. As the physical examination showed no findings of interest, an abdominopelvic CT was ordered, without and with iodinated contrast injection in the portal phase. The examination revealed a mass 5 cm in diameter and 9.5 cm long, suggestive of appendiceal mucocele, hanging from the distal end of the appendix (Figure 1B). Keel-shaped, thin-walled, and of cystic density (Figure 1A, C, and D), it showed homogeneous enhancement of its walls, without apparent parietal calcifications. An appendectomy was performed, and anatomopathological analysis of the surgical specimen confirmed the diagnosis of appendiceal mucocele without malignant cells, of the mucinous cystadenoma type, with healthy resection margins.

The tumor was classified as a low-grade appendiceal mucinous neoplasm according to the classification of appendiceal tumors proposed by the Peritoneal Surface Oncology Group International (PSOGI) 6, without perforation. The postoperative course was uneventful.

Discussion

Appendiceal mucocele, also called appendiceal mucus-secreting tumor, is defined as mucinous distension of the appendiceal lumen resulting from intraluminal accumulation of translucent, gelatinous mucinous secretions. It may or may not be of tumoral origin, benign or malignant. First described by Rokitansky in 1842 and named by Feren in 1876 1,7, appendiceal mucocele is an uncommon condition accounting for 0.15% to 0.6% of appendectomies 8,9. It preferentially affects adults, with a mean age between 50 and 60 years 2–6,10–13. The sex ratio varies from one series to another 4,14–17 and, according to the literature, there is usually a female predominance 4,5,8,9,14,16. From the anatomopathological standpoint, four types of histological lesion are distinguished, in order of increasing severity 18: a) the retention cyst, corresponding to mucus accumulation due to obstruction of the appendiceal lumen, especially when the obstruction is caused by an appendicolith; in response to the obstruction, the mucosa becomes hyperplastic and hypersecretory, progressive degenerative changes occur with the appearance of cuboidal cells, and the appendiceal wall becomes atrophic and may later be replaced by connective tissue; b) villous epithelial hyperplasia: the appendix is normal or slightly dilated with a thinned mucosa, and the lesions are confined to the mucosa and organized into fine papillary structures without atypia or mitoses; c) the mucinous cystadenoma: the appendix is distended by mucus and the lumen is lined by a single-layered mucus-secreting epithelium. Papillary formations may exist, but the epithelium is generally flat. Varying degrees of dysplasia associated with atypia or mitoses may be found. This type is the most frequent cause of tumor-related appendiceal mucoceles. Mucinous cystic adenoma is an unusual cystic neoplasm of the vermiform appendix with villous adenomatous changes in the appendiceal epithelium associated with a mucin-filled lumen, according to the classification of appendiceal tumors proposed by the Peritoneal Surface Oncology Group International (PSOGI); d) the mucinous cystadenocarcinoma, a malignant tumor, is characterized by a high degree of cellular atypia and mitoses, invasion of the connective tissue by neoplastic cells, and the presence of neoplastic cells in the intraperitoneal mucous effusion. In our case, the appendiceal mucocele corresponded to a mucinous cystadenoma. Some cases in the literature also report this type of mucocele, the mucinous cystadenoma 6, and some of those patients underwent a right hemicolectomy 10–12,19. A recent study reached a similar conclusion regarding the PSOGI classification and the 8th edition of the AJCC (American Joint Committee on Cancer) staging 10,11 for the prognostic stratification of patients, suggesting that the PSOGI classification offers better stratification when progression-free survival is considered 20. The clinical presentation is varied and nonspecific, being asymptomatic in 25% to 30% of cases; in 70% to 75% of cases it manifests as chronic pain in the right iliac fossa. The most serious complication is peritoneal rupture, responsible for pseudomyxoma peritonei, whose prognosis is usually poor. The key task for the radiologist is preoperative recognition of the appendiceal mucocele, in order to alert the surgeon to the risk of rupture during the intervention and thus avoid pseudomyxoma peritonei.

Medical imaging, based on ultrasound and, in most cases, CT, allows the diagnosis. Ultrasound reveals a cystic mass in the right iliac fossa with more or less hypoechoic content, giving the so-called "onion-skin" appearance 13,15. On CT it appears as a well-defined, rounded or oblong mass arising from the cecal base, with a thin wall. It may show fine parietal calcifications which, although inconstant, allow differentiation from an appendiceal abscess in the setting of an acute appendiceal syndrome. A stercolith is sometimes seen at the base of the appendix. The wall may be thickened and irregular, with contrast-enhancing nodules suggesting a cystadenocarcinoma; however, no radiological signs can confirm or exclude with certainty the malignancy of the underlying appendiceal tumor 14,17. Rupture of an appendiceal mucocele, whatever its stage, into the peritoneal cavity gives rise to pseudomyxoma peritonei, also called "gelatinous disease of the peritoneum", which presents as a gelatinous ascites: thick, hypodense, possibly septated, and containing fine curvilinear calcifications that create a "scalloped" appearance. On abdominopelvic magnetic resonance imaging, the mucocele appears as a pericecal cystic lesion, hypointense on T1 and hyperintense on T2, with wall enhancement after gadolinium injection 1,21. Colonoscopy may show elevation of the appendiceal orifice, which may exhibit a yellowish mucous discharge 22. Peritoneal implants may be found, appearing as heterogeneous nodules that may enhance after contrast injection; they should be sought in particular in the greater omentum, the pouch of Douglas, the ovaries, the paracolic gutters, and the subphrenic regions.

Complications other than rupture may occur: one case of volvulus has been reported 23, as well as a double appendico-cecal and ceco-colic intussusception 24. The main differential diagnoses are the appendiceal mass (plastron), appendiceal abscess, ovarian cyst in women, mesenteric cyst, and cystic digestive duplication 25. The prognosis varies depending on whether the lesion is an adenoma or a mucinous adenocarcinoma, but pseudomyxoma peritonei remains a serious condition. This is why removal of an appendiceal mucocele during appendectomy must imperatively be performed without rupturing the wall 21. Treatment of appendiceal mucocele is based on surgery, combined with hyperthermic intraperitoneal chemotherapy in cases of gelatinous peritoneal disease 26. This surgery must nevertheless follow a protocol that includes complete removal of the appendix, transection through a healthy zone at the base, and the absence of intraoperative appendiceal trauma that could disseminate mucus and epithelial cells into the peritoneum. In our case, the mucocele was not perforated, there was no pathological process at the base of the appendix, and the regional lymph nodes were negative; therefore, only an appendectomy was performed, which is adequate surgery in a case such as this.

Conclusion

Appendiceal mucocele is a rare pathology with nonspecific and varied symptoms. Preoperative diagnosis is possible and important, requiring an abdominal CT scan, which serves a threefold purpose: diagnosis, by demonstrating the connection of the fluid-filled mass with the cecum and, at times, the parietal calcifications; the search for signs of malignancy; and, finally, post-treatment follow-up.

Declaration of conflicts of interest

The authors declare no conflicts of interest.

Ethical considerations

The images included in this article have been anonymized to maintain patient confidentiality.
|
[
"FAIRISE",
"RANGARAJAN",
"YAKAN",
"LAKATOS",
"WAKUNGA",
"CARR",
"MOUJAHID",
"CREUZE",
"LOPEZ",
"TAPIA",
"BUTTE",
"YOUSRA",
"MERRAN",
"CASPI",
"SOUEIMHIRI",
"KOUADIO",
"ABDELOUAFI",
"PICKHARDT",
"SK",
"GONZALEZBAYON",
"ZANATI",
"DERELLE",
"KHAN",
"DOSSANTOS",
"JADIB",
"GOVAERTS"
] |
9bba41ad28ce4a659e87d8ba852098e8_Measurement and assessment of water resources carrying capacity in Henan Province China_10.1016_j.wse.2015.04.007.xml
|
Measurement and assessment of water resources carrying capacity in Henan Province, China
|
[
"Dou, Ming",
"Ma, Jun-xia",
"Li, Gui-qiu",
"Zuo, Qi-ting"
] |
As demands on limited water resources intensify, concerns are being raised about water resources carrying capacity (WRCC), which is defined as the maximum sustainable socioeconomic scale that can be supported by available water resources while maintaining defined environmental conditions. This paper proposes a distributed quantitative model for WRCC, based on the principles of optimization and considering hydro-economic interaction, water supply, water quality, and socioeconomic development constraints. With the model, the WRCCs of 60 subregions in Henan Province were determined for different development periods. The results showed that the water resources carrying level of Henan Province was suitably loaded in 2010, but that the province would be mildly overloaded in 2030 with respect to the socioeconomic development planning goals. The restricting factors for WRCC included the available water resources, the growth rate of GDP, the urbanization ratio, the irrigation water utilization coefficient, the industrial water recycling rate, and the wastewater reuse rate, of which the available water resources was the most crucial factor. Because these factors varied temporally and spatially, the trends in predicted WRCC were inconsistent across different subregions and periods.
|
1 Introduction The concept of carrying capacity is rooted in demography, biology, and applied ecology ( Clarke, 2002 ). In ecology, carrying capacity is defined as the maximum population of a species that a habitat can support without permanent impairment of the habitat's productivity ( Rees, 1997 ). Water resources carrying capacity (WRCC) is a new concept that has not yet been clearly defined and described. Some researchers consider WRCC to be the capacity of water resources to sustain a society at a defined good standard of living, while others consider it the threshold level of water resources at which an environment is capable of supporting the activities of human beings ( Seidl and Tisdell, 1999; Li et al., 2000 ). Internationally, not many breakthroughs have been achieved in the WRCC research; the topic has only been considered briefly in theories of sustainable development ( Ofoezie, 2002 ). Some scholars have used terms such as sustainable water utilization, the ecological limits of water resources, or the natural system limits of water resources to express the meaning of WRCC ( Hunter, 1998; Falkenmark and Lundqvist, 1998 ). Studies focusing exclusively on WRCC have primarily been conducted in China. The concept of WRCC was first applied to the Urumqi River Basin in China in 1989 ( Shi and Qu, 1992; Feng et al., 2006 ). It has been a topic of significant debate since 2001, and represents a new academic frontier ( Long et al., 2004 ). One definition of WRCC, and the definition used in this study, is the maximum sustainable socioeconomic scale based on available water resources and maintenance of good, defined environmental conditions ( Dou et al., 2010 ). In this concept, the socioeconomic scale is the overall size of a regional socioeconomic system in a certain period, and can be represented by a series of socioeconomic indices (such as total population, urbanization ratio, industrial structure, and grain yield). 
Good environmental conditions mean a suitable living environment for human beings and the ecological system, in particular good water quality and a healthy aquatic environment. WRCC is an indicator of regional sustainability, and achieving regional sustainability is important because social institutions and ecological functioning are closely linked at this scale ( Graymore et al., 2009 ). Therefore, research on WRCC should be based on two premises: First, it must be possible to sustain the normal operation of a regional socioeconomic system, and as a result researchers must calculate the quantity of water resources required to sustain these social service functions. Second, it is necessary to evaluate the maximum socioeconomic scale that water resources can sustain after meeting the needs of the ecosystem. Regional carrying capacity depends on water resources. There have been many theoretical studies of carrying capacity based on regional water resources because this concept is most often considered within a larger theoretical context of sustainable development. In particular, severe water shortage problems have forced the Chinese government to initiate a series of studies to determine the carrying capacity based on regional water resources in arid and semi-arid areas, such as western China and the North China Plain ( Xia and Zhu, 2002; Dou et al., 2010; Zai et al., 2011 ). In recent years, with increasingly serious water pollution, there have even been some studies conducted in eastern China, where water resources are abundant ( Liu and Borthwick, 2011; Liu, 2012 ). Furthermore, Falkenmark and Lundqvist (1998) have used estimates of the maximum global use of water resources to study how carrying capacity is determined by regional water resources. The National Research Council (NRC) ( 2002 ) studied the Florida Keys Basin's carrying capacity in the United States under different land-use scenarios. Lane et al. 
(2014) offered a Carrying Capacity Dashboard ( QUT, 2012 ) to highlight one way in which some basic resource-based parameters have been utilized. In practice, carrying capacity is often estimated by comparing stress on the environment (e.g., demand of natural resources) against environmental thresholds (e.g., available natural resources) ( Clarke, 2002; Oh et al., 2005 ). On the whole, the current studies on WRCC emphasize harmonization of the demands of socioeconomic development with the supply of water resources. Regional socioeconomic systems and water resources systems are often represented using areas such as river basins, which allow researchers to analyze the systems' structures, functions, and processes and determine the WRCC. In China, the regional socioeconomic scale may be determined by the urban population growth rate and economic development goals. Constraints imposed by the availability of water and other natural resources are rarely considered in planning, which may explain why most Chinese cities are facing severe water shortages and experiencing environmental problems ( Zhang et al., 2010 ). Therefore, it is necessary to develop a suitable methodology to effectively describe hydro-economic interaction in highly populated regions and to choose the best strategies to alleviate the conflict between socioeconomic development and water resources exploitation. Henan Province, China's most populous province and the province with the fifth highest gross domestic product (GDP), has long suffered from an intense conflict between the limited water resources and the rapid growth of water demand. In this study, we developed a method for calculating Henan's WRCC based on available water resources and relevant water environment protection goals, and analyzed spatial and temporal variations of the WRCC. 
First, considering the spatial differences of the economic development level and water resources conditions, a distributed hydro-economic model was developed to describe the interaction between the socioeconomic and water resources systems. Second, a WRCC quantification model was developed to identify the maximum sustainable socioeconomic scale based on the hydro-economic interaction relationship and a series of constraint conditions. Finally, on the basis of the models, Henan Province's WRCC was calculated for different development periods, and the change tendency of the water resources carrying level was analyzed.

2 Framework and methodology

2.1 Overview

Research on WRCC involves many disciplines, including hydrology, ecology, environmental sciences, economics, sociology, and management science (Zhang et al., 2010). Many methods can be used, of which the most common are trend analysis (Liu, 2012), the fuzzy comprehensive evaluation method (Prato, 2009), system dynamics (Feng et al., 2008; Dang and Guo, 2012), multi-objective decision-making and analysis (Xu and Cheng, 2000), the large-scale system theory, the optimization method, and the projection pursuit approach (Zhang and Guo, 2006; Liu and Borthwick, 2011). Trend analysis is based on empirical analysis of some socioeconomic indices under water resources constraints. The fuzzy comprehensive evaluation method is a common assessment method based on a set of index systems. System dynamics can reflect the interaction and feedback mechanism between human activities and the water resources system. Multi-objective decision-making and analysis can obtain the maximum sustainable socioeconomic scale under a series of water environmental and resources constraints. The large-scale system theory can use the idea of decomposition and coordination to solve a large-scale system problem. The optimization method finds the global optimal solution for complex problems based on given criteria.
Finally, the projection pursuit approach is a newer statistical method for solving multi-dimensional socioeconomic and water resources system problems. From a management perspective, the large-scale system theory and the optimization method are most appropriate, because the former captures the interaction between the socioeconomic and water resources systems, and the latter is convenient for management and decision-making. In this study, we developed a quantitative method for determining WRCC based on these two theories, and adapted it into a distributed model that considers the spatial differences of the economic development level and water resources conditions of the study area. The research framework was as follows: (1) The study area was divided into 60 subregions based on the intersections of 18 administrative divisions of Henan Province and 21 third-level sub-catchments of the four first-level river basins. The division of sub-catchments was undertaken according to The Technical Outline of National Comprehensive Planning of Water Resources (GIWRHPD, 2006). The study area was divided in this way because the spatial differences among available water resources are determined by the third-level sub-catchments, in which rainfall and runoff conditions differ, whereas those of water demand are determined by the administrative divisions and associated with economic development levels. The subregions were then connected by hydraulic relationships based on the river systems of the study area (Fig. 1), representing the transfer processes of water and contaminants between the subregions. (2) A distributed hydro-economic model, used to represent the interaction between socioeconomic development and water resources exploitation, was developed based on the large-scale system theory. (3) Based on the hydro-economic model, an optimal model for calculating WRCC was developed.
The model takes the maximum socioeconomic scale as the optimization objective, and the maintenance of ideal water quality in water function areas and the guarantee of basic water supply and livelihoods as the constraint conditions. (4) An assessment method with a standard water resources carrying level, which reflects the future development potential of the subregions, was developed. (5) The WRCC of the 60 subregions in Henan Province was calculated for different development periods, and the main factors restricting socioeconomic development of Henan Province were identified with these models.

2.2 Development of hydro-economic model

A hydro-economic model was developed to describe the interaction between the socioeconomic and water resources systems in Henan Province. This model consists of three calculation modules: the socioeconomic system module, the water quantity module, and the water quality module.

2.2.1 Socioeconomic system module

This module was developed to simulate the pressure of socioeconomic development on the water resources system inside each subregion, covering future socioeconomic development, natural resources consumption, and environmental pollution. This module consists of three components: (1) Forecast of socioeconomic development level: This was used to forecast the future socioeconomic scale according to the planning development goals of Henan Province. To describe the hydro-economic interaction, it is essential to select several representative socioeconomic indices to reflect the intensity of human activity and its influence on the environment.
In this study, the following indices were selected: (a) indices A: population indices, such as total population, the urbanization ratio, the growth rate of the population, and the floating population; (b) indices B: economic indices, such as GDP, the proportion of the economy made up by the three industries, the increasing rate of GDP, livestock number, and grain yield; (c) indices C: indices that were used to reflect the levels of resource consumption and pollution discharge, such as interbasin water transfer, water use quota, water consumption rate, and the pollutant discharge coefficient of various industries; and (d) indices D: indices that were used to reflect the development level of technology, such as the wastewater treatment efficiency, the wastewater reuse rate, the industrial water recycling rate, and the irrigation water utilization coefficient. In this study, we selected 2010 as the baseline year and 2011 – 2030 as the planning years. Indices A and B were used to forecast the socioeconomic scale of the study area in the planning years, and indices C and D were used to simulate the conditions of water resources supply and use as well as pollutant discharge. In the forecast of the socioeconomic scale, the indices of socioeconomic development level (e.g., the urbanization ratio, the growth rate of the population, and the proportion of the economy made up by the three industries) of the study area were first analyzed and forecasted on the basis of the statistical data from 1980 to 2000. Then, the values of the socioeconomic scale indices (e.g., total population, GDP, livestock number, and grain yield) were calculated according to the functional relationship between them and the parameters above. Finally, the forecasted results were verified and adjusted according to the target values. 
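As a minimal illustration of the forecasting step described above, the sketch below projects two scale indices forward with compound growth; the baseline values are the province's 2010 statistics quoted later in the paper, while the growth rates are purely hypothetical placeholders, not the calibrated values of the model.

```python
# Minimal sketch of the index-based forecast: project scale indices forward
# with compound annual growth. The growth rates here are hypothetical; the
# actual model calibrates them per subregion and adjusts against target values.

def forecast_scale(pop0, gdp0, pop_growth, gdp_growth, years):
    """Project total population and GDP with compound annual growth."""
    pop = pop0 * (1.0 + pop_growth) ** years
    gdp = gdp0 * (1.0 + gdp_growth) ** years
    return pop, gdp

# Baseline 2010 values (10^4 persons, 10^8 RMB); assumed annual growth rates.
pop_2030, gdp_2030 = forecast_scale(10018.0, 22953.26, 0.002, 0.06, 20)
```

In the full model, the forecasted values would then be verified and adjusted against the planning target values, as described above.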
(2) Calculation of water use (or demand) and consumption: In the baseline year, the productive (including industrial, agricultural, and service industrial), domestic (including urban domestic and rural domestic), ecological, and total water uses were obtained from the statistical data of water conservancy departments; in the planning years, the productive, domestic, ecological, and total water demands were forecasted with the water quota method, which constructs a functional relationship between the socioeconomic or ecological indices and the water demand quota. The corresponding water consumption was then calculated by multiplying the water use (or demand) by the water consumption rate. (3) Calculation of pollutant discharge: The pollutant discharges arising from point sources (including industrial and urban domestic pollutant discharges) and non-point sources were calculated with the pollutant-discharging coefficient method, which constructs a functional relationship between the socioeconomic indices and pollutant-discharging coefficients. The pollutant-discharging coefficients were determined based on comprehensive consideration of the industrial sources, domestic lifestyle, and pollution source treatment level. The river pollution load was then calculated by multiplying the pollutant discharge quantities into rivers by the river load coefficients, which were derived empirically on the basis of the degradation characteristics of pollutants, the distance from the pollution source to the rivers, and the canalization level of the drainage channels.

2.2.2 Water quantity module

First, the hydraulic connection among the subregions was established, and a sketch of the water resources calculation nodes was made (Fig. 1). All of the variables pertaining to water resources in a subregion, such as the volumes of inflow and outflow, self-produced and transferred water, and water consumption, were determined for a certain period of time.
According to the relationships between the unknown variables and the given variables, the unknown variables were expressed as approximate functions of the given variables and then input into the model. The unknown parameters were identified, the regression results were evaluated according to the calculated multiple correlation coefficients, and the identified parameters were input into the model. The values of all the water volumes were calculated and checked to determine whether the calculations conformed to the law of mass conservation. In this study, the precipitation and inflow data of 33 hydrological stations in Henan Province were used in the calculation of water quantity. For any time period Δt, the water balance equation is:

(1) P + Q_T + Q_in = E + E_i + E_a + E_ud + E_rd + Q_out + ΔV

where P is the volume of precipitation, in m³; Q_T is the volume of transferred water, in m³; Q_in is the volume of water inflow, in m³; E is the volume of evapotranspiration (including water surface and land surface evaporation), in m³; E_i, E_a, E_ud, and E_rd are the volumes of industrial, agricultural, urban domestic, and rural domestic water consumption, respectively, in m³; Q_out is the volume of water outflow, in m³; and ΔV is the change in water storage, in m³.

2.2.3 Water quality module

Along with the water quantity module, a module for water quality was developed based on the law of mass conservation. Because organic pollution was the most serious surface water pollution problem in Henan Province, the permanganate index (COD_Mn) was selected as the only water quality index in the following calculation process. In this study, the COD_Mn concentrations at 21 monitoring sections in Henan Province were used in the calculation of water quality.
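The water balance of Eq. (1) above can be checked with a short helper; the sketch below solves it for the storage change ΔV, with all volumes in consistent units and purely illustrative numbers.

```python
# Water balance of one subregion over a period, following Eq. (1):
#   P + Q_T + Q_in = E + E_i + E_a + E_ud + E_rd + Q_out + dV
# Values are illustrative (e.g., 10^8 m^3), not data for any actual subregion.

def storage_change(P, Q_T, Q_in, E, E_i, E_a, E_ud, E_rd, Q_out):
    """Solve the balance for the change in storage dV."""
    return (P + Q_T + Q_in) - (E + E_i + E_a + E_ud + E_rd + Q_out)

dV = storage_change(P=50.0, Q_T=5.0, Q_in=30.0,
                    E=28.0, E_i=6.0, E_a=18.0, E_ud=3.0, E_rd=2.0,
                    Q_out=25.0)  # inflows total 85.0, outflows total 82.0
```

In the model, this check is what determines whether the calculated water volumes conform to the law of mass conservation.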
For any time period Δt, the following pollutant balance equation is satisfied:

(2) ΔC·V_s = (1 − β)(Q_si·C_in + S_ps + S_ns) − (Q_so + W_u)·C_out

where Q_si and Q_so are the inflow and outflow volumes of surface water, respectively, in m³; C_in and C_out are the COD_Mn concentrations of the inflow and outflow, respectively, in mg/m³; ΔC is the change in COD_Mn concentration over the time period Δt, in mg/m³; S_ps and S_ns are the river pollutant loads from point sources and non-point sources, respectively, in mg; V_s is the volume of surface water storage, in m³; W_u is the volume of total water use, in m³; and β is the dimensionless comprehensive reduction rate of COD_Mn, which is related to river length, flow velocity, and pollutant characteristics, and is determined by a one-dimensional steady-state pollutant migration and transformation equation (Dou et al., 2010):

(3) β = f(x, u, k) = [Q_si·C_in·(1 − e^(−k·x_1/u)) + (S_ps + S_ns)·(1 − e^(−k·x_2/u))] / (Q_si·C_in + S_ps + S_ns)

where x_1 is the river length from the inflow section to the outflow section of a subregion, in m; x_2 is the river length from the sewage outfall section to the outflow section, in m; u is the average flow velocity, in m/s; and k is the degradation coefficient of COD_Mn, in d⁻¹.

2.2.4 Coupling of modules

The modules of the water quantity, water quality, and socioeconomic system are correlated and interdependent, and they must be coupled to simulate and forecast the interaction between the socioeconomic and water resources systems. The coupling relationship of the modules is illustrated in Fig. 2. In the analysis of the entire study area, all subregions were combined based on the input–output relationship of each subregion shown in Fig. 1.

2.3 Development of WRCC quantification model

The objective of the WRCC quantification model is to find the maximum sustainable socioeconomic scale of Henan Province.
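The reduction rate β of Eq. (3) and the concentration change of Eq. (2) can be sketched as follows; the inputs are arbitrary illustrative values (with k converted to consistent per-second units so that the exponent is dimensionless), not monitored data.

```python
import math

# Sketch of Eqs. (2)-(3): comprehensive COD_Mn reduction rate beta, then the
# resulting concentration change. All inputs are illustrative assumptions.

def reduction_rate(Q_si, C_in, S_ps, S_ns, k, x1, x2, u):
    """Eq. (3): load-weighted first-order decay over the two river lengths."""
    total_load = Q_si * C_in + S_ps + S_ns
    decayed = (Q_si * C_in * (1.0 - math.exp(-k * x1 / u))
               + (S_ps + S_ns) * (1.0 - math.exp(-k * x2 / u)))
    return decayed / total_load

def concentration_change(beta, Q_si, C_in, S_ps, S_ns, Q_so, W_u, C_out, V_s):
    """Eq. (2) solved for dC, the change in COD_Mn concentration."""
    return ((1.0 - beta) * (Q_si * C_in + S_ps + S_ns)
            - (Q_so + W_u) * C_out) / V_s

# k given in s^-1 here so that k*x/u is dimensionless; x in m, u in m/s.
beta = reduction_rate(Q_si=1.0e8, C_in=4.0, S_ps=1.5e8, S_ns=0.5e8,
                      k=2.0e-6, x1=2.0e4, x2=8.0e3, u=0.5)
dC = concentration_change(beta, Q_si=1.0e8, C_in=4.0, S_ps=1.5e8, S_ns=0.5e8,
                          Q_so=0.8e8, W_u=0.3e8, C_out=4.2, V_s=2.0e8)
```

Since β weights the decayed fraction of each load component by its share of the total load, it always lies between 0 and 1.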
The socioeconomic scale is usually represented by a series of indices, i.e., urban population, rural population, GDP, industrial added value, agricultural added value, and grain yield. If all of these indices were included in the objective function, solving the model would become a multi-objective optimization problem. For the sake of a convenient solution, all of these indices are assumed to change proportionally with one another during the search for optimal values. Therefore, the objective function is simplified into a single-objective function based on a change ratio α and can be expressed as follows:

(4) Q_WRCC = max f(α·(P_pop, R_GDP, …))

where Q_WRCC is the value of WRCC; α is the change ratio, a percentage used to shrink or magnify the socioeconomic scale when solving the WRCC quantification model; P_pop is the population; and R_GDP is the value of GDP, in RMB. The constraints of the WRCC quantification model are as follows.

(1) Simulation of hydro-economic interaction: The hydro-economic model described above is embedded in the WRCC quantification model as an important constraint. It constructs the relationship between the objective function and the other constraints, realizing the transmission and feedback of data between the socioeconomic and water resources systems.

(2) Water supply constraint: The total water use of each subregion must be less than or equal to its available water supply:

(5) W_p + W_e + W_d ≤ W_as

where W_p, W_e, and W_d are, respectively, the volumes of productive, ecological, and domestic water use, in m³, and W_as is the volume of available water supply, including available surface water, available groundwater, transferred water from outside the subregion, and wastewater reuse, in m³.
(3) Water quality concentration constraint: The calculated COD_Mn concentration in the representative water function areas of each subregion must be less than or equal to the corresponding control objective value:

(6) C ≤ C_S

where C is the calculated value of the COD_Mn concentration, and C_S is the control objective value of the COD_Mn concentration, in mg/L. In this study, 60 water function areas were selected as representative concentration control nodes, according to the following principles: (a) these water function areas were the major stream segments in the subregions; (b) these water function areas received the greatest mass of flow and pollutant afflux in the subregions where they were located; and (c) for subregions in which hydraulic connections were very complex, the number of representative water function areas was increased appropriately.

(4) Socioeconomic development level constraint: Per capita GDP and the per capita share of grain in each subregion must be greater than or equal to a certain living standard:

(7) R_a ≥ R_S

(8) F_a ≥ F_S

where R_a and R_S are the calculated value and the minimum living standard of per capita GDP, respectively, in RMB; and F_a and F_S are the calculated value and the minimum living standard of the per capita share of grain, respectively, in tons. The minimum living standards were obtained from The Future Development Outline of Henan Province (PGHP, 2010). The WRCC quantification model is composed of the objective function and constraints. Applying the numerical iteration method (NIM), an approximate optimal solution that meets all of the constraints is obtained. When searching for the optimum value with the NIM, the actual (or forecasted) socioeconomic data input as the initial value is enlarged or reduced by adjusting α in a certain proportion, judging at each step whether the result satisfies the constraints.
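Constraints (5)-(8) amount to a simple feasibility test applied to each subregion; the sketch below expresses them directly, with all threshold values invented for illustration. Note that Eqs. (7)-(8) are "at least" conditions, following the text's requirement that per capita GDP and grain share meet minimum living standards.

```python
# Feasibility test for one subregion under constraints (5)-(8).
# All numeric values below are illustrative assumptions, not planning data.

def feasible(sub):
    """True if the subregion satisfies all four WRCC constraints."""
    water_ok = sub["W_p"] + sub["W_e"] + sub["W_d"] <= sub["W_as"]  # Eq. (5)
    quality_ok = sub["C"] <= sub["C_S"]                             # Eq. (6)
    gdp_ok = sub["R_a"] >= sub["R_S"]                               # Eq. (7): at least the minimum standard
    grain_ok = sub["F_a"] >= sub["F_S"]                             # Eq. (8): at least the minimum standard
    return water_ok and quality_ok and gdp_ok and grain_ok

sub = {"W_p": 6.0, "W_e": 1.0, "W_d": 2.0, "W_as": 10.0,  # 10^8 m^3
       "C": 5.5, "C_S": 6.0,                              # mg/L COD_Mn
       "R_a": 3.2e4, "R_S": 2.5e4,                        # RMB per capita
       "F_a": 0.42, "F_S": 0.40}                          # t per capita
```

In the full model, the inputs to this test would come from the coupled hydro-economic simulation at the candidate scale, not from fixed values.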
The search process is repeated until the difference in the α value between two adjacent searches is less than a certain error coefficient (0.01 in this study). At this point, the adjusted socioeconomic scale is the WRCC. When solving the WRCC quantification model, a situation can arise in which the Q_WRCC of a downstream subregion is zero or very small due to severe pollution of the inflow from upstream. The reason is that, because all the subregions in the study area are closely linked through hydraulic relationships, the water use and pollutant discharge of a subregion have a certain impact on downstream subregions, especially those in urban districts. Hence, in such cases the model further reduces the socioeconomic scale of the adjacent upstream subregions, in addition to adjusting the downstream subregions' own scale. Finally, a balance point that takes into account the coordinated development of the upstream and downstream subregions may be obtained.

2.4 Assessment of water resources carrying level

The water resources carrying level is used to express the degree of socioeconomic development pressure on the water resources system (Dou et al., 2010). The water resources carrying level is the ratio of the actual (or forecasted) socioeconomic scale in the baseline (or planning) year to the WRCC. When the water resources carrying level is greater than 1.0, the actual (or forecasted) socioeconomic scale has exceeded the WRCC, and the overload becomes more severe as the level increases; when it is equal to 1.0, the socioeconomic scale is at the threshold value of the WRCC; and when it is less than 1.0, the socioeconomic scale is within the WRCC range, and the development potential becomes greater as the level decreases.
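The α-search of the NIM described in Section 2.3 behaves like a bisection on the change ratio; the sketch below illustrates it with a stand-in feasibility function (the real model would run the full hydro-economic simulation and constraint checks at each step).

```python
# Bisection-style numerical iteration on the change ratio alpha: grow the
# socioeconomic scale while feasible, shrink it otherwise, and stop when two
# adjacent searches differ by less than the error coefficient (0.01 here,
# as in the study). `scale_is_feasible` stands in for the full simulation.

def find_wrcc_ratio(scale_is_feasible, alpha_lo=0.0, alpha_hi=4.0, tol=0.01):
    """Largest alpha (within tol) whose scaled scale satisfies all constraints."""
    while alpha_hi - alpha_lo > tol:
        alpha = 0.5 * (alpha_lo + alpha_hi)
        if scale_is_feasible(alpha):
            alpha_lo = alpha   # feasible: try a larger socioeconomic scale
        else:
            alpha_hi = alpha   # infeasible: reduce the scale
    return alpha_lo

# Toy rule: water use grows linearly with alpha while supply is fixed, so
# the feasibility boundary sits at alpha = 10.0 / 8.0 = 1.25.
alpha_star = find_wrcc_ratio(lambda a: a * 8.0 <= 10.0)
```

Multiplying the baseline indices (population, GDP, and so on) by the returned α then gives the WRCC in the sense of Eq. (4).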
Furthermore, the water resources carrying level is divided into five categories: less than or equal to 0.6, the fully loaded category; between 0.6 and 1.0, the suitably loaded category; between 1.0 and 1.5, the mildly overloaded category; between 1.5 and 2.0, the moderately overloaded category; and greater than or equal to 2.0, the severely overloaded category.

3 Results and discussion

3.1 Calibration and validation

In the simulation of the hydro-economic interaction process, the twenty sensitive parameters listed in Table 1 were selected for Henan Province's hydro-economic model, including seven socioeconomic parameters, seven water quantity parameters, and six water quality parameters. In order to demonstrate the simulation performance of the hydro-economic model, three indices, the average relative error, the correlation coefficient, and the efficiency coefficient (Zhang et al., 2013), were used to measure the simulation precision. Both discharge and COD_Mn concentration were simulated using this model, and the applicability of the model was evaluated by investigating the processes of change in these variables. The simulation results are summarized in Table 1. In the water quantity simulation, there were 21 stations whose absolute average relative error was less than 15%, accounting for 64% of all the stations. The average correlation coefficient was 0.72, and the average coefficient of efficiency was 0.45. In the water quality simulation, there were 13 sections whose absolute average relative error was less than 45%, accounting for 62% of all the sections, and 15 sections whose correlation coefficients were greater than 0.40, accounting for 71% of all the sections. According to these results, we concluded that the model was satisfactory and reasonably representative of the hydro-economic interaction.
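The five carrying-level categories defined in Section 2.4 can be expressed as a small helper; the category labels follow the text, while the handling of values falling exactly on a boundary is an assumption here.

```python
# Map a water resources carrying level (actual scale / WRCC) to its category,
# using the thresholds of Section 2.4. Exact-boundary handling is assumed.

def carrying_category(level):
    if level <= 0.6:
        return "fully loaded"
    elif level <= 1.0:
        return "suitably loaded"
    elif level <= 1.5:
        return "mildly overloaded"
    elif level < 2.0:
        return "moderately overloaded"
    else:
        return "severely overloaded"
```

For example, the province-wide levels reported below, 0.98 in 2010 and 1.31 in 2030, fall into the suitably loaded and mildly overloaded categories, respectively.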
3.2 WRCC in baseline year

We selected 2010 as the baseline year, and the WRCC of the 60 subregions in Henan Province under actual conditions was determined using the WRCC quantification model. The calculation conditions applied in the WRCC quantification model in the baseline year were as follows: (a) hydrological conditions were adopted based on statistical precipitation, evaporation, and runoff data from 2010; (b) socioeconomic conditions were adopted based on statistical socioeconomic indices, such as population, GDP, livestock number, and grain yield, from 2010; (c) water resources conditions were adopted based on statistical data on water resources exploitation and utilization from 2010; and (d) water environmental conditions were adopted based on monitored water quality concentrations of the rivers and statistical pollutant discharge and pollutant load amounts of the rivers from 2010. Based on these calculation conditions, the WRCC of each subregion was calculated using the model, and the water resources carrying level was determined. Then, the WRCC values of the 18 cities and of the entire province were aggregated according to the calculation results. The results are shown in Table 2, and the spatial distribution of the water resources carrying level is shown in Fig. 3. The results in Table 2 indicate that the water resources carrying level of Henan Province was 0.98 in 2010, which falls into the suitably loaded category. The loaded total population was 10 265.0 × 10⁴ and the loaded GDP value was 23 519.06 × 10⁸ RMB. These were 102.5% of the actual values for the province (the actual total population was 10 018.0 × 10⁴ and the actual GDP value was 22 953.26 × 10⁸ RMB in 2010). There were 33 overloaded subregions, accounting for 55% of the total subregions. Of these 33 overloaded subregions, nine fell into the severely overloaded category, seven into the moderately overloaded category, and 17 into the mildly overloaded category.
Despite the fact that 2010 was a wet year and water resources in that year were correspondingly abundant, the actual socioeconomic scale of some subregions was nonetheless found to exceed their WRCC. The subregions with higher WRCC values included some subregions in the headwaters of the Huaihe River and its major branches, such as subregion 13 (Luoyang City), subregion 32 (Nanyang City), subregion 35 (Nanyang City), subregion 49 (Xinyang City), and subregion 51 (Xinyang City). These subregions lie in the southern part of Henan Province, where there is abundant precipitation; their water consumption and pollutant discharge are low because of their slower production, and together these factors give them a high WRCC. The others were the subregions along the main stream of the Yellow River, such as subregion 2 (Zhengzhou City), subregion 7 (Kaifeng City), subregion 9 (Luoyang City), subregion 19 (Xinxiang City), and subregion 60 (Puyang City). These subregions have abundant water flowing through them even though they generate little water themselves because of scarce precipitation and small catchment areas. Meanwhile, water consumption and pollutant discharge in these subregions are low due to their small socioeconomic scale. The subregions with lower WRCC values included some subregions in the Haihe River Basin, such as subregion 4 (Zhengzhou City), subregion 20 (Jiaozuo City), subregion 30 (Jiyuan City), subregion 43 (Anyang City), subregion 44 (Anyang City), subregion 45 (Anyang City), subregion 57 (Puyang City), and subregion 59 (Puyang City). The Haihe River Basin is one of the areas that suffer from severe water shortage, in which both precipitation and the amount of water generated are low. At the same time, these subregions are relatively isolated, and replenishment of water and transferred water both remain at low levels.
The others are the subregions in the central urban areas of Henan Province, such as subregion 1 (Zhengzhou City), subregion 6 (Kaifeng City), subregion 17 (Xinxiang City), subregion 24 (Xuchang City), subregion 53 (Zhoukou City), subregion 54 (Zhumadian City), subregion 46 (Shangqiu City), and subregion 47 (Shangqiu City). Although these subregions have certain available water resources, their water consumption and pollutant discharge are large because of their large socioeconomic scale and unreasonable industrial structure. For example, the dominant industries are energy, machinery, and raw material industries in subregion 1 (Zhengzhou City); modern industry in subregion 17 (Xinxiang City); and machinery manufacturing and fur processing industries in subregion 24 (Xuchang City).

3.3 WRCC in planning years

The planning years were from 2011 to 2030, and 2030 was selected as a key time node to be analyzed and compared with the baseline year in this study. The calculation conditions in the planning years were as follows: (a) hydrological conditions based on statistical data on precipitation, evaporation, and runoff at the hydrological frequency of 50%; (b) socioeconomic conditions based on forecasted socioeconomic index values from 2011 to 2030, according to future development goals from the Central Plains Economic Zone Development Planning (PGHP, 2012); (c) water resources conditions based on the planning water resources utilization level from 2011 to 2030, according to the Henan Province Water Resources Integrated Planning (HWPECC, 2008); and (d) water environmental conditions based on the controlled water quality concentration in rivers and forecasts of pollutant discharge and load amounts of rivers from 2011 to 2030, according to environmental protection goals from the Henan Province Environmental Protection Planning (HPEPB, 2012).
Based on the calculation conditions described above, the WRCC of each subregion was calculated and the water resources carrying level was determined. The aggregated results for the entire province from 2011 to 2030 are shown in Table 3, and the spatial distribution of the water resources carrying level in 2030 is shown in Fig. 4. The results in Table 3 indicate that the water resources carrying level of Henan Province will be 1.31 in 2030, which falls into the mildly overloaded category. The loaded total population of the province in 2030 will be 8 589.1 × 10⁴, a decrease of 1 675.9 × 10⁴ from 2010, or a decrease rate of 16.33%. The loaded urban population was predicted to increase by 1 595.4 × 10⁴ over that in 2010, an increase rate of 41.84%. The loaded rural population was predicted to decrease by 3 271.3 × 10⁴ from that in 2010, a decrease rate of 50.70%. This is due to the rapid urbanization predicted to occur from 2010 to 2030, which will result in significant migration from the countryside to the city. The loaded GDP value of the province was predicted to be 41 312.21 × 10⁸ RMB in 2030, an increase of 17 793.15 × 10⁸ RMB over that in 2010, or an increase rate of 75.65%. The proportions of agricultural, industrial, and service industrial added values were predicted to change dramatically, from 13.9 : 51.5 : 34.6 in 2010 to 4.5 : 42.4 : 53.1 in 2030. In the future, the service industry, rather than traditional industry, will be the dominant industry. There are 38 subregions predicted to be overloaded, accounting for 63% of the total number of subregions. Of these 38 subregions, 12 were predicted to be severely overloaded, nine moderately overloaded, and 17 mildly overloaded. For 29 subregions, the water resources carrying level was predicted to rise (i.e., worsen) from 2010 to 2030.
However, there were also 16 subregions in which the water resources carrying level was predicted to improve from 2010 to 2030. For example, subregion 20 (Jiaozuo City), subregion 29 (Jiyuan City), subregion 30 (Jiyuan City), subregion 38 (Hebi City), and subregion 39 (Sanmenxia City) will move into the suitably loaded category. The water resources condition (i.e., available water resources) is a crucial factor restricting socioeconomic development of water-deficient areas, such as the northern and central areas of Henan Province. According to the calculation results, both the water resources amount and the WRCC value in 2010 are greater than those predicted in 2030. The main reason is that the hydrological conditions for 2010 are based on actual data, and 2010 was a wet year, while the hydrological conditions for 2030 are based on designed data, and 2030 is predicted to be a normal year. In addition, the influences of future socioeconomic development and technological progress on the WRCC are significant in the following ways: (a) The increase of the urbanization ratio and GDP will lead to the growth of the water demand quota and total water demand (Fig. 5(a) and (c)), which will contribute to a decrease in the WRCC. (b) The irrigation water utilization coefficient, industrial water recycling rate, and wastewater reuse rate will improve with the progress of science and technology (Fig. 5(d)), which will contribute to an increase in the WRCC. (c) Total water supply and interbasin water transfer will increase due to the construction of various water transfer projects (Fig. 5(b)), which will contribute to an increase in the WRCC. Because the indices of the GDP growth rate, urbanization ratio, irrigation water utilization coefficient, industrial water recycling rate, wastewater reuse rate, and interbasin water transfer vary temporally and spatially, the trends in the WRCC also vary among the subregions and over time.
From the temporal perspective, the number of overloaded subregions exhibits an increasing trend over the period from 2011 to 2016 and a decreasing trend over the period from 2017 to 2030 (Fig. 6(a)), and the loaded population exhibits the opposite trend, decreasing at first and then increasing (Fig. 6(b)). In the former period, the negative factors (e.g., the GDP growth rate and urbanization ratio) are more influential than the positive factors (e.g., the irrigation water utilization coefficient, industrial water recycling rate, wastewater reuse rate, and interbasin water transfer) when the water resources conditions are held consistent, but in the latter period, the positive factors are more influential on the WRCC than the negative factors. A significant reason is that the middle route of the South-to-North Water Transfer Project of China will be completed and begin transferring water in 2017. With this project, Henan Province could gain an additional 3.769 × 10⁹ m³ of freshwater each year, accounting for 14.2% of the province's available water supply in 2017. Basically, the water demand of 11 large and medium-sized cities in Henan Province along the middle route will be met, but the water demand of other cities will remain unchanged. Meanwhile, the loaded GDP value exhibits a consistently increasing trend over the period from 2011 to 2030 (Fig. 6(c)). On the whole, the indices of socioeconomic development and urbanization show increasing trends, but those of the rural population and the proportion of agricultural added value show decreasing trends (Fig. 6(d)).

4 Conclusions

In this paper, the concept of WRCC was discussed, and some calculation and evaluation methods for WRCC were presented.
Using these methods, the WRCC of Henan Province was determined for the baseline and planning years, and the following conclusions are drawn: (1) A distributed quantitative model for WRCC based on large-scale system theory and optimization methods was developed, and Henan Province's WRCC was calculated for different development periods. (2) According to the simulation results of the model, there were 33 overloaded subregions in 2010; the loaded total population was 10 265.0 × 10⁴, and the loaded GDP value was 23 519.06 × 10⁸ RMB. The province's overall water resources carrying level was suitably loaded. (3) Based on the planning development goals, the WRCC from 2011 to 2030 was calculated. According to the simulation results, there will be 38 overloaded subregions in 2030, the loaded total population will be 8 589.1 × 10⁴, and the loaded GDP value will be 41 312.221 × 10⁸ RMB. The province's water resources carrying level will be mildly overloaded. (4) The WRCC is influenced by many factors, of which available water resources is the most crucial. The main factors affecting the WRCC differ across the period from 2011 to 2030. In the period from 2011 to 2016, the negative factors (e.g., the increasing rate of GDP and urbanization ratio) that contribute to a decrease of the WRCC are more influential than the positive factors (e.g., the irrigation water utilization coefficient, industrial water recycling rate, wastewater reuse rate, and interbasin water transfer) that contribute to an increase of the WRCC when the water resources conditions are consistent, and the converse is true for the latter period from 2017 to 2030.
|
[
"CLARKE",
"DANG",
"DOU",
"FALKENMARK",
"FENG",
"FENG",
"GENERALINSTITUTEOFWATERRESOURCESANDHYDROPOWERPLANNINGANDDESIGNGIWRHPDMINISTRYOFWATERRESOURCES",
"GRAYMORE",
"HENANPROVINCEENVIRONMENTALPROTECTIONBUREAUHPEPB",
"HENANWATERANDPOWERENGINEERINGCONSULTINGCOLTDHWPECC",
"HUNTER",
"LANE",
"LI",
"LIU",
"LIU",
"LONG",
"NATIONALRESEARCHCOUNCILNRC",
"OFOEZIE",
"OH",
"THEPEOPLESGOVERNMENTOFHENANPROVINCEPGHP",
"THEPEOPLESGOVERNMENTOFHENANPROVINCEPGHP",
"PRATO",
"QUEENSLANDUNIVERSITYOFTECHNOLOGYQUT",
"REES",
"SEIDL",
"SHI",
"XIA",
"XU",
"ZAI",
"ZHANG",
"ZHANG",
"ZHANG"
] |
bc4dcd49f50941f0918111feaf2af8aa_Lagophthalmos of the medial upper eyelid after Mohs surgery of the medial canthus_10.1016_j.jdcr.2020.10.028.xml
|
Lagophthalmos of the medial upper eyelid after Mohs surgery of the medial canthus
|
[
"Donaldson, Matthew R.",
"Morrell, Travis J."
] | null |
Introduction Lagophthalmos is characterized by incomplete eyelid closure and can lead to exposure keratopathy. It is often attributed to injury or pathology of the facial nerve innervating the orbicularis muscles. Lagophthalmos can be seen after eyelid surgeries that involve transection of terminal facial nerve fibers within the suborbicularis fascial plane. However, it has not been reported after Mohs surgery (MS) of the medial canthus. We present a case of transient medial upper eyelid lagophthalmos and hypometric blink resulting from MS of the medial canthus. Transient lagophthalmos is a well-described complication after external dacryocystorhinostomy (DCR), an ophthalmologic procedure in which a cutaneous incision is placed in the same region as our patient's defect. 1 Post-DCR lagophthalmos is hypothesized to result from injury to variant orbicularis innervation via the “angular” branch of the facial nerve. 2-4 Case report A 75-year-old woman with an infiltrative basal cell carcinoma of the left medial canthus and nasofacial crease was treated with MS. After clear margins were achieved, the defect extended into skeletal muscle and was closed with an island pedicle flap, as shown in Fig 1 . The patient was found to have medial upper eyelid lagophthalmos and delayed (hypometric) blink at a 4-week follow-up ( Fig 2 ). The flap was well healed, with appropriate lower lid position and no evidence of ectropion. The patient developed exposure keratopathy and was managed with lubricating drops and nightly occlusion. Complete lid closure, normalization of the blink, and resolution of keratitis were noted 3 months later. Discussion Incomplete eyelid closure in this patient was caused by partial paresis of the medial upper eyelid rather than by malposition of the lower lid. As upper lid lagophthalmos is usually attributed to a facial nerve injury, this finding was unexpected given the defect's location and the predicted course of the facial nerve. 
A literature search revealed that postoperative medial upper eyelid lagophthalmos is well described after DCR for lacrimal duct obstruction. The incision for external DCR is placed in a region corresponding with our patient's defect, originating roughly 1 cm medial to the insertion of the medial canthal tendon. The incision extends 1-1.5 cm inferiorly along the nasal sidewall and is taken down to the periosteum prior to osteotomy and lacrimal sac and nasal mucosal flap formation. This incision may also lie along the nasojugal fold or at the eyelid margin. 2-4 Vagefi et al reported that 16 of 215 patients (7.4%) undergoing external DCR experienced postoperative lagophthalmos and/or hypometric blink. Nasojugal, vertical, and eyelid margin incisions were all associated with this complication. Resolution of lagophthalmos was seen in all the patients by 32 weeks. In another series of 79 patients undergoing external DCR, 28.6% experienced lagophthalmos and hypometric blink of the upper eyelid. 2 All cases in this series were related to an incision starting halfway between the nasal bridge and medial canthus and extending obliquely in an inferomedial fashion. Findings resolved by 5 weeks in all patients. An additional 3 cases (out of 10 DCR patients) of medial upper eyelid lagophthalmos were reported, with resolution by 3 months. 3 The authors of these studies did not feel that local anesthetic myotoxicity, damage to the orbicularis muscle inferior to the medial canthal tendon, or even disinsertion of orbicularis from the periosteum adequately explained the findings. 2-4 Post-DCR lagophthalmos was instead attributed to facial nerve injury at the location of the cutaneous incision. The orbicularis oculi are innervated by zygomatic, buccal, and temporal branches of the facial nerve. These branches are thought to form superior (temporal and zygomatic) and inferior (zygomatic and buccal) plexuses that course lateral to medial to insert into the orbicularis complex. 
2-4 Nemoto et al 5 have demonstrated that a terminal branch of the buccal nerve (superficial buccal branch) courses across the cheek to run over the medial palpebral ligament with the angular artery, as shown in Fig 3 . 6 In the “triangular window” near our patient's defect, the nerve runs between the inferomedial orbicularis and levator labii superioris alaeque nasi and over the levator labii superioris. These branches variably innervate the orbicularis oculi, procerus, and corrugator supercilii. Forty-two percent of examined specimens had branches innervating the upper orbicularis oculi. Caminer et al have described the superficial buccal branch of the facial nerve as the “angular” nerve. Their cadaveric dissections revealed a confluence of the zygomatic and buccal nerve branches coursing medially across the cheek to the medial canthus. They demonstrated that the angular nerve innervated the corrugator and procerus. Presumably, some patients rely on this angular nerve to control upper orbicularis contraction if minimal redundancy is provided by other branches. 7 In the context of the DCR literature and orbicularis innervation summarized above, our patient may have had an injury to the angular branches of the facial nerve. The defect was deep and extended through skeletal muscle overlapping the predicted path of the nerve through the triangular window to the medial canthal tendon. Alternatively, lagophthalmos may have resulted from damaged muscle fibers, postoperative edema, or an unidentified stimulus. Our patient's postoperative edema was significant but resolved within days, while her upper lid pathology persisted for weeks. The muscle fibers affected by tumor extirpation were inferior to the medial canthal tendon and would be less likely to affect upper eyelid function. Lower lid ectropion is a feared complication of medial canthus surgery and can lead to exposure keratopathy. 
However, this patient's lower lid was in an appropriate position without scleral show or ectropion and did not seem to contribute to her upper eyelid pathology. 5 To our knowledge, this is the first reported case of upper eyelid lagophthalmos resulting from MS of the medial canthal region. This case may represent a rare confluence of defect location and depth and facial nerve variation. As rapid recovery seems to be the rule after DCR, we may have missed other cases. In either scenario, this region is not necessarily a “danger zone” for facial nerve injury. If lagophthalmos is observed, resolution is likely, but measures to ensure eye lubrication should be taken to reduce the risk of exposure keratopathy until muscle function normalizes.
|
[
"JORDAN",
"VAGEFI",
"ODAT",
"HAEFLIGER",
"OUATTARA",
"NEMOTO",
"CAMINER"
] |
e2a32ae6c0dc4a6e8b045c5136e93c78_Climaticpark-py - A python framework for the climatic simulation of vehicular parking lots_10.1016_j.softx.2025.102271.xml
|
Climaticpark-py - A python framework for the climatic simulation of vehicular parking lots
|
[
"Nshuti, Hyacinthe-Marie",
"Morales-García, Juan",
"Muñoz, Andrés",
"Navarro, Pedro J.",
"Alonso, Diego",
"Sanchez, Pedro",
"Álvarez, Bárbara",
"Terroso-Saenz, Fernando"
] |
Private vehicles are the dominant mode of commuting worldwide, accounting for a significant share of energy use and CO2 emissions. Urban growth has led to the expansion of surface parking lots, where vehicles are often exposed to intense solar radiation, raising cabin temperatures above 60 °C and increasing air-conditioning energy demand. Installing sunshades can mitigate this issue, but their effective deployment requires detailed analysis. This paper presents climaticpark-py, a Python library that simulates parking lots considering geographic and ambient conditions along with mobility patterns, enabling evaluation of sunshade configurations to optimize thermal comfort and energy efficiency even before physical implementation. The present software intends to contribute to the United Nations' Sustainable Development Goal Make cities and human settlements inclusive, safe, resilient and sustainable.
|
Code metadata: Current code version: v1.0. Permanent link to code/repository used for this code version: https://github.com/ElsevierSoftwareX/SOFTX-D-25-00366 . Permanent link to Reproducible Capsule: https://mybinder.org/v2/gh/fterroso/climaticpark/0d818f2dd89655ee5afd73d8594ff7d338f874f0?urlpath=lab%2Ftree%2Fdemo.ipynb . Legal Code License: GPL 2.0. Code versioning system used: git. Software code languages, tools, and services used: python 3.10. Compilation requirements, operating environments & dependencies: numpy, pandas, scipy, scikit-learn, matplotlib, folium, tensorflow, keras, geopandas, fiona, suncalc-py. If available, link to developer documentation/manual: n/a. Support email for questions: fernando.terroso@upct.es. 1 Motivation and significance Human mobility surveys consistently reveal that private vehicles are the predominant mode of commuting worldwide. For instance, the mobility patterns of the European population are predominantly centered around private vehicle use, with 50% of individuals relying on private vehicles daily [1] . Over the past decade, the transportation sector has consumed 25% of global energy, with 44% of this attributed to personal vehicles, and daily commuting accounts for approximately 25% of CO2 emissions in Europe [2] . As urban populations continue to grow and the use of private vehicles remains prevalent, public institutions have promoted the development of urban parking facilities in recent decades. These facilities are predominantly composed of surface parking lots, which often provide a limited number of covered spaces. Vehicles parked in these areas are exposed to varying intensities and durations of solar radiation, influenced by factors such as the time of day, season, and the specific location within the lot. This exposure can cause cabin temperatures to exceed 60 °C during prolonged periods of direct sunlight [3] . 
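For a rough sense of scale (a back-of-the-envelope sketch of our own, not a figure from the cited studies), the sensible heat that must be removed from the cabin air alone follows Q = m · c · ΔT; the real air-conditioning load is considerably larger, since seats, panels, and ongoing solar gain also store and add heat. The cabin volume and COP below are assumptions:

```python
# Back-of-the-envelope estimate (illustrative assumptions, not cited
# figures): energy to cool only the cabin *air* from 60 degC to 24 degC.
AIR_DENSITY = 1.2       # kg/m^3, air near sea level
SPECIFIC_HEAT = 1005.0  # J/(kg*K), specific heat of air
CABIN_VOLUME = 3.0      # m^3, typical passenger-car cabin (assumed)

def cabin_air_cooling_energy(t_start_c, t_target_c, volume_m3=CABIN_VOLUME):
    """Sensible heat (J) to remove from the cabin air: Q = m * c * dT."""
    mass = AIR_DENSITY * volume_m3
    return mass * SPECIFIC_HEAT * (t_start_c - t_target_c)

q = cabin_air_cooling_energy(60.0, 24.0)   # ~130 kJ for the air alone
electrical = q / 2.5                       # assuming an A/C COP of 2.5
# Seats, panels, and solar gain make the true load far larger than q.
```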
As a result, the vehicle's air-conditioning system must consume a substantial amount of energy to reduce the cabin temperature to a comfortable level. In this context, vehicular parking lots can include different types of sunshades in their premises to reduce the vehicles' sun exposure and, thus, their cabin temperatures. However, besides the cost of the installation, the effective deployment of such structures in a vehicular parking lot calls for a detailed preliminary analysis of its geographical location, ambient conditions and the behavioral patterns of its drivers. This paper introduces climaticpark-py , a python library designed to simulate a vehicular parking lot accounting for its geographic ambient conditions, such as temperature and sunlight exposure, as well as its drivers' mobility patterns. By following a modular approach, this library allows simulating the impact of installing different configurations of sunshades in a parking lot in terms of the estimated cabin temperatures and fuel consumption of the vehicles. Thus, it can be used by parking-lot operators to explore various sunshade designs and placements to optimize coverage and functionality before their actual installation in the physical environment. In this way, this library aims to contribute to one of the United Nations' Sustainable Development Goals, which seeks to make cities and human settlements inclusive, safe, resilient and sustainable ( https://sdgs.un.org/goals ). The rest of this paper is organized as follows. Section 2 reviews current software solutions for vehicular mobility simulation. In Section 3 , we detail the software components and functionalities of the library. Section 4 presents an illustrative example, whereas Section 5 puts forward the library's impact. Finally, Section 6 outlines directions for future development. 2 Related work In the road-traffic and microclimate simulation field, it is possible to find several tools that focus on different aspects and dimensions. 
In this section, we review some of these libraries. To begin with, a software known as SPA [4] focuses on scheduling parking spaces based on demand and estimating vehicle parking durations, employing policies like Worst-Fit (WF-SPA), Best-Fit (BF-SPA), and Parking Behavior Forecast (PBF-SPA). In terms of parking management, both SPA and climaticpark-py aim to optimize parking resource management, though they differ significantly in their respective methodologies: climaticpark-py simulates parking conditions and behavior based on environmental variables. Another interesting software is TransCAD [5] , a comprehensive transportation planning tool that supports multimodal analysis across different geographic scales, from local to global. It models passenger and freight transportation, network analysis, and facility location. While climaticpark-py is specialized for parking resource management, focusing on simulating parking behavior considering environmental factors, TransCAD handles broad transportation systems. The software ABMTrafSimCA [6] , a traffic simulation tool, allows modeling micro-scale road traffic dynamics using cellular automata (CA) and a multi-agent approach. The tool focuses on vehicle behaviors like acceleration, braking, and distance management, offering insights into individual vehicle interactions within traffic flow. climaticpark-py , in contrast, centers on parking resource management, predicting parking behavior and optimizing parking allocation based on environmental conditions such as weather and lot occupancy, as mentioned above. Besides, CORSIM [7] integrates NETSIM and FRESIM to simulate detailed traffic flow, congestion, and vehicle interactions on freeways and surface streets. It operates at a broader, macro-to-meso level, addressing complex vehicle interactions within large-scale networks. 
In contrast, climaticpark-py focuses on localized parking efficiency, simulating parking demand and behavior under given environmental conditions, along with the side effects on the energy consumption of the vehicles. Another prominent traffic simulation tool is SUMO [8] , an open-source platform primarily focused on traffic management, vehicular communications, and network modeling. This tool simulates large-scale transportation networks, providing insights into traffic flow, congestion, and vehicle interactions. In this context, PyPML [9] is an interesting extension that adds parking management to SUMO. Specifically, it enriches parking data by including the current parking occupancy, the occupancy over time, and the intention of using the parking lot by vehicle type. PyPML also supports multi-parking scenarios in SUMO (for instance, the authors simulate parking lots in the Principality of Monaco and the neighboring French cities). Thus, SUMO and PyPML analyze broader traffic network dynamics, including parking lots, while climaticpark-py concentrates on optimizing parking at a localized level, also considering environmental forecasting. Thus, both tools can be considered as complementary. In this context, the parking lot simulator proposed in [10] uses the agent-based modeling framework NetLogo to enable micro-scale simulation of parking lot dynamics. This tool simulates driver behavior when occupying available parking spaces. Additionally, the microscopic parking simulator described in [11] allows modeling occupancy behavior in both on-street and off-street parking lots with limited capacity. In summary, while all the aforementioned tools aim to enhance transportation systems, climaticpark-py distinguishes itself through its focused approach to parking resource management, emphasizing real-time optimization using environmental and parking data, in contrast to the broader traffic simulation and planning tools discussed. 
Regarding microclimate simulation and forecasting, the climate-library [12] provides functions to preprocess and compute derived indices from climate data. Besides, ENVI-met [13] is one of the most well-known and established numerical tools for microclimate simulation, based on computational fluid dynamics. There are also some libraries, such as [14] , that apply deep learning techniques to forecast microclimate conditions. In this context, it is important to note that our library focuses specifically on modeling the temperature conditions inside vehicle cabins within a parking lot. To achieve this, it is necessary to consider certain climatic factors within the parking lot premises; however, modeling those external conditions is not the primary goal of climaticpark-py . 3 Software description In this section, we provide a detailed description of the climaticpark-py architecture and its key functionalities, along with some relevant code snippets. 3.1 Software architecture The climaticpark-py library has been entirely developed in Python following the Object-Oriented Programming Paradigm (OOP). This allowed structuring the code in a modular and extensible design. In that vein, Fig. 1 shows its Unified Modeling Language (UML) class diagram comprising 6 different classes. Here, we briefly describe each of them. To begin with, the core class of the library is ClimaticPark (center in Fig. 1 ). This class defines the entry point for interacting with the library and launching simulations of different scenarios for a Target Parking Lot (TPL) defined by the client. Hence, its constructor receives as input parameters different geospatial files defining the infrastructure and location attributes of the TPL along with its sunshade structures. Furthermore, it is in charge of orchestrating the interactions and execution of the other classes in the library during a simulation workflow, as we will put forward in Section 3.2 . The AmbientModule class (AM) (bottom right corner in Fig. 
1 ) is in charge of computing and generating the simulation aspects related to the climatic conditions in the TPL, based on the geographical location provided to the ClimaticPark class. In the current version of the library, this module focuses on estimating the ambient temperature in the geographical location of the parking lot. In terms of implementation, the AM relies on a Recurrent Neural Network implemented with tensorflow library. 2 2 https://www.tensorflow.org . Concerning the ShadowModule class (SM) (bottom left corner in Fig. 1 ), it is responsible for simulating the shadows projected by the sunshades in the TPL due to the sunlight during the target simulation days. In that sense, the computation of the sun’s location at each moment is performed by the third-party library suncalc-py . 3 As a result, this class also computes the coverage rate of the TPL’s spaces on an hourly basis for the whole simulation period. The coverage rate of a parking space is the proportion of this space’s spatial area covered by a shadow. 3 https://github.com/kylebarron/suncalc-py . Regarding the DemandModule class (DM) (top left corner in Fig. 1 ), it oversees the simulation of the drivers’ behavioral aspects in the TPL. Such aspects are defined in terms of the entry and exit hours of the TPL for each driver along with the space occupied by their vehicle during the stay. In order to simulate such occupied spaces, the library assumes that drivers tend to park close to the entry and exit gates of the TPL. The CabinTemperatureModule (CTM) class (top right corner in Fig. 1 ) provides a key feature of the library as it is in charge of computing the cabin temperature of the simulated vehicles by the DM. To do so, this module considers the coverage rates of the spaces computed by the SM and the ambient temperature simulated by the AM and integrates them by means of a linear regression model. Last, Vehicle (top part in Fig. 
1 ) is an auxiliary class whose instances represent the different cars simulated by the library. In this manner, each object of this class comprises the features computed by the aforementioned modules for a single vehicle, such as its entry and exit hours or its cabin temperature. Besides, this class includes the functionality to estimate the energy consumption required by the vehicle to reduce its cabin temperature to a particular comfort level when it leaves the TPL's premises. 3.2 Software workflow Fig. 2 shows the workflow followed by the library to provide a simulation of the TPL. As we can see from this figure, the library is designed following a pipeline architecture, where data flows through a series of three sequential processing stages, namely, preparing the simulation , launching the simulation and visualization of the results . Each stage performs a specific task and passes its output to the next stage in the pipeline, promoting modularity and ease of maintenance. For the sake of completeness, Appendix A shows a UML sequence diagram involving the library classes depicted in Fig. 1 . The diagram illustrates the ordered sequence of method invocations among these classes to carry out the three aforementioned processing stages. 3.2.1 Library inputs In order to provide a simulation outcome, climaticpark-py requires the client to first gather the following data about the TPL: • Geographical location and spatial distribution of the spaces. • Geographical location of the entry and exit gates. • Geographical location and spatial distribution of the sunshades. • Geographical location of the TPL's spatial centroid. • Timeseries comprising an ordered sequence of entry and exit hours of a set of the TPL's drivers. All the location data must be encoded as geopandas GeoDataFrames where the spaces and sunshades are defined as spatial polygons. 
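Since spaces and shadows are polygons, the coverage rate of a space (the proportion of its area covered by a shadow, as defined in Section 3.1) reduces to an area-intersection ratio. A minimal sketch using axis-aligned rectangles (the library itself operates on geopandas polygons; this simplified geometry is our own illustration):

```python
# Minimal sketch of a coverage rate: the fraction of a parking space's
# area overlapped by a projected shadow. The library uses geopandas
# polygons; axis-aligned rectangles are used here for simplicity.
from dataclasses import dataclass

@dataclass
class Rect:
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    @property
    def area(self):
        return (self.x_max - self.x_min) * (self.y_max - self.y_min)

def coverage_rate(space: Rect, shadow: Rect) -> float:
    """Proportion of the space's area covered by the shadow (0..1)."""
    w = min(space.x_max, shadow.x_max) - max(space.x_min, shadow.x_min)
    h = min(space.y_max, shadow.y_max) - max(space.y_min, shadow.y_min)
    inter = max(w, 0.0) * max(h, 0.0)  # overlap area, 0 if disjoint
    return inter / space.area

space = Rect(0.0, 0.0, 2.5, 5.0)    # a 2.5 m x 5 m parking space
shadow = Rect(0.0, 0.0, 2.5, 2.5)   # a shadow covering the lower half
rate = coverage_rate(space, shadow)  # -> 0.5
```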
Thus, the user can feed the library with different configurations of sunshades and evaluate their impact on an existing parking infrastructure. Besides, the timeseries of entry and exit hours must be provided as a CSV file. This file is used by the library to provide a reliable simulation of the drivers' behavior. For the sake of clarity, Table B.1 describes the input parameters of the library and the required file types. Next, the three steps of the user-library interaction depicted in Fig. 2 are detailed. 3.2.2 Step 1: Preparing the simulation The first step in the library's pipeline is to prepare the simulation by instantiating the necessary objects. To do so, we first need to instantiate a ClimaticPark object by providing to its constructor all the inputs listed in Section 3.2.1 . Next, we call the ClimaticPark.prepare_simulation method, which creates an object of each of the five modules in the library. Moreover, the AmbientModule collects weather data from the Open-Meteo web service ( https://open-meteo.com/ ) for the TPL location in order to feed the ambient temperature predictor that will be used in downstream steps. At this point, the instance of ClimaticPark that we have created is ready to generate the simulation of the TPL. 3.2.3 Step 2: Launching the simulation Next, it is necessary to invoke the ClimaticPark.launch_simulation method to actually perform a simulation on the TPL. This method receives two input parameters, namely n_days_ahead , which indicates the total number of entire days we want to simulate, and allocation_policy , which configures the behavior followed by the simulated vehicles when occupying the available spaces. In this sense, the library defines three different parking allocation behaviors, namely: • Random , vehicles randomly occupy the available spaces in the parking lot. • Minimum distance , vehicles occupy the available spaces closest to their entry gate. 
• Random minimum distance , vehicles tend to occupy the spaces closest to their entry gate, with a certain level of randomization. As a result, this method invokes the necessary methods in the other modules of the library to compute the movement of shadows, the entry and exit of vehicles, and their associated spaces and cabin temperatures. It is worth mentioning that this stage does not return any output to the user because it focuses on computation. The following pipeline step provides the client with the simulation outcomes. 3.2.4 Step 3: Visualization of the results With the aim of retrieving the different simulation outcomes, the library provides several methods to visualize the results. Given the spatiotemporal nature of the simulation, such outcomes are provided as interactive spatial maps that allow the user to easily see the evolution of certain elements of the TPL. The palette of map-visualization methods provided by the library through the ClimaticPark object is described next: • show_coverage_rates returns an HTML map showing the distribution of TPL spaces and how the coverage rate of each one evolves during the simulated days on an hourly basis. • show_roofs_projected_shadows returns an HTML map showing the shadows projected by the TPL sunshades (given as input to the library, Section 3.2.1 ) for each hour of the simulated days. This map allows a better understanding of the coverage rates generated in the previous method. • show_occupancy returns an HTML map that indicates the time evolution of the occupied spaces based on the previously simulated vehicles. Hence, the client can visualize the entry and exit behavior simulated by climaticpark-py . • show_cabin_temp returns an HTML map that enriches the previous one with the cabin temperature of each vehicle in its occupied space on an hourly basis. These methods provide interactive maps where the user can easily visualize the simulation outcomes. 
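The three allocation behaviors introduced in Step 2 can be sketched in plain Python (a simplified rendering of our own based on the policy descriptions above, not the library's actual implementation; the function and parameter names are hypothetical):

```python
import random

# Simplified sketch of the three allocation policies described in the
# text; distances from each space to the entry gate are precomputed.
def allocate(free_spaces, dist_to_gate, policy, rng=None, k=3):
    """Pick a space id from free_spaces according to the policy.

    free_spaces  : list of currently available space ids
    dist_to_gate : dict mapping space id -> distance to the entry gate
    policy       : 'random' | 'min_distance' | 'random_min_distance'
    k            : candidate pool size for the randomized policy
    """
    rng = rng or random.Random()
    if policy == "random":
        return rng.choice(free_spaces)        # uniformly at random
    ranked = sorted(free_spaces, key=dist_to_gate.__getitem__)
    if policy == "min_distance":
        return ranked[0]                      # strictly closest space
    if policy == "random_min_distance":
        return rng.choice(ranked[:k])         # close, with some noise
    raise ValueError(f"unknown policy: {policy}")

free = ["A1", "A2", "B1", "B2"]
dist = {"A1": 5.0, "A2": 8.0, "B1": 12.0, "B2": 20.0}
allocate(free, dist, "min_distance")  # -> 'A1'
```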
However, they do not provide any data structure that could be manipulated and analyzed later on. For that purpose, the library also provides the method compute_energy_consumption through its Climaticpark class. This method returns a pandas DataFrame comprising all the features of the simulated vehicles. For each vehicle, this data structure returns, among other fields, its entry and exit dates, its occupied space within the TPL along with its initial and final cabin temperature. This allows clients to use such data structure in their own downstream processing pipeline. Last, we show the Python code snippet comprising the three previous steps. 4 Illustrative example To show the performance of climaticpark-py , we selected as TPL one of the facilities located on the campus of the Catholic University of Murcia (Murcia, Spain). 5 The library was employed to simulate the parking lot’s operation over the course of a typical workweek (5 days from Monday to Friday). 5 https://ucam.edu . This TPL does not currently feature any covered parking areas. To conduct a meaningful evaluation of the library, we considered one hypothetical sunshade configuration where all 250 parking spaces in the TPL were equipped with some form of sunshade. For better understanding, Fig. 3 a illustrates the spatial layout of the parking spots as blue rectangles whereas Fig. 3 b shows the locations of the simulated sunshades as dark blue polygons. Given this configuration, Fig. 4 shows several screenshots of the maps generated by climaticpark-py , illustrating the movement of shadows projected by the simulated sunshades. As shown in the figure, the projected shadows move consistently with the sun’s position. For instance, at 6:00 a.m. ( Fig. 4 a), the roof shadows are skewed to the left with respect to the roofs due to the eastward position of the sun, whereas at 4:00 p.m. ( Fig. 4 c), they shift to the right as the sun is positioned more toward the west. 
This map makes it possible to analyze the efficiency of the shadows cast by the input sunshades in relation to the spatial distribution of the parking spaces. Besides, Fig. 5 shows several screenshots of the map generated by the library, displaying the occupancy of the TPL and the cabin temperature of the vehicles in each occupied space where the color of each rectangle (space) represents the cabin temperature of the vehicle parked in it. In this way, it is possible to observe where the simulated vehicles with higher cabin temperatures tend to be located, considering the users’ entry and exit patterns and the sunshade design. For example, Fig. 5 a shows that at 6:00 a.m., most of the vehicles with the highest cabin temperatures were located in the central spaces of the parking lot, whereas Fig. 5 b indicates that at 4:00 p.m., the hottest vehicles were found in the external spaces. 5 Impact The climaticpark-py library aims to be a milestone in the development of a new family of road traffic simulators that focus not on traffic management, but on the sustainability of parking lots, considering the impact of air-conditioning systems on vehicle energy consumption. Furthermore, this library has been already employed as the simulation framework in a research project aimed at defining an efficient allocation policy for on-street vehicular parking lots equipped with sunshades [15] . The proposed policy enables the allocation of vehicles in such a way that incoming vehicles occupy the most suitable available spots, minimizing cabin temperatures at the time drivers return to their vehicles. To define this policy, the authors used the library to model both the current and projected future shadows cast by the sunshades, as well as the demand behavior of parking lot users. Furthermore, climaticpark-py also offers novel features compared to existing software packages for macro-scale vehicular parking lot simulation. 
For instance, simulators such as PyPML [9] allow simulated vehicles to reorganize their trips based on the available capacity of each parking lot in the target urban setting. In contrast, our library focuses on the micro-simulation of a specific parking lot, incorporating more detailed aspects such as the behavior of individual drivers within the premises, as well as relevant climatic variables. In this regard, it is true that other tools exist that support fine-grained simulation of vehicle movement within parking lot premises [10,11] . However, they do not consistently include climatic features in their simulation outputs. This enables our library to be used not only as a tool for validating or analyzing driver behavior when occupying parking spaces, but also for evaluating the potential impact of sunshades or roofs before they are actually installed. 6 Conclusions While most traffic simulation tools focus on flow dynamics, the efficient management of surface parking lots remains a key urban challenge, requiring consideration of occupancy, environmental, and contextual factors. To address this need, this paper has introduced climaticpark-py , a simulation library fully implemented in Python. The library enables the simulation of key elements in surface parking lots, with a particular emphasis on evaluating the efficiency of sunshade structures. As practical implications of this work, climaticpark-py serves as a valuable tool for parking-lot operators to explore different sunshade designs and placements, allowing them to optimize coverage and functionality before any actual physical installation. This pre-implementation analysis can lead to more effective and cost-efficient deployment of sunshades. The strengths of this work lie in its data-driven decision-making approach, providing interactive maps for visualizing spatiotemporal simulation outcomes, such as coverage rates, projected shadows, occupancy, and cabin temperatures. 
By means of these interactive maps and the use of pandas DataFrames , users can easily visualize and analyze the impact of different sunshade configurations, while considering the geographical and occupancy characteristics of the environment. Regarding the limitations of this library, the current thermodynamic model does not take into account certain external climate factors such as the cloud coverage that may affect the vehicle’s temperature. Future developments will aim to enhance the library by incorporating more detailed thermodynamic models related to in-vehicle cabin temperature, as well as by simulating the effects of natural and artificial shading elements, such as trees and surrounding buildings. The definition of alternative options to simulate the occupancy behavior of the drivers will also be considered. In addition, the development of a web-based interface to support the definition and initialization of simulations is foreseen, aiming to improve usability and user interaction. A dedicated designer class for defining and automatically optimizing sunshade positions using the library’s functionalities is also planned, with the goal of automating the simulation process. CRediT authorship contribution statement Hyacinthe-Marie Nshuti: Writing – review & editing, Writing – original draft, Visualization, Software, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Juan Morales-García: Software. Andrés Muñoz: Writing – review & editing, Project administration, Funding acquisition, Conceptualization. Pedro J. Navarro: Writing – review & editing, Validation, Conceptualization. Diego Alonso: Writing – review & editing, Supervision. Pedro Sanchez: Writing – review & editing, Visualization, Validation, Supervision. Bárbara Álvarez: Writing – review & editing, Formal analysis. 
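The kind of spatiotemporal analysis described above (which spaces hold the hottest cabins at a given hour, as in Fig. 5) can be sketched with plain pandas. Note that the column names `space_id`, `hour`, and `cabin_temp` below are illustrative assumptions, not climaticpark-py's documented output schema:

```python
import pandas as pd

# Hypothetical simulation output: one row per occupied space and timestamp.
# Column names are assumptions for illustration, not the library's real schema.
df = pd.DataFrame({
    "space_id":   [1, 2, 3, 1, 2, 3],
    "hour":       [6, 6, 6, 16, 16, 16],
    "cabin_temp": [24.0, 31.5, 26.0, 38.0, 33.0, 41.5],
})

# For each hour, find the space holding the hottest cabin -- the question
# Fig. 5 answers visually with its colored rectangles.
idx = df.groupby("hour")["cabin_temp"].idxmax()
hottest = {int(h): int(s) for h, s in
           zip(df.loc[idx, "hour"], df.loc[idx, "space_id"])}
print(hottest)  # → {6: 2, 16: 3}
```

With real simulation output, the same groupby pattern extends directly to coverage rates or occupancy aggregated per space and time window.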
Fernando Terroso-Saenz: Writing – review & editing, Writing – original draft, Validation, Software, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Appendix A UML sequence diagram See Fig. A.6 . Appendix B Description of the input parameters See Table B.1 .
|
[
"GIMENEZNADAL",
"CONTI",
"SCOTT",
"LIN",
"CALIPERCORPORATION",
"GORKA",
"HALATI",
"KRAJZEWICZ",
"CODECA",
"VO",
"GU",
"TRIPATHI",
"ENVIMET",
"ZANCHI",
"TERROSOSAENZ"
] |
595f45255d30449eadbcb3327be68809_Preweaning mortality in piglets in loose-housed herds etiology and prevalence_10.1017_S1751731117003536.xml
|
Preweaning mortality in piglets in loose-housed herds: etiology and prevalence
|
[
"Kielland, C.",
"Wisløff, H.",
"Valheim, M.",
"Fauske, A.K.",
"Reksen, O.",
"Framstad, T."
] |
Preweaning mortality in piglets is a welfare issue, as well as an ethical and economic concern in commercial pig farming. Studying the causes of preweaning mortality and their prevalence is necessary to reduce losses. Preweaning piglet mortality was investigated in a field study including 347 sows from 14 loose-housed Norwegian piglet-producing herds. A total of 5254 piglets were born in these herds during the study period, and 1200 piglets were necropsied. The cause of death was based on pathoanatomical diagnosis (PAD). Preweaning mortality of all piglets in the study was 23.4%, including 6.3% stillborn. The two main causes of preweaning mortality in live-born piglets (n=4924) were trauma (7.1%) and starvation (2.7%). Piglets dying of an infection accounted for 2.0%. Among the necropsied piglets (n=1200), 29.1% had died due to trauma, 26.8% were categorized as stillborn and 11% had died of starvation. Piglets that had died of trauma had a mean time of death of lactation day 1 (LD 1), ranging from LD 0 to LD 21. The mean time of death of piglets that died due to bacterial infection was LD 9, ranging from LD 0 to LD 31, with Escherichia coli accounting for most infections found in necropsied piglets. Farmers were able to identify death by trauma in piglets, but were less able to identify death due to hunger. Most piglets that died in the preweaning period died of trauma. Surprisingly, this included large and well-fed piglets. The second most prevalent cause of preweaning mortality was starvation. Improved monitoring may reveal piglets with low body mass index, and additional nutrition may contribute to increasing the survival rate.
|
Implications Piglet mortality during parturition and lactation is a major problem in pig farming worldwide. Increasing the survival of piglets will significantly enhance animal welfare and improve the ethical sustainability and economy of swine production. In order to increase piglet survival, more information is needed on the causes of piglet mortality in different production systems. This large-scale, postmortem examination study in Norwegian piglet-producing herds with loose-housed sows, provides further relevant information on this topic, indicating that trauma and starvation are important causes of death in preweaned piglets. Introduction A high piglet mortality during parturition and lactation is a major problem in pig farming worldwide. Preweaning mortality rates of live-born piglets in European countries range from 10% to 20% (Muns et al ., 2016 ). Total preweaning mortality in piglets, including stillbirths, is estimated to be between 18.8% and 25% (Damm et al ., 2005 ). In Europe, between 4% and 12% of piglets are stillborn (Edwards, 2002 ; KilBride et al ., 2012 ; Strange et al ., 2013 ). The majority of piglets that die before weaning, die within the 1 st week of life (Varley, 1992 ), with stillbirth, starvation and crushing by the sow considered to be the main causes (Pedersen et al ., 2011 ). In a Swedish study, stillbirth, starvation and crushing accounted for 6.5%, 5.0% and 4.1% of piglet mortality from all piglets born, respectively (Westin et al ., 2015 ). These percentages vary among studies, depending on housing systems (crates v . loose housing), management, weaning age and definitions used regarding mortality rates (KilBride et al ., 2012 ). Pig husbandry has changed considerably over the last 20 years, and the focus on animal welfare has greatly increased. In 2003, Norwegian legislation prohibited crated sows in the farrowing pen. 
In 1986, Norwegian sows produced, on average, 10.1 live-born piglets per litter, 1.1 piglets were stillborn (9.8%) and 8.5 piglets were weaned. The mean weaning age was 44 days (Grøndalen et al., 1986). In Norway today, national data can be obtained from The National Efficiency Control Database in Norway, called Ingris, which contains production results and is administered by Norsvin (Norwegian Pig Breeding Association) and Animalia (Norwegian Meat and Poultry Research Centre). According to Ingris, in 2014, Norwegian sows produced, on average, 13.2 live-born piglets and 1.1 stillborn piglets (8.4%) per litter, and weaned, on average, 11.3 piglets per litter at day 33; the preweaning mortality of live-born piglets was 14.2% (Ingris, 2014). The preweaning mortality of total-born piglets that same year was 21%. Information on the exact causes of preweaning mortality is needed in order to be able to reduce piglet losses. A number of studies of piglet mortality have relied on diagnoses made by farmers. However, up to 40% of such diagnoses have been incorrect (Edwards et al., 1994). In order to understand the etiology behind most piglet deaths, a thorough postmortem examination is necessary. The aim of the present study was to use postmortem examinations to determine the etiology and prevalence of preweaning mortality in piglets in Norwegian loose-housed sow herds. Material and methods Study population The data presented in this paper were obtained as part of a larger study investigating both management and sow behavior in association with preweaning mortality. Details have been described previously (Rosvold et al., 2017). Initially, 52 herds were included based on the following criteria: sows kept loose during parturition, regular reporting of production results to Ingris, and breed (LY: sows of Norwegian Landrace×Swedish Yorkshire).
For the present study, convenience sampling was used, with samples collected from a subset of 14 herds within 3 h driving distance of the pathology laboratory. The study period was from September 2013 until May 2014. All herds were located in the southeastern part of Norway. The 347 sows produced 5254 total-born piglets, of which 4924 were live-born; 4026 survived to weaning and 1228 were reported dead. Altogether, 1216 dead piglets were collected, and, of these, 1200 were necropsied. The mean parity of the sows was 2.4 (±1.6 SD), ranging from 1 to 10. Farmers were requested to adhere to usual management routines during this study. Farm characteristics All herds included in the study used batch management systems. The number of sows per batch ranged from 20 to 60 sows, with 49% of the herds having a batch size of 20 to 25 sows. The interval between batches was mostly seven weeks (65% of the herds). In each herd, gilts and sows were loose-housed in groups before they were moved individually into standard farrowing pens without crates, with a mean size of 7.7 m² (±1.1 SD, including a separate piglet creep area). Sows and gilts were, on average, moved to individual farrowing pens 8 days (±4 SD) before expected parturition, ranging from 3 to 18 days. The mean parturition duration in the herds was 4.9 h, ranging from 1 to 12.75 h. The temperature in the lactation rooms was 18°C to 19°C in all herds. Farrowings were not hormonally synchronized. Farmed animals in Norway only receive drugs administered by a veterinarian. Parturition was attended. At 1 week of age, the male piglets were castrated using local anesthesia and systemic analgesia. Teeth clipping and tail docking were not performed as these are prohibited by law in Norway. Mean weaning age in the herds was 32 days (±3.9 SD). All herds used the same vaccination regime; that is, vaccinating against Escherichia coli, porcine parvovirus, and Erysipelothrix rhusiopathiae.
The sows were fed a commercial lactation diet containing 9.29 MJ NE/kg feed and 8.26 g lysine/kg feed. All sows were provided with small amounts of hay/sawdust before farrowing, and all sows had ad libitum access to water. Data collection Farmers wrapped each dead piglet individually in a plastic bag, and marked the bag with a preprinted sticker on which they were instructed to add the following information: herd id, sow id, date of parturition, date of death and tentative cause of death as suspected by the farmer. For the piglets that died on the day of parturition, the time of death was recorded as lactation day 0 (LD 0). For piglets that died the day after parturition, time of death was LD 1, and so on until LD 32. Before collection, dead piglets were stored in a cool place in the barn, avoiding freezing, as that would negatively affect the quality of the necropsy. Dead piglets were collected from each herd twice weekly (Sunday and Wednesday), stored in a cooler overnight, and the necropsy was performed the next day. Sow-level data were recorded by the farmer on a separate form. Information recorded was sow id, number of live-born piglets, number of stillborn piglets, duration of parturition (time between first and last piglet born), whether the farmer had intervened during parturition, and/or if oxytocin was used. Production results from 2014 for the 14 herds were extracted from Ingris (Ingris, 2014). The total number of piglets born, which is synonymous with the term ‘total born’, was defined as the number of live-born plus the number of stillborn piglets per sow. As a part of the necropsy, piglets were weighed (kg) and measured (cm). Body length was measured from os occipitale to the root of the tail. Body mass index (BMI) was calculated using BW (kg) and the square of body length (m²): BMI = BW (kg)/body length² (m²).
Postmortem examinations All piglets found dead or euthanized before weaning, and traceable back to the farm, were necropsied (n=1200). The necropsy included general external and internal inspection at the Norwegian Veterinary Institute. In order to categorize each piglet as having one primary cause of death, pathoanatomical diagnoses (PAD) were based on the postmortem examination (Engblom et al., 2008). Each piglet was assigned to one of the nine primary PAD categories (Table 1). When several PAD categories were identified in an individual piglet, the PAD that was considered most likely to be the cause of death was used. For example, a piglet with anemia and broken ribs would be categorized as death due to trauma. When the primary cause of death was not possible to determine, the piglet was given the PAD No clear PAD. The category stillborn was subdivided into four different groups: mummification, dead before parturition, dead during parturition and dead due to aspiration of amniotic fluids with or without meconium. The main criteria for each PAD are defined in Table 1. Detailed postmortem findings, such as subcutaneous edema (SOD) and skin lesions, are defined below. The definition of SOD is a non-inflammatory accumulation of fluid in subcutaneous tissue. Skin lesions were defined as present when either discoloration, swelling, or ulcerations were observed. When signs of infection were present, the affected organ(s) were analyzed by standard microbiological techniques. The results were used to confirm/refute a PAD. The results from the microbiological examinations of 156 samples from 98 piglets were categorized as follows: (N) normal flora or no growth; (C) contamination or non-specific mixed growth; (U) unspecific growth, interpretation dependent on other findings; (P) pathogenic bacteria. Statistics Descriptive statistics were conducted using the statistical software STATA (Stata SE/10 for Windows; Stata Corp., College Station, TX, USA).
When calculating the preweaning mortality among total-born piglets, the data set included all piglets born within the corresponding batch and herd (n=5254). Preweaning mortality was calculated for both total-born piglets (n=5254) and live-born piglets (n=4924). The prevalence of each PAD was calculated at three levels: (a) among total-born piglets, (b) among live-born piglets and (c) among the necropsied piglets. The prevalence of each detailed postmortem finding, that is, SOD, skin lesions, circulation failure, etc., was calculated for each individual PAD. Any association between skin lesions and cause of death due to an infection (excluding the stillborn) was investigated using the χ² test on contingency tables. κ was used to evaluate the agreement between the farmers’ tentative diagnoses of the cause of death and the cause of death according to PAD. κ values between 0.21 and 0.40 are considered fair agreement, 0.41 to 0.60 indicates moderate agreement and 0.61 to 0.80 equals substantial agreement (Viera and Garrett, 2005). In the analyses, statistical significance was considered at P<0.05. Missing data Among the reported dead piglets (n=1228), 12 were not collected for necropsy and the origins of 16 could not be identified; therefore, these were not included in the postmortem analyses. Nevertheless, in order to obtain the correct mortality percentage for all piglets in the 14 herds, these 28 piglets were included when the preweaning mortality of the total number of piglets was calculated. In 63 piglets, the date of death had not been recorded and the time of death could not be calculated. Length (cm) was not recorded for 14 piglets, eight due to mummification. The start and end of parturition, that is, the time between first and last piglets being born, was recorded during 178 parturitions. A tentative diagnosis regarding the cause of death was obtained from the farmers for 980 piglets.
In the comparison of the farmers’ and pathologists’ diagnoses, 220 piglets that did not receive an ‘on-farm diagnosis’ were excluded. Results Total preweaning mortality was 23.4% (1228/5254), with 6.3% (330/5254) of the piglets diagnosed as stillborn. The piglets were categorized into nine different PAD ( Table 2 ), and Figure 1 displays the prevalences of different causes of death among total-born piglets included in the study. Preweaning mortality of live-born piglets was 18.2% (898/4924). The most prevalent cause of death according to PAD among all live-born piglets ( n =4924), was trauma (7.1%, 349/4924). Starvation was diagnosed in 2.7% (132/4924) of the piglets ( Figure 2a ), and 3.0% were euthanized (120/4924). Piglets dying of an infection accounted for 2.0% (98/4924) of deaths, and being weak born for 1.5% (71/4924). Postmortem examination ( n =1200) The PAD of the primary cause of death in each piglet are shown in Table 1 . Among the necropsied piglets ( n =1200), 29% died due to trauma ( n =349), 26.8% of piglets were categorized as stillborn ( n =330), 11% died of starvation ( n =132) and 8% died from an infection. Among the piglets diagnosed as stillborn ( n =330), 45.5% (150/330) were normal mature fetuses that died during parturition, and 27.3% (90/330) were normal mature fetuses with aspired amniotic fluids, that is 72.8% died intrapartum . The remaining stillborn piglets died before parturition and were not fully grown (24.8%, 82/330) or were mummified (0.6%, 8/330). Excluding stillborn from the 1200 necropsied piglets, 39.9% (347/870) of the piglets were categorized as having died due to trauma. For all necropsied piglets with known time of death ( n =829), and excluding those with the PAD stillborn, 30.7% died at LD 0, 18.9% died at LD 1, 15.7% died at LD 2, 7.1% died at LD3, and 27.6% died between LD 4 and LD 32. The mean time of death for live-born piglets that died was LD 3.5 (ranging from LD 0 to LD 32). 
The association between time of death (LD) and BW (g) measured during the necropsy is shown in Figure 3. The lowest mean BWs were found in piglets that died of starvation, were weak-born or were euthanized (Table 2). The mean BMI values for starving and euthanized piglets were 16 and 18 kg/m², respectively. Piglets that died of an infection were among the heaviest piglets (Figure 3) and had a high BMI. Of all live-born piglets that died, 17.3% had skin lesions, with 59.0% of these lesions on the front legs, mainly carpus (39.1%; Figure 2b). A considerable number of piglets had lesions on both carpus and tarsus (23.0%), and some had deep skin ulcerations (Figure 2c). There was a significant association between macroscopic local skin lesions and death due to an infection (P<0.001): piglets that had died due to an infection were more likely to have skin lesions than not. When exploring the detailed findings of the necropsy for the most prevalent PAD (trauma, n=349), it was apparent that these piglets were among the heavier piglets that died (Table 2). Most of the piglets that died of trauma did so within LD 3 (82.2%). The remaining 17.8% died between LD 4 and LD 21. Most of these piglets had full stomachs (68%), but 25% had empty stomachs and 7% had only a small amount of milk in their stomachs (Table 3). Additionally, 20% of piglets that died of trauma had signs of anemia, 33% had signs of SOD (Figure 2d), and 13.2% had signs of dehydration. Other detailed descriptions from the necropsies for each PAD are provided in Table 3. Unusual findings, such as hyperkeratosis of the pars esophagea, were demonstrated in the ventricles of four piglets. Other infrequent findings were: esophagus obstruction, atresia ani, one piglet that had drowned in the feeding trough, and another piglet with no aorta.
Microbiological analyses Of the microbiological analyses from 8% of all necropsied piglets, 47.4% of the samples were from the small (39.7%) or large (7.7%) intestines. Among the samples analyzed, 44.3% of the bacteriological findings were categorized as N, 31.4% as C, 12.8% as U and 11.5% as P. Among the samples in categories P and U, pathogenic bacteria were identified in 18 (47.4%). The most prevalent species was E. coli (n=13, 34.2%), with four isolates of the type E. coli F4+ (intestine) and one hemolytic E. coli (small intestine). The other pathogenic bacteria found were identified as Staphylococcus hyicus (skin, spleen and lungs; n=5), Staphylococcus aureus (heart and spleen; n=3), Streptococcus sp. (joint; n=1) and Trueperella pyogenes (joint; n=1). Farmers’ diagnoses Farmers provided a tentative diagnosis on the cause of death for 980 (81.6%) of the 1200 necropsied piglets. Comparison of the farmers’ diagnoses with the PAD indicates fair agreement (34%, P<0.001). The highest agreement found was for piglets that had died due to crushing (farmers’ diagnosis)/trauma (PAD), Figure 1. Among the piglets assigned the tentative diagnosis ‘found dead’ by the farmer (Figure 1), 35% were in the PAD category Stillborn. Among piglets assigned the PAD category ‘Starvation’, the farmer used the word ‘starvation’ for 13.6% of the piglets. Three percent of the total-born piglets were euthanized. The most frequent reasons for euthanasia given by the farmer were starvation (34.3%, 36/105), weak born (22.9%, 24/105) and being small (9.5%, 10/105). Discussion Comparison of these results with those from a similar study of piglet mortality in Norwegian herds published over 30 years ago (Grøndalen et al., 1986) indicates that the preweaning mortality of piglets has been reduced in Norway over this period. Whereas the former study reported 24.7% total loss of piglets, in the present study, the total piglet loss was 23.4%.
The sows were housed differently in these two studies, with crates used in the former and loose housing in the current study. The genetics, nutrition, and management of piglets have also changed greatly during the intervening period between these studies. Studies from other countries have also shown comparable figures of piglet mortality in different housing systems (Weber et al ., 2009 ; KilBride et al ., 2012 ). Interestingly, the average percentage of preweaning mortality of live born reported to Ingris from the 14 study herds during the period from June 2013 to June 2014 was 18.2% (Ingris, 2014 ). However, the mortality recorded in our study was higher, and our figures were also higher than the country average of 21% that was reported during the same period. These discrepancies indicate problems in the registration system and possible under-reporting of piglet mortality at the national level. Under-reporting may be partly due to farmers not including mummified and malformed piglets in their report to the national registry. It is also known that farmers may euthanize very small piglets without counting these as being born alive, and this can influence numbers in the national database. A well-functioning recording system is essential for modern pig production, and our data suggest that validation of the current Norwegian database, Ingris, could be advantageous, both for breeding purposes and for animal welfare assessments. The number of stillborn piglets was 2% lower than the number in the national database, Ingris, which should be representative of the Norwegian pig population. The numbers of stillborn were also lower than those reported in a study of loose-housed herds in England (KilBride et al ., 2012 ). Participant bias may be one reason for the number of stillborn being lower than expected, as the study may have selected for participation of particularly concerned farmers. 
The study also demanded considerable efforts from each farmer during parturitions, and this may have influenced the number of piglets that were saved, as the farmers were more likely to be present during parturition. Results from a larger study performed in the same herds as the present study (Rosvold et al., 2017) showed that farmer presence during parturition decreased the preweaning mortality. Of all the piglets necropsied, the percentage of stillborn piglets has been reduced from that reported in 1986 (Grøndalen et al., 1986; 37.5%) to that observed today (26.8%). This suggests that management procedures, feeding, and/or genetics have improved with respect to reduction of stillbirths. Comparing the distribution between the categories of stillborn, the majority of the piglets were fully grown and died during parturition. This is in accordance with a study on stillbirths performed in the Netherlands (Leenhouwers et al., 1999). Piglets born late during the parturition process, and/or with a broken umbilical cord, have been suggested as explanations for 71% of stillbirths in a previous study in Norwegian piglets (Rootwelt et al., 2012). Our finding that 14% of stillborn piglets had signs of anemia could suggest that a broken umbilical cord is a possible cause of death. Interestingly, we found that 33% of stillborn piglets had clear signs of SOD; this could be due to cardiac failure caused by asphyxia, as cited in a review (Alonso-Spilsbury et al., 2005). However, SOD may also be a normal finding, as newborns may have some degree of edema due to a large amount of total body fluid (80%) (Fanaroff and Martin, 2002). The postmortem examinations showed that most live-born piglets that die before weaning die of trauma (7.1%). This is comparable with other studies with loose-housed sows (4.5% to 9.8%; Weber et al., 2009). However, other studies have reported a slightly higher risk of crushing in comparable housing systems (KilBride et al., 2012; Hales et al., 2013).
However, the overall mortality in crate systems and loose-housing systems is reported to be similar, and sow welfare is greatly improved in loose-housing systems (KilBride et al., 2012; Hales et al., 2013). In addition, trauma (caused by crushing) is highly associated with mothering style (Andersen et al., 2005); lateral lying style and nest-building activities are strongly associated with the risk of crushing (Pedersen et al., 2006). Planned future studies will explore how mothering style is associated with each cause of death. Grøndalen et al. (1986) reported that one of the most common causes of preweaning mortality of total-born piglets was dehydration, with a prevalence of 2.9%. We assume that the dehydrated piglets reported in 1986 would have been categorized as starved in our study (2.5%), as 65% of the piglets that were given the PAD ‘starvation’ also had signs of dehydration. Preweaning mortality among live-born piglets due to bacterial infection occurred in 2.0% of preweaning deaths in these 14 pig farms, indicating that bacterial infections are currently not an important factor in preweaning mortality in Norway. The proportion of infections among necropsied piglets has been reduced in Norway, from 25% in 1986 (Grøndalen et al., 1986) to 8.0% today. One reason may be that vaccination of sows is more common today than 30 years ago. Another reason may be the common use of all-in-all-out management today, which includes thorough cleaning of farrowing pens between batches. In a Swedish experimental study, 24% of all necropsied piglets had an infection (Westin et al., 2015), most of which was enteritis (60%). In our study, there was a significant association between skin lesions and death due to an infection in the piglets. Skin lesions may provide an entrance route for pathogenic bacteria and thereby increase the risk of death in piglets. Piglets that died of an infection died closer to weaning.
Late losses are more costly in terms of management and labor, and may also be considered as having a more severe impact on animal welfare as infections often cause pain and discomfort over a longer time period. Our findings were in accordance with a review stating that most of the total preweaning mortality occurs during the first 72 h (Muns et al ., 2016 ). However, most studies report higher piglet mortality during the 1 st days of life than we observed in our study (Pedersen et al ., 2011 ; KilBride et al ., 2012 ; Muns et al ., 2016 ). Our findings suggest that in Norwegian pig farms, the survival rate of piglets could be improved by farmers directing greater efforts towards detection of starving piglets and preventing skin lesions from occurring. Starving and euthanized piglets had the lowest mean BWs and BMI in our study, both close to 17 kg/m 2 . Kielland et al . ( 2015 ) found that piglets with BMI below 17 kg/m 2 had significantly lower Immunoglobulin G (IgG) concentrations in the blood at LD 1. In addition, a lower BMI is associated with a relatively increased body surface, which increases heat loss (Quesnel et al ., 2012 ). Thus, achieving a higher BMI should be a goal in piglet production, both to reduce piglet mortality by adequate intake of colostrum, and to prevent heat loss. Achieving adequate milk intake by piglets is important, especially with the demand for larger litter sizes resulting in greater competition around suckling. Not surprisingly, the diagnosis crushing/trauma showed the highest agreement between farmer and pathologist, as external and/or internal lacerations and/or fractures are easily observed macroscopically. Piglets that died due to starvation were more challenging for the farmer to diagnose correctly, and agreement was only 13.6%. 
Increasing farmers’ knowledge on how to identify a starving piglet, improve their milk supply, and manage these piglets, remains a work in progress, especially for high-producing sows with large litters. Conclusion Our study shows that there are still challenges in pig farming regarding preweaning mortality in piglets due to trauma and starvation. Many of the stillborn piglets were fully developed, a finding indicating that there is a potential to reduce the number of stillborn. Most piglets that died in the preweaning period, died of trauma. Surprisingly, this included large and well-fed piglets. The second most prevalent cause of pre-weaning mortality was starvation. Improved monitoring may reveal piglets with low BMI, and additional nutrition may contribute to increase the survival rate. Acknowledgments The authors would like to thank the farmers participating in this work-intensive study. We are also grateful for the funding provided by the Agricultural Agreement Research Fund and the Foundation for Research Levy on Agricultural Products.
|
[
"ALONSOSPILSBURY",
"ANDERSEN",
"DAMM",
"EDWARDS",
"EDWARDS",
"ENGBLOM",
"FANAROFF",
"GRONDALEN",
"HALES",
"KIELLAND",
"KILBRIDE",
"LEENHOUWERS",
"MUNS",
"PEDERSEN",
"PEDERSEN",
"QUESNEL",
"ROOTWELT",
"ROSVOLD",
"SCHMIDT",
"STRANGE",
"VIERA",
"WEBER",
"WESTIN"
] |
cbd1ed155b604314b30b55de4cb57801_Feedback linearized sliding mode controller for high-power PEMFC thermal management system adapted t_10.1016_j.ecmx.2025.101189.xml
|
Feedback linearized sliding mode controller for high-power PEMFC thermal management system adapted to road driving cycle
|
[
"Chen, Yiyu",
"Long, Mengjun",
"Jiang, Sai",
"Liu, Yuanli",
"Zhan, Zizhang",
"Wang, Lihua",
"Wan, Zhongmin"
] |
A feedback linearized sliding mode controller (FLSMC) is designed to achieve high-precision temperature control of a high-power proton exchange membrane fuel cell (PEMFC) under current disturbance, and a water-cooled heat exchanger is developed to solve the time lag problem. The rise time of the water-cooled exchanger is 34.65 s faster than that of the air-cooled one, and its settling time is 218 s faster. In terms of regulation control, the rise time of the FLSMC is about 7 s, which is 8 s and 5 s faster than PID and Fuzzy-PID, respectively. In terms of tracking performance, when a disturbance (a step in current or in cooling water temperature) appears in the four tracking phases, the relative error under FLSMC remains at 0 %, while the errors under PID and Fuzzy-PID fluctuate, with relative error fluctuations of at least 0.366 % and at most 1.57 %. Finally, the temperature control performance of the FLSMC under driving cycles was analyzed; the temperature errors were kept within 0.02 °C overall, and the temperature showed no sustained fluctuations. The FLSMC improves the system’s anti-interference capability and realizes high-precision, fast-response control of the temperature of a high-power PEMFC.
|
Nomenclature
T st Stack temperature
V Voltage
Q Energy, kJ/s
I st Stack current, A
ṁ Mass flow, kg/s
C p Specific heat at constant pressure
N Molar flow, mol/s
φ Relative humidity
P sat Saturation pressure
ρ Density
V h Volume of hot fluid, m³
V c Volume of cold fluid
k Heat transfer coefficient
Nu Nusselt number
Re Reynolds number
Pr Prandtl number
Superscripts and subscripts
ai Anode inlet
ci Cathode inlet
ao Anode outlet
co Cathode outlet
w Water
g Gaseous state
l Liquid state
h Hot fluid
c Cold fluid
hi Heat exchanger hot-end inlet
ho Heat exchanger hot-end outlet
ci Heat exchanger cold-end inlet
co Heat exchanger cold-end outlet
1 Introduction With the growing demand for energy, renewable energy has emerged as a clean and promising alternative source of energy for sustainable development [ 1–4 ]. Among renewable sources, hydrogen is an abundant, clean, non-polluting secondary energy carrier with a wide range of applications, and is the most widely distributed energy source on earth [ 5 , 6 ]. The proton exchange membrane fuel cell (PEMFC) is an important application carrier of hydrogen energy in automotive power systems [ 7 ]. Compared to lithium batteries, PEMFCs have the advantages of long range, short refueling time, and high energy density [ 8 ]. Compared with conventional internal combustion engines, PEMFCs offer high efficiency, zero emissions, and low noise [ 9 ]. However, the PEMFC is sensitive to temperature fluctuations due to its electrochemical characteristics, and its temperature varies greatly during operation; it therefore requires an auxiliary cooling system to remove most of the heat generated [ 10–12 ]. Previous studies have shown that temperature is one of the key parameters affecting the performance of PEMFC systems [ 13 , 14 ]. A low operating temperature reduces catalyst activity and lowers the output performance of the stack, while an operating temperature that is too high leads to membrane degradation [ 15 ].
Therefore, a proper operating temperature ensures good performance of the PEMFC system. Because operating conditions change constantly, the development of high-precision and high-efficiency thermal management strategies remains a key challenge for improving the output performance of PEMFC [ 16 ]. In order to achieve effective cooling, researchers have tried different cooling techniques. Zhang et al. [ 17 ] systematically studied and discussed the cooling techniques reported in publications and patents; they found that among all cooling techniques, liquid cooling is currently the most widely used cooling strategy in high-power PEMFC stacks, and is especially suitable for automotive PEMFC stacks with output power over 80 kW. In practical engineering applications, control-oriented models are more practical. Yang et al. [ 18 ] presented a detailed description of heat generation, heat transfer, and heat dissipation in PEMFCs and their associated calculation formulas, providing a basis for dynamic modeling of the PEMFC cooling system. Hu et al. [ 19 ] developed a simplified PEMFC temperature control model and verified its effectiveness through experiments. They also proposed a predictive control method with optimal operating temperature tracking, which improved the efficiency of the PEMFC over the driving cycle. A reasonable thermal management control strategy can effectively improve the performance of PEMFC and extend its service life. In response to the key issue of temperature control, researchers have adopted a large number of different control strategies to conduct in-depth studies on the thermal management module of fuel cells. In the past few decades, traditional PID control has been widely used for temperature control of PEMFC due to its simple structure and flexible use. Ahn et al.
[ 20 ] designed and used a classical proportional-integral (PI) controller and state feedback control of the thermal circuit to maintain the stack temperature at the set value. Liso et al. [ 21 ] constructed a control-oriented dynamic model of a liquid-cooled PEMFC for analyzing the temperature response during load transients, and used a PID controller to control the stack temperature. However, due to the uncertainty and high nonlinearity of the variables in the fuel cell system model, the control performance of PID has significant limitations when facing drastic changes in dynamic loads. Therefore, Xu et al. [ 22 ] proposed a proportional-integral-derivative temperature control strategy based on the sparrow search algorithm (SSA-PID) to address the slow response and poor dynamic performance of traditional PID. The results showed that, compared with traditional PID, the proposed method had a faster convergence speed and better dynamic performance. In addition, researchers have also applied other control methods to the temperature control of fuel cells. For example, Han et al. [ 23 ] proposed an adaptive control strategy based on a reference model to address the strong nonlinearity and parameter uncertainty of the fuel cell system. The results show that it is more robust than traditional PID. Zhang et al. [ 24 ] developed a closed-loop feedback MPC; the proposed controller showed higher control accuracy and faster adjustment time than a conventional PI controller under an equivalent combined drive-cycle test condition. Chen et al. [ 25 ] designed a controller consisting of nonlinear feed-forward and Linear Quadratic Regulator state feedback, taking the fuel cell cooling system of a city bus as the research object; the temperature control error of the stack was within ±0.5 °C. Chen et al.
[ 26 ] proposed a cascaded internal model control scheme to make the stack temperature track the target value under dynamic operating conditions; the step response time was 44 s and the overshoot was about 1 °C. Liu et al. [ 27 ] proposed a model predictive control strategy for thermal management of automotive fuel cells combining model adaptation, look-ahead information, and temperature trajectory planning; the temperature tracking errors were reduced by 22 % to 60 % and energy savings were increased by 50 % compared to multivariable PID. It was found that the temperature distribution can be improved by adjusting the inlet and outlet cooling water temperatures. Liu et al. [ 28 ] proposed a decoupled controller to achieve synchronized control of the temperatures of the cooling water entering and leaving the PEMFC stack, and the results showed that the temperature error range was less than 0.2 °C even under dynamic current loading conditions. The development of high-power PEMFC (>100 kW) is an inevitable trend to meet the high energy demands of fields such as heavy transportation and stationary power stations. However, the leap in power density has brought severe thermal management challenges. Thermal management control strategies that are precise, robust, and fast in dynamic response have become the core requirement for ensuring efficient energy conversion and suppressing the performance degradation of key components in high-power PEMFC systems. The thermal management control methods widely used at present, such as PID, although simple in structure, often exhibit insufficient robustness, lagging dynamic response, or poor steady-state accuracy when dealing with strong nonlinearity, large-scale variable-load conditions, and model uncertainties. MPC has theoretical advantages, but its heavy reliance on high-precision models greatly limits its practical engineering value.
Therefore, it is urgent to develop thermal management control strategies that can effectively overcome the inherent strong nonlinearity, parameter uncertainty, and external disturbances of high-power PEMFC systems. To solve the above problems, FLSMC is applied to the control strategy of the high-power PEMFC thermal management system in this study. The proposed control strategy reduces the complexity of the control design by linearizing the original complex nonlinear system into a linear equivalent system with the desired dynamics. At the same time, a saturation function is used to replace the original sign function in the sliding mode control, which retains the strong robustness and high precision of traditional SMC while effectively suppressing chattering and significantly reducing the adverse impact on the actuator. Finally, in order to verify the control effect, effective temperature management of the fuel cell stack under dynamic road conditions is achieved by simulation runs on driving cycles of different time scales. 2 System description and analysis To facilitate this study, the design of the PEMFC cooling system is simplified. Fig. 1 shows the schematic diagram of the cooling system. The cooling circuit is a conventional one consisting of the stack, a cooling water tank, a pump, a heat exchanger, a cooling machine, and a flow meter. When flowing through the heat exchanger, the cooling water from the pump exchanges heat with the coolant on the cold side, and the cooled water then enters the fuel cell stack to cool it. 2.1 Description of the fuel cell system The output voltage of a single cell is mainly composed of the Nernst open circuit voltage ( E nernst ), the activation overvoltage loss ( V activation ) due to the electrochemical reaction, the ohmic overvoltage loss ( V ohmic ) due to resistance, and the concentration voltage loss ( V concentrate ) due to concentration changes during the reaction.
Therefore, the single-cell voltage ( V cell ) can be calculated using a semi-empirical model [ 29 ]: (1) V cell = E nernst − V activation − V ohmic − V concentrate The Nernst open circuit voltage can be calculated using equation (2) : (2) E nernst = 1.229 − 8.5 × 10⁻⁴ ( T st − 298.15 ) + 4.3085 × 10⁻⁵ T st [ ln P H2 + ( 1 / 2 ) ln P O2 ] Since many studies have already analyzed the simulation modelling of PEMFC stack voltages, it is not described here; the specific stack voltage model used in this study can be found in Ref. [ 30 ]. The operating temperature of PEMFC is usually around 80 °C, and the performance of PEMFC can be improved at higher temperatures due to higher gas diffusivity and membrane conductivity. The voltage characteristic curves of a single cell at different operating temperatures were obtained experimentally. Fig. 2 shows the performance of the PEMFC under different current and temperature conditions. It can be seen that the power output increases continuously with increasing temperature, indicating that appropriately raising the operating temperature leads to higher performance. When the temperature reached 90 °C, however, the output of the fuel cell decreased significantly, indicating that an excessively high temperature seriously affects the output performance of the fuel cell; proper thermal management is therefore necessary. 2.2 Thermal management model analysis In this study, the outlet temperature of the cooling water ( T st,out ) of the fuel cell stack is taken to be approximately equal to the actual temperature ( T st ) of the stack [ 31 ].
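The voltage relations of Eqs. (1) and (2) above can be sketched in a few lines of Python; this is a minimal illustration assuming T st in kelvin, partial pressures in atm, and the three loss terms passed in as precomputed values (the full loss models are in Ref. [ 30 ]).

```python
import math

def nernst_ocv(t_st, p_h2, p_o2):
    """Nernst open-circuit voltage, Eq. (2); t_st in K, pressures in atm."""
    return (1.229 - 8.5e-4 * (t_st - 298.15)
            + 4.3085e-5 * t_st * (math.log(p_h2) + 0.5 * math.log(p_o2)))

def cell_voltage(t_st, p_h2, p_o2, v_act, v_ohm, v_conc):
    """Single-cell voltage, Eq. (1): OCV minus the three loss terms."""
    return nernst_ocv(t_st, p_h2, p_o2) - v_act - v_ohm - v_conc
```

At 353.15 K (80 °C) and unit partial pressures the logarithmic terms vanish, and the open-circuit term evaluates to about 1.182 V.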
Based on the conservation of energy, the total energy balance equation for a fuel cell stack consists of six parts: the input fuel energy ( Q fuel ), the energy of the input gas ( Q in ), the energy of the output gas ( Q out ), the electrical energy ( Q elec ) produced by the electrochemical reaction, the energy carried away by the cooling water ( Q cl ), and the heat dissipated to the surroundings ( Q loss ) [ 32 ]. (3) M st C st dT st /dt = Q fuel + Q in − Q out − Q elec − Q cl − Q loss The total fuel energy ( Q fuel ) is calculated from the electrochemical reaction as follows: (4) Q fuel = ΔH × n I st / ( 2 F ) The energy brought in by the reaction gas can be calculated as: (5) Q in = ( ṁ H2,ai C p,H2 + ṁ w,g,ai C p,w,g ) ( T ai − T atm ) + ( ṁ air,ci C p,air + ṁ w,g,ci C p,w,g ) ( T ci − T atm ) The energy carried away by the reaction gas can be calculated as: (6) Q out = ( ṁ H2,ao C p,H2 + ṁ w,g,ao C p,w,g ) ( T ao − T atm ) + ( ṁ O2,co C p,O2 + ṁ w,g,co C p,w,g + ṁ N2,co C p,N2 + ṁ w,l,co C p,w,l ) ( T co − T atm ) In Eqs. (5) and (6), T atm is the outside ambient temperature, which is assumed to be 25 °C. T ai and T ci represent the inlet temperatures of the anode and cathode gases, respectively. T ao and T co represent the outlet temperatures of the anode and cathode gases, respectively; in thermal management studies these are generally considered equal to the stack temperature. ṁ w,g,ai and ṁ w,g,ci represent the water vapor mass flow rates entering the stack with the anode and cathode gases, respectively. Since the water vapor brought in by the gases does not participate in the electrochemical reactions, it is assumed in this paper that the inflow equals the outflow for convenience. ṁ w,l,co is the mass flow rate of liquid water generated at the cathode of the fuel cell stack.
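The lumped balance of Eq. (3) integrates readily with an explicit Euler step; in the sketch below the stack thermal mass m_st and heat capacity c_st are illustrative placeholders, not the paper's parameters.

```python
def stack_temp_step(t_st, q_fuel, q_in, q_out, q_elec, q_cl, q_loss,
                    m_st=140.0, c_st=1100.0, dt=0.1):
    """One explicit-Euler step of Eq. (3). Energies in W, m_st in kg,
    c_st in J/(kg K), dt in s. Returns the updated stack temperature."""
    dT_dt = (q_fuel + q_in - q_out - q_elec - q_cl - q_loss) / (m_st * c_st)
    return t_st + dT_dt * dt
```

With all heat terms balanced the temperature stays constant; any net heat surplus raises it at a rate set by the thermal mass M st C st.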
The flow rates of the gases involved in the reaction and of the water produced by the reaction are calculated as follows: (7) ṁ O2 rec = N O2 rec × M O2 , ṁ H2 rec = N H2 rec × M H2 , ṁ w,l gen = N w,l gen × M H2O The molar flows of the reaction in the above equation are derived from: (8) N O2 rec = n I st / ( 4 F ) , N H2 rec = n I st / ( 2 F ) , N w,l gen = n I st / ( 2 F ) The gas flow rates into the anode and cathode are determined by an excess factor: (9) ṁ H2,ai = ṁ H2 rec × λ H2 , ṁ air,ci = M air × N O2 rec × λ O2 / 0.21 Based on the principle of conservation of mass, the mass flow rates of the outflowing gases are: (10) ṁ H2,ao = ṁ H2,ai − ṁ H2 rec , ṁ O2,co = ṁ O2,ci − ṁ O2 rec , ṁ N2,co = 0.79 × M N2 × N O2 rec × λ O2 / 0.21 Based on the definition of the humidity ratio, the mass flow rates of water vapor carried by the anode and cathode gases are: (11) ṁ w,g,ai = ( M H2O / M H2 ) × [ φ H2,ai P sat ( T ai ) / ( P ai − φ H2,ai P sat ( T ai ) ) ] × ṁ H2,ai , ṁ w,g,ci = ( M H2O / M air ) × [ φ air,ci P sat ( T ci ) / ( P ci − φ air,ci P sat ( T ci ) ) ] × ṁ air,ci The saturated vapor pressure can be approximated by: (12) P sat = 611.2 exp [ 17.62 T / ( 243.12 + T ) ] The heat absorbed by the cooling water as it flows through the stack is: (13) Q cl = ṁ cl C cl ( T st,out − T st,in ) The electrical energy of the stack is: (14) Q elec = n V cell I st The heat dissipated from the stack to the surroundings is: (15) Q loss = h st A ( T st − T atm ) The manipulated variable of the thermal management system is the input voltage of the pump, corresponding to different cooling water flow rates, while the external load current signal is regarded as a disturbance variable. Fig.
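Equations (8) and (9) map stack current directly to reactant flows; a short sketch follows, in which the molar masses and the default excess ratios λ are illustrative assumptions.

```python
F = 96485.0  # Faraday constant, C/mol

def reaction_molar_flows(i_st, n_cells):
    """Molar flows of Eq. (8): O2 consumed, H2 consumed, water generated (mol/s)."""
    return (n_cells * i_st / (4 * F),
            n_cells * i_st / (2 * F),
            n_cells * i_st / (2 * F))

def inlet_mass_flows(i_st, n_cells, lam_h2=1.5, lam_o2=2.0):
    """Anode/cathode inlet mass flows of Eq. (9); molar masses in kg/mol,
    0.21 being the O2 mole fraction of air."""
    m_h2_molar, m_air_molar = 2.016e-3, 28.97e-3
    n_o2, n_h2, _ = reaction_molar_flows(i_st, n_cells)
    return n_h2 * m_h2_molar * lam_h2, m_air_molar * n_o2 * lam_o2 / 0.21
```

As Eq. (8) implies, the hydrogen and generated-water molar flows are equal and twice the oxygen molar flow.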
3 (a)–(c) show the response of the stack temperature to steps in current, cooling water flow rate, and stack cooling water inlet temperature. It is worth noting that the results in (a) are obtained at a cooling water flow rate of 3 kg/s and a temperature of 40 °C; the results in (b) at a current of 600 A and a cooling water temperature of 40 °C; and the results in (c) at a current of 600 A and a cooling water flow rate of 5 kg/s. As can be seen from the figure, the high-power fuel cell thermal management system has a very large time-delay constant, and it is clear that lowering the cooling water temperature through increased fan speeds alone will not improve the situation, so a water-cooled heat exchanger needs to be designed. 2.3 Heat exchanger model design and analysis 2.3.1 Water tank model Water tanks are used to store cooling water and are usually made of insulating materials. The heat exchange between the water in the tank and the external environment is usually negligible, and the tank is modeled as follows: (16) M cl C cl dT hi /dt = ṁ cl C cl ( T st,out − T hi ) The temperature of the cooling water exiting the stack ( T st,out ) is equal to the temperature of the cooling water entering the tank, and the temperature at the tank outlet ( T hi ) is equal to the temperature of the cooling water entering the heat exchanger. 2.3.2 Heat exchanger model Cooling water flows out of the tank into the heat exchanger for heat exchange and then enters the fuel cell stack to cool it to the desired operating temperature. The heat exchanger controls the cooling water temperature at the hot end by controlling the coolant flow and temperature at the cold end. Neglecting pipe heat dissipation, the outlet temperature of the hot end of the heat exchanger ( T ho ) is the inlet temperature of the cooling water ( T st,in ) for the fuel cell stack.
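The tank dynamics of Eq. (16) reduce to a first-order lag, since the coolant heat capacity C cl cancels; a minimal numerical sketch, with the tank water mass and flow values as illustrative assumptions:

```python
def tank_outlet_step(t_hi, t_st_out, m_dot_cl, m_cl=20.0, dt=0.1):
    """One explicit-Euler step of the water-tank balance, Eq. (16):
    M_cl * C_cl * dT_hi/dt = m_dot_cl * C_cl * (T_st_out - T_hi).
    C_cl cancels, leaving a lag with time constant M_cl / m_dot_cl (s)."""
    return t_hi + (m_dot_cl / m_cl) * (t_st_out - t_hi) * dt
```

With ṁ cl = 2 kg/s and M cl = 20 kg, for example, the tank filters changes in the stack-outlet temperature with a 10 s time constant.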
The heat exchanger is modeled according to the literature [ 33 ] and improved upon as follows: (17) Ṫ ho = − [ kS / ( ρ h C p,h V h ) ] × Δt + [ 1 / ( ρ h V h ) ] × ṁ h ( T hi − T ho ) , Ṫ co = [ kS / ( ρ c C p,c V c ) ] × Δt + [ 1 / ( ρ c V c ) ] × ṁ c ( T ci − T co ) Here Δt is the mean temperature difference of the heat transfer, calculated as: (18) Δt = [ ( T hi − T co ) − ( T ho − T ci ) ] / ln [ ( T hi − T co ) / ( T ho − T ci ) ] , and Δt = [ ( T hi − T co ) + ( T ho − T ci ) ] / 2 when T hi − T co = T ho − T ci The heat transfer coefficient k of the heat exchanger is determined by: (19) 1 / k = 1 / h 1 + 1 / h 2 + δ / λ + R 1 + R 2 The convective heat transfer coefficients h 1 and h 2 on the two sides of the heat exchanger are determined by: (20) Nu = hL / β , Nu = 0.023 × Re^0.8 × Pr^0.4 , Re = ρVL / μ = 4 ṁ / ( μ × total wetted perimeter ) , Pr = Cμ / β In the above equations, β is the thermal conductivity of the fluid in the heat exchanger, V is the fluid flow rate, and μ is the fluid dynamic viscosity. For noncircular tubes or ducts, the hydraulic diameter L is used to compute the Reynolds and Nusselt numbers. It is defined as: (21) L = 4 × ( cross-sectional area for flow ) / ( total wetted perimeter ) Detailed parameters for the model development are listed in Table 1 . 3 Controller strategy 3.1 The feedback linearized sliding mode controller The thermal management system of an automotive fuel cell is a complex nonlinear system. Feedback linearization can eliminate the inherent nonlinearity of the system, but the traditional feedback linearized controller is sensitive to external disturbances and parameter uncertainties and cannot overcome external perturbations; this problem can be solved by adding a robust compensator. Sliding mode control is considered an effective method to improve the robustness and disturbance rejection of the controlled system.
Considering that fuel cell vehicles need to adapt to various complex road conditions, the feedback linearized sliding mode controller (FLSMC) is designed in this paper to achieve stable control of the stack temperature under different road conditions [ 34 ]. According to Equation (3) , the state equation of the system is defined as (22) ẋ = f ( x , t ) − g ( x , t ) × u where the state variable x = T st , the control input u = ṁ cl , f ( x , t ) = ( Q fuel + Q in − Q out − Q elec − Q loss ) / ( M st C st ) , and g ( x , t ) = C p,w,l ( T st − T st,in ) / ( M st C st ) . For a first-order system, the integral sliding mode surface is defined as (23) s = e + C ∫ e dt , e = x − x d where x d is the reference value and C is a non-zero positive number. The sliding mode convergence law is defined as (24) ṡ = − η sgn ( s ) − k s where η > 0 and k > 0. The exponential term −ks ensures that the system converges to the sliding mode surface at a large rate when s is large; the exponential convergence law is therefore particularly suitable for response control problems with large steps. In the exponential convergence law, in order to ensure fast convergence while weakening chattering, η should be decreased while k is increased. Therefore, the controller u is designed as (25) u = [ f ( x , t ) − ẋ d + C e + η sgn ( s ) + k s ] / g ( x , t ) Define the Lyapunov function V = ( 1 / 2 ) s² and take its derivative with respect to t: (26) V̇ = s ṡ = s ( ė + C e ) = s ( ẋ − ẋ d + C e ) = s ( − η sgn ( s ) − k s ) = − η | s | − k s² ≤ 0 This proves that the system state tends to be stable. 4 Results and discussion 4.1 Model validation In this paper, the HS-30 kW test platform is used to collect experimental data. Fig. 4 (a) shows Hephas Energy's HS-30 kW test bench used for experimental validation of the built model. The purity of the hydrogen and nitrogen used in the experiment is required to be ≥99.99 % (by volume fraction).
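The control law of Eqs. (22)–(25) can be sketched in Python, here already using the boundary-layer saturation function the paper adopts in place of sgn(s) to soften chattering. The gains C, η, k and the boundary layer θ below are illustrative placeholders, not the paper's tuned values.

```python
import math

def flsmc_control(x, x_ref, dx_ref, e_int, f_xt, g_xt,
                  C=0.5, eta=0.01, k=2.0, theta=0.05):
    """FLSMC coolant-flow command: enforces s_dot = -eta*sat(s) - k*s on
    the surface s = e + C*int(e) for the plant x_dot = f(x,t) - g(x,t)*u."""
    e = x - x_ref                                     # tracking error
    s = e + C * e_int                                 # sliding surface, Eq. (23)
    sat = math.copysign(1.0, s) if abs(s) > theta else s / theta
    u = (f_xt - dx_ref + C * e + eta * sat + k * s) / g_xt  # Eq. (25)
    return max(u, 0.0)                                # pump flow is non-negative
```

Substituting this u back into ẋ = f − g·u gives ṡ = −η·sat(s) − k·s, so the Lyapunov function V = s²/2 decreases, matching Eq. (26).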
The experimental steps are as follows: First, the stack to be tested is installed on the test bench and its airtightness is strictly checked. In order to ensure that there is no air residue in the cathode gas flow channel, the cathode side is purged with high-purity nitrogen for no less than 3 min. The humidification of the reaction gas is set at a constant 100 % relative humidity (RH), the stoichiometric ratio of the reaction gas is set at 2, and the current density test range is 0 to 2 A/cm². Fig. 4 (b) shows the comparison between the simulation results and the experimental data under the above conditions. As shown in Fig. 4 (b), the proposed model can accurately and reliably reflect the actual operation of the PEMFC, with the maximum error within 0.1 V. 4.2 The dynamic performance of the FLSMC In order to verify the control performance of the proposed control strategy, two traditional control strategies are introduced for comparison: Traditional PID control : two PID controllers control the stack temperature and the stack-inlet cooling water temperature, respectively. Fuzzy-PID control : a Fuzzy-PID controller and a PID controller control the stack temperature and the cooling water temperature, respectively. FLSMC : the FLSMC and a PID controller control the stack temperature and the cooling water temperature, respectively. To quantitatively analyze the control performance of these three controllers, the rise time t r , the settling time t s , and the overshoot σ are introduced to evaluate the dynamic performance of the FLSMC. The rise time t r is a measure of the response speed of the system, defined as the time required for the response to reach the final value for the first time.
The settling time t s , a comprehensive index for evaluating the response speed and damping of the system, is the shortest time required for the response to reach and remain within ±2 % of the final value, while the overshoot σ is defined as the percentage ratio of the maximum deviation of the response to the final value. 4.2.1 Regulation performance Under general operating conditions, the heat production of the fuel cell stack is directly proportional to the current: higher currents cause the stack to generate more heat, which affects the output power of the stack. Based on the above considerations, a load current with an initial value of 350 A and a step of 225 A at 500 s was applied, and the operating temperature of the stack was set to 70 °C. Fig. 5 shows the temperature control performance of the stack with the different controllers. From Fig. 5 (a), it can be seen that FLSMC has strong anti-interference capability compared with PID and Fuzzy-PID. When the current step occurs at 500 s, the temperature fluctuation under both PID and Fuzzy-PID control is greater than 1 °C, while the temperature fluctuation of the stack under FLSMC control is so small that it is almost negligible. Meanwhile, the rise time of FLSMC is about 7 s, which is 8 s and 5 s faster than PID and Fuzzy-PID, respectively. The settling time is 38.8 s, which is 77.2 s and 27.2 s faster than PID and Fuzzy-PID. The overshoot is 2.14 %, which is 2.15 % and 0.09 % less than PID and Fuzzy-PID. From Fig. 5 (b), the relative error of temperature under FLSMC control is significantly smaller than under the other two controllers and quickly converges to 0.001 %. In SMC, the main function of the sign function and the saturation function is to force the system state trajectory to reach and remain on the sliding mode surface, thereby providing strong robustness against parameter uncertainty and external disturbances.
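The rise time, settling time, and overshoot figures reported above can be extracted from a sampled step response; a generic sketch follows (not the paper's evaluation code), assuming the time vector starts at the step instant and the response does reach the final value.

```python
import numpy as np

def step_metrics(t, y, y_final, band=0.02):
    """Rise time (first crossing of the final value), settling time
    (first time after which y stays inside the +/-2 % band), and
    overshoot in percent of the final value."""
    t, y = np.asarray(t), np.asarray(y)
    t_r = t[np.argmax(y >= y_final)]                    # first crossing
    outside = np.abs(y - y_final) > band * abs(y_final)
    t_s = t[outside.nonzero()[0].max() + 1] if outside.any() else t[0]
    sigma = 100.0 * (y.max() - y_final) / y_final
    return t_r, t_s, sigma
```

For an underdamped trace such as y(t) = 1 − e^(−t) cos(3t), the function returns a rise time near 0.52 s, a settling time of a few seconds, and an overshoot of roughly 35 %.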
We define the sign function as: sign ( s ) = 1 for s > 0 , and −1 for s < 0. We define the saturation function as: sat ( s , θ ) = sign ( s ) for | s | > θ , and s / θ for | s | ⩽ θ , where θ > 0 is a design parameter called the boundary layer thickness. The sign function in FLSMC provides the strongest driving force: no matter how close the state trajectory is to the sliding mode surface (as long as s ≠ 0), it attempts to pull the trajectory back onto the surface, and this control force switches frequently near the sliding mode surface. As shown in Fig. 6 (a), the control variable (cooling water flow rate) under the sign function exhibits severe chattering due to the high-frequency switching, which gives the system the strong disturbance rejection shown in Fig. 5 : when the current steps, the stack temperature does not fluctuate significantly, but such chattering would damage the actuator (water pump or valve). In Fig. 6 (a), the darker the color, the higher the chattering frequency. When the saturation function replaces the sign function, the chattering disappears, as can be seen in Fig. 6 (b). Meanwhile, compared with PID and Fuzzy-PID, the cooling water flow rate under FLSMC reaches the expected value more quickly and stably ( Table 2 ). 4.2.2 Tracking performance In the tracking control simulation, the current steps at 500 s, 3000 s, 3730 s, and 5000 s, while the inlet cooling water temperature of the stack steps at 1000 s, 2500 s, 4500 s, and 5270 s. Fig. 7 shows the performance of the different controllers in tracking the stack temperature. From Fig. 7 (a), it is clear that when both the current and the cooling water temperature step, the FLSMC overcomes these disturbances well, and no significant fluctuation occurs while tracking the stack temperature.
In the 0–1500 s tracking period, the rise time of FLSMC is 15.4 s and 3.1 s faster than PID and Fuzzy-PID, the settling time is 21 s and 100 s faster, and the overshoot is reduced by 1.05 % and 1 %. In the 1500–3000 s tracking period, the rise time of FLSMC is not much different from that of Fuzzy-PID, but it is 14 s faster than that of PID, and the overshoot is reduced by 0.45 % and 0.6 % compared to PID and Fuzzy-PID. In the 3000–4500 s tracking period, the rise time of FLSMC is 19 s and 4 s faster than that of PID and Fuzzy-PID, and the overshoot is reduced by 0.72 % and 0.715 %. During the 4500–6000 s tracking period, the rise time of FLSMC is not much different from PID and Fuzzy-PID, but the overshoot is reduced by 1.428 % and 1.257 %. Fig. 7 (b) shows the relative errors of temperature tracking under the three controllers. During the four tracking periods (0–1500 s, 1500–3000 s, 3000–4500 s, and 4500–6000 s), the relative errors under FLSMC and Fuzzy-PID converge to 0 %, while the error under PID fluctuates around 0.01 % and does not converge to 0 %. More importantly, when the disturbances (steps in current as well as in cooling water temperature) appear in these four tracking phases, the relative error under FLSMC is maintained at 0 %, while those under PID and Fuzzy-PID fluctuate, with relative error fluctuations ranging from a minimum of 0.366 % to a maximum of 1.57 %. It is worth noting that when the inlet cooling water temperature of the stack steps, the temperature tracking shows significant fluctuations compared to when the current steps. This indicates that the temperature control of the cooling water is extremely important in the thermal management of the stack, especially in automotive power stacks. Fig.
8 shows the heat exchange performance of the water-cooled heat exchanger and the air-cooled radiator; it can be clearly seen that the rise time of the water-cooled design is 34.65 s faster than that of the air-cooled design, and the settling time is 218 s faster. It is worth noting that the cooling fan of the air-cooled radiator is the SPAL506 fan manufactured by SPAL, and in the simulation the number of fans had to be increased to 20 to achieve the results shown in the figure. Obviously, this is not suitable for the thermal management of automotive fuel cells ( Table 3 ). 4.2.3 Temperature control under vehicle conditions In real engineering applications the current does not present a simple step change, so vehicle drive-cycle test conditions are introduced in this section. According to the force analysis and the laws of mechanics, the vehicle longitudinal dynamics model is established, which can be calculated by the following equation: (36) F t = F w + F p + F j + F where F t is the driving force of the car, and F w , F p , F j , F are the air resistance, rolling resistance, acceleration resistance, and slope resistance, respectively, which can be calculated as follows: (37) F w = ( 1 / 2 ) ρ A C w v² , F p = m g f , F j = δ m a , F = m g sin θ where m is the vehicle mass, ρ denotes the air density, a is the vehicle acceleration, C w is the coefficient of air resistance, A denotes the windward area, δ is the rotational mass conversion coefficient, f is the rolling resistance coefficient, and θ is the slope. In order to improve the reliability of the simulation results, the optimal range of the FLSMC and PID parameters is first found by an empirical trial-and-error method, and the optimal values are then found within this range by an immune optimization algorithm. 4.2.3.1 NEDC The NEDC is the European standard driving test cycle, which includes four urban cycles and one suburban cycle.
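The drive-cycle speed traces below are converted to stack load through Eqs. (36) and (37); a sketch of that conversion follows, in which the vehicle parameters (mass, drag area, coefficients) are placeholder assumptions, not the paper's values.

```python
import math

def traction_force(v, a, m=1800.0, rho=1.205, A=2.5, c_w=0.32,
                   f=0.012, delta=1.05, theta=0.0, g=9.81):
    """Eqs. (36)-(37): total driving force as the sum of air, rolling,
    acceleration, and slope resistances. v in m/s, a in m/s^2, theta in rad."""
    F_w = 0.5 * rho * A * c_w * v ** 2   # air resistance
    F_p = m * g * f                      # rolling resistance
    F_j = delta * m * a                  # acceleration resistance
    F_s = m * g * math.sin(theta)        # slope resistance (bare F in Eq. (36))
    return F_w + F_p + F_j + F_s
```

At standstill on level ground only the rolling term remains, roughly 212 N with these placeholder parameters; acceleration and speed each add to the demanded force.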
The urban cycle lasts 780 s, with a maximum speed of 50 km/h; the suburban cycle lasts 400 s, with a maximum speed of 120 km/h. As shown in Fig. 9 (a), the NEDC is a steady-state condition with fairly uniform speed and little acceleration and deceleration. Fig. 9 (b) shows the temperature control performance of the three controllers under the NEDC. It can be seen from the graph that the rise time of FLSMC is 10 s faster compared with PID and Fuzzy-PID, while the error can be stabilized within 0.02 °C, even in the high-current region (1050–1200 s). Fig. 10 shows the control input rate under the NEDC; as can be seen from the figure, when the stack load undergoes continuous fluctuations, Fuzzy-PID is not very sensitive to them and can only adjust the cooling water flow to the trend of the load changes, whereas FLSMC makes the cooling water flow follow these continuous fluctuations. 4.2.3.2 WLTP As shown in Fig. 11 (a), the WLTP (Worldwide Harmonized Light Vehicles Test Procedure) is closer to actual road driving conditions compared to the NEDC. Its complete test cycle consists of four phases: low speed, medium speed, high speed, and ultra-high speed, and lasts a total of 1800 s, of which the idling time is 235 s; the travel distance is 23266 m, the average speed is 46.5 km/h, and the maximum speed is 131.3 km/h. Fig. 11 (b) shows the temperature control performance of the three controllers under the WLTP. From the figure, it can be seen that in the three regions of low (0–589 s), medium (589–1023 s), and high speed (1023–1500 s), the temperature control error of FLSMC can be kept roughly within 0.01 °C without continuous fluctuations, in contrast to PID and Fuzzy-PID. In the ultra-high-speed region, as the maximum flow rate of the pump is limited, the maximum error is 0.265 °C, but it is still lower than the errors of the other two controllers. Fig.
12 shows the control input rate under the WLTP; again, when the stack load undergoes continuous fluctuations, Fuzzy-PID is not very sensitive to them and can only adjust the cooling water flow to the trend of the load changes, whereas FLSMC makes the cooling water flow follow these continuous fluctuations. 4.2.3.3 CLTC-P As shown in Fig. 13 (a), the CLTC-P (China light-duty vehicle test cycle for passenger cars) includes three speed intervals: low speed, medium speed, and high speed. The cycle lasts 1800 s, of which the low-speed interval accounts for 37.4 % of the time, the medium-speed interval 38.5 %, and the high-speed interval 24.1 %; the average speed is 29.0 km/h, the maximum speed is 114.0 km/h, and the idling proportion is 22.1 %, which is more in line with road driving conditions in China. Fig. 13 (b) shows the temperature control performance of the three controllers under the CLTC-P. It can be seen from the figure that during the whole CLTC-P test cycle, the temperature under FLSMC does not show the continuous fluctuations seen under PID and Fuzzy-PID; at the same time, the maximum error is only 0.1 °C, and the error is stabilized at 0.01 °C during most of the test cycle. More importantly, the temperature control error under FLSMC is also stabilized at 0.01 °C in the high-speed region (1400–1800 s) of the test cycle. Fig. 14 shows the control input rate under the CLTC-P; as before, Fuzzy-PID can only adjust the cooling water flow to the trend of the load changes, whereas FLSMC makes the cooling water flow follow the continuous load fluctuations. 4.2.3.4 EPA75 The EPA75 test conditions are standards set by the U.S.
Environmental Protection Agency for testing the economy and emissions of passenger vehicles in urban conditions. As shown in Fig. 15 (a), the complete FTP75 cycle driving time is 1874 s, the theoretical driving distance is 17.77 km, the average speed is 34.12 km/h, and the maximum speed is 91.25 km/h; the cycle includes a cold start transient phase, a steady state phase, and a hot start transient phase. Fig. 15 (b) shows the temperature control performance of the three controllers under the EPA75. It can be seen from the figure that over the whole EPA75 test cycle, the maximum temperature control error under FLSMC is 0.03 ℃, which is much lower than that of PID and Fuzzy-PID; at the same time, the temperature does not show continuous fluctuations, and the control error tends to 0.003 ℃ overall. Fig. 16 shows the control input rate under the EPA75. As can be seen from the figure, when the stack load undergoes continuous fluctuations, Fuzzy-PID is not very sensitive to them and can only adjust the cooling water flow to the trend of the load changes, whereas FLSMC makes the cooling water flow track these continuous fluctuations. 5 Conclusion In this study, for the thermal management of a high-power vehicle PEMFC, a temperature-control-oriented model based on water-cooled heat exchangers was developed, and a feedback linearized sliding mode controller (FLSMC) was proposed. The main conclusions are as follows: (1) Water cooling has significantly better heat dissipation performance than air cooling: the rise time and the settling time of water cooling are 34.65 s and 218 s faster than those of air cooling, respectively, providing a better heat dissipation foundation for vehicle applications.
(2) FLSMC has excellent control performance: compared with PID and Fuzzy-PID, FLSMC strictly controls the temperature error within ± 0.02 °C under the drive cycles, with no continuous fluctuation, demonstrating extremely strong anti-interference ability. In summary, FLSMC has successfully achieved high-precision and fast-response control of the thermal management system for high-power fuel cells, providing key technical support for practical applications. However, there are still some limitations in this study. Firstly, the issue of coolant freezing in low-temperature environments was not taken into account. Secondly, FLSMC was not embedded in the actual system for verification. Future research priorities include exploring the combination of FLSMC with the latest artificial intelligence optimization algorithms, and analyzing the impact of uneven temperature distribution on fuel cell life. CRediT authorship contribution statement Yiyu Chen: Writing – review & editing, Methodology, Conceptualization. Mengjun Long: Writing – review & editing, Investigation. Sai Jiang: Validation, Data curation. Yuanli Liu: Software. Zizhang Zhan: Validation. Lihua Wang: Investigation. Zhongmin Wan: Project administration, Funding acquisition. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments The authors gratefully acknowledge the support of the National Key R&D Program of China (2022YFB4003801) and the Natural Science Foundation of Hunan Province (No. 2025JJ60326).
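The drive-cycle statistics quoted in Sec. 4.2.3 can be cross-checked with simple arithmetic; for instance, the WLTP's 23266 m travelled over 1800 s corresponds to the stated 46.5 km/h average speed. A minimal sketch (the helper name is ours, not from the paper):

```python
def average_speed_kmh(distance_m: float, duration_s: float) -> float:
    """Cycle-average speed over the full duration, idling included."""
    return distance_m / duration_s * 3.6  # m/s -> km/h

print(round(average_speed_kmh(23266, 1800), 1))  # 46.5 (WLTP)
```

The same one-liner reproduces any cycle's average speed whenever both the distance and the total duration are quoted.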
|
[
"CHEN",
"GONG",
"ZENG",
"CHANDEL",
"WANG",
"SUN",
"JIAO",
"GAO",
"HUANG",
"PHILIP",
"SONG",
"LIU",
"ZHANG",
"XU",
"YU",
"ZHANG",
"YANG",
"HU",
"AHN",
"LISO",
"XU",
"HAN",
"ZHANG",
"CHENG",
"CHEN",
"LIU",
"LIU",
"KIM",
"CHEN",
"WU",
"ALDAWERY",
"LIU"
] |
2fd3a11a150945b483daef2965532219_Serum E3 SUMO-protein ligase NSE2 level and peroxynitrite related to oxidative stress in nephrolithi_10.1016_j.apjtb.2016.12.008.xml
|
Serum E3 SUMO-protein ligase NSE2 level and peroxynitrite related to oxidative stress in nephrolithiasis patients
|
[
"Mehde, Atheer Awad",
"Yusof, Faridah",
"Mehdi, Wesen Adel",
"Raus, Raha Ahmed",
"Farhan, Layla Othman",
"Zainulabdeen, Jwan Abdulmohsin",
"Zainal Abidin, Zaima Azira",
"Ghazali, Hamid",
"Abd Rahman, Azlina"
] |
Objective
To investigate possible relations between serum E3 SUMO-protein ligase NSE2 (NSMCE2) concentration and peroxynitrite related to oxidative stress in nephrolithiasis patients.
Methods
A total of 60 patients with nephrolithiasis and 50 healthy volunteers were involved in this study. Colorimetric methods were used to measure blood urea, creatinine, uric acid, protein, albumin, total antioxidant status, total oxidant status, peroxynitrite, nitric oxide and the oxidative stress index. Glutathione, NSMCE2 and superoxide dismutase were measured by ELISA.
Results
A significant increase in level of peroxynitrite, total oxidant status, NSMCE2 and oxidative stress index in patients was observed, while total antioxidant status and glutathione were significantly decreased.
Conclusions
The study concluded that serum NSMCE2 significantly correlated with peroxynitrite and oxidative stress in patients with nephrolithiasis.
|
1 Introduction The prevalence of nephrolithiasis is increasing worldwide, and it has become one of the major health-related problems of recent years [1] . It is the presence of kidney calculi caused by a disorder in the equilibrium between solubility and precipitation of salts in the kidneys. A small stone can pass and produce slight pain, while a bigger stone may block the urinary tract and lead to severe pain and possibly bleeding [1,2] . In kidney tissue, reactive oxygen species are generated by the reaction with calcium oxalate or calcium phosphate crystals [3,4] . Peroxynitrite is the product of the reaction of nitric oxide with superoxide radicals. In cells, peroxynitrite reacts at a slow rate despite being a strong oxidant, and it passes through anion channels in cell membranes [5] . Nitric oxide is an important controller of kidney hemodynamics and tubular function at the level of the renal vasculature, glomerulus and renal tubules [6] . The E3 SUMO-protein ligase NSE2 is a component of the structural maintenance of chromosomes protein 5 (SMC5)-SMC6 complex, which is involved in double-strand DNA break repair by homologous recombination. It performs as an E3 ligase mediating SUMO attachment to several proteins, for instance SMC6L1 and TRAX [7,8] . No previous studies have reported on serum peroxynitrite together with NSMCE2 in patients with nephrolithiasis. Detection of the correlation of serum peroxynitrite and oxidative stress factors with NSMCE2 in patients with nephrolithiasis was the aim of the present study. 2 Materials and methods The study protocol was performed according to the Helsinki declaration and approved by the Institutional Ethics Committee, No. IIUM/305/14/11/2/IIUM Research Ethics Committee (IREC300). 2.1 Patients collection and samples storage A total of 60 patients with nephrolithiasis and 50 healthy controls were involved in the current study. The samples were collected from patients that were hospitalized at government health clinics in Kuantan, Pahang, Malaysia.
Patients with type 2 diabetes mellitus, diabetic nephropathy, heart disease, a history of alcohol intake, or taking potent antioxidants, as well as pregnant females and smokers, were excluded from the current study. Biochemical factors (blood sugar, urea, albumin, creatinine, protein, sodium, potassium and chloride) and a general urine test were used for categorization of cases and controls. Serum samples for the measurement of serum peroxynitrite and other biochemical parameters were stored at −20 °C. 2.2 Estimation of biochemical parameters The serum peroxynitrite was measured according to the method of Vanuffelen et al. [9] . Levels of the oxidative stress index (OSI), total antioxidant status (TAS) and total oxidant status (TOS) in sera of the studied groups were measured according to the methods developed by Erel [10,11] and Kumari et al. [12] . A modified method of Satoh was used to measure malondialdehyde (MDA) [13] . Serum NSMCE2, nitric oxide, superoxide dismutase (SOD) and glutathione (GSH) were measured by ELISA. Serum sodium, potassium and chloride were measured by an Olympus AU2700 analyser. Other clinical parameters including urea, creatinine, protein, albumin and uric acid were measured using commercial kits. 2.3 Statistical analysis The data analysis was conducted using SPSS version 20.0. Data were analysed using Pearson's correlation and the two-tailed Student's t -test. 3 Results Age and biochemical parameters for patients and controls are shown in Table 1 . No significant difference was found in serum urea, creatinine, protein, albumin, sodium, potassium and chloride in patients with nephrolithiasis in comparison to the control group. Table 2 shows that 38.33% of patients had a urine specific gravity less than the normal value, while there were increases in leukocytes and erythrocytes in 50.00% and 66.67% of patients respectively, and 36.68% had protein in their urine.
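The oxidative-balance index and the correlation analysis used here are simple computations. The sketch below shows the conventional Erel-style OSI ratio and a plain Pearson coefficient in pure Python; the ratio convention (TOS/TAS, any percentage scaling omitted) and the toy numbers are illustrative assumptions, not the study's data.

```python
import math

def oxidative_stress_index(tos, tas):
    """OSI as the TOS/TAS ratio (units and any percentage scaling assumed)."""
    return tos / tas

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear toy data give r = 1 up to rounding
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # 1.0
```

A statistics package (e.g. SPSS, as used in the study) would of course also supply the p-values that the significance statements rely on.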
A significant increase of serum peroxynitrite was shown in patients with nephrolithiasis in comparison to the control group ( Table 3 ). No significant difference was found in nitric oxide level between the studied groups ( Table 3 ). Serum TAS was significantly decreased in patients ( Table 3 ). Serum OSI, TOS, MDA and NSMCE2 were significantly increased in patients with nephrolithiasis compared to controls ( Table 3 ), while GSH showed a significant decrease in patients ( P < 0.01). Serum SOD and uric acid were similar to controls ( Table 3 ). Correlation values of serum NSMCE2 with other biochemical parameters are presented in Table 4 . A significant correlation was detected between NSMCE2 and peroxynitrite, nitric oxide, TOS, TAS, OSI, MDA and GSH in the patient group, while no significant correlation was observed in controls ( Table 4 ). 4 Discussion In this study, 27 of the patients were female and 33 were male. Several studies have shown that nephrolithiasis is more common in men than in women [14,15] . Governmental data exhibited a change in the female-to-male ratio in ureteral or kidney stone diagnosis from 1:1.7 in 1997 to 1:1.3 in 2002 [14,15] . Obesity may be one of the reasons for the rise in stone disease in women [16] . Serum peroxynitrite, nitric oxide, OSI, TAS, TOS and MDA are known as oxidative stress markers. The results showed higher levels of peroxynitrite, MDA, OSI and TOS, as well as decreased TAS and GSH, in patients with nephrolithiasis than in the healthy group, which is due to a disparity between pro-oxidants and antioxidants [17] . Patients with chronic kidney diseases have increased oxidative stress as well as progression of the disease [3,18] . Another study showed the association of kidney stones with free radicals [19] . The study of Ozbek showed increased oxidative stress with stone formation in human sera and cultures [20] .
The reaction between superoxide anions and nitric oxide leads to the formation of peroxynitrite, which produces lipid peroxidation, base modification, cysteine oxidation and dityrosyl-bridge formation. Through a series of reactions, the breakdown of peroxynitrite leads to the generation of peroxynitrous acid. Nitric oxide can decompose ONOOH. Through these mechanisms, nitric oxide works to abate the oxidation chemistry of reactive nitrogen-oxygen species [21] . Nitric oxide was reported to inhibit cell proliferation and induce differentiation. In addition, nitric oxide is a reactive compound and can react with superoxide, which may cause the production of additional damaging compounds like peroxynitrite. Nitric oxide can also be a very effective antioxidant against reactive oxygen species [21] . Several studies have shown that nitric oxide is a physiological modulator of peroxynitrite reactivity, which confirms the effects of nitric oxide on inflammation and reperfusion injury in animal models [22,23] . Peroxynitrite is the result of the reaction of the superoxide anion with nitric oxide, which clarifies the significant correlation between nitric oxide and peroxynitrite [23] . The present result agrees with other studies which hypothesized that nephrolithiasis is related to an increase in uric acid concentration in the blood [24,25] . However, the increase in uric acid in the present study is not significant. The significant increase in MDA and the low level of GSH are coherent with previous findings [19,26] . Another study demonstrated that the increase in peroxidation and the reduction of thiol concentration lead to an increase in oxalate binding activity, which increases the accumulation of stone components [26] . A non-significant difference in serum SOD in patients compared to controls is similar to the observations of other researchers [26] .
We describe here the increase in NSMCE2 in patients with nephrolithiasis compared to the control group, which is similar to a previous study [27] . We found in a previous study that the reduction of adenosine deaminase and AMP-aminohydrolase activities could cause a state of immune suppression, and also that the increase in NSMCE2 may play a role in the development of DNA damage alterations and inflammation disorders in patients with nephrolithiasis [28] . Our findings suggest that NSMCE2 is necessary for the inhibition of DNA damage-induced apoptosis through enabling DNA repair in cells. In this study, there is a significant correlation between peroxynitrite and TOS, TAS, OSI, MDA and GSH in patients with nephrolithiasis, while there is a non-significant correlation between peroxynitrite and TOS, TAS, OSI, MDA, SOD and GSH in controls ( Table 4 ). The data in Table 4 show the relationship of the oxidative markers with NSMCE2, which is assessed to act as an indicator in the prediction of kidney injury in patients with nephrolithiasis. Peroxynitrite damages DNA by removing a hydrogen atom from the deoxyribose in the sugar-phosphate backbone, which causes an opening of the sugar ring that leads to DNA strand breaks [29] . The present study showed that serum NSMCE2 was associated with oxidative stress markers, serum peroxynitrite and nitric oxide in nephrolithiasis patients. This might reflect an increased antioxidant reaction during stone formation and consequently increased oxidative stress. Conflict of interest statement We declare that we have no conflict of interest. Acknowledgments We are thankful to the International Islamic University of Malaysia for funding this project under the Research Management Centre Grant Scheme, Project No. IIUM/504/5/29/1 . We would also like to thank the Department of Urology and Department of Pathology, Hospital Tengku Ampuan Afzan for supporting this study.
|
[
"HAYATDAVOUDI",
"KARAOLANIS",
"KHAN",
"KHAN",
"RADI",
"SINGH",
"YUSOF",
"ABDULBARI",
"VANUFFELEN",
"EREL",
"EREL",
"KUMARI",
"SATOH",
"DROPKIN",
"SCALES",
"SOFIA",
"KHAN",
"RAHMAN",
"DEEPIKA",
"OZBEK",
"BAHADORAN",
"PUTRI",
"STANTON",
"XU",
"HAN",
"BUXI",
"YUSOF",
"PARK",
"NILES"
] |
7dd36c14248e43c2aa63360d328baed8_Radiative neutrino mass in an alternative U1BL gauge symmetry_10.1016_j.nuclphysb.2019.02.025.xml
|
Radiative neutrino mass in an alternative $U(1)_{B-L}$ gauge symmetry
|
[
"Nomura, Takaaki",
"Okada, Hiroshi"
] |
We propose a neutrino model in which neutrino masses are generated at one loop level and three right-handed fermions have non-trivial charges under $U(1)_{B-L}$ gauge symmetry in no conflict with anomaly cancellation. After the spontaneous symmetry breaking, a remnant $Z_2$ symmetry is induced and plays a role in assuring the stability of the dark matter candidate.
|
1 Introduction Radiatively induced neutrino mass models are attractive candidates to explain the smallness of neutrino masses. In such models, neutrino masses are not allowed at the tree level by some symmetries and are generated at loop level. Moreover, a dark matter (DM) candidate can easily be accommodated as a particle propagating inside the loop diagram generating the masses of neutrinos. Based on these ideas, one-loop induced neutrino models have been widely studied by many authors; for example, see refs. [1–98] . In addition, refs. [99–103] discuss the systematic analysis of (Dirac) neutrino oscillation, charged lepton flavor violation, and collider physics in the framework of the neutrinophilic and inert two Higgs doublet model (THDM), respectively. In many models, an additional discrete symmetry has to be imposed in order to forbid tree-level neutrino masses and to guarantee the stability of DM. However, the $U(1)_{B-L}$ gauge symmetry can play such a role by taking a non-trivial charge assignment for the standard model (SM) gauge singlet fermions, as shown in ref. [104] , where the lightest neutral particle with non-trivial charge can be a DM candidate. In this case, its stability is assured by a remnant $Z_2$ symmetry after the spontaneous $U(1)_{B-L}$ symmetry breaking. Thus it is interesting to construct a radiative neutrino mass model based on the alternative charge assignment of $U(1)_{B-L}$. In this paper, we construct and analyze a model of $U(1)_{B-L}$ with alternative charge assignment, in which neutrino masses are generated at one loop level by introducing some exotic scalar fields. Also we consider a physical Goldstone boson (GB), which is induced as a consequence of a global symmetry in our scalar potential, introducing two types of SM gauge singlet scalar fields with nonzero $U(1)_{B-L}$ charges and vacuum expectation values (VEVs).
We provide formulas for the neutrino mass matrix, the decay ratio of lepton flavor violating processes, and the relic density of our DM candidate, which is determined by interactions associated with the physical GB and the additional vector gauge boson $Z'$ from $U(1)_{B-L}$. Then numerical global and benchmark analyses are carried out to search for parameter sets that can fit the neutrino oscillation data and satisfy the experimental constraints of lepton flavor violations (LFVs) and the relic density of DM. This paper is organized as follows. In Sec. 2 , we show our model, and formulate the neutral fermion sector, boson sector, lepton sector, and dark matter sector. Also we analyze the relic density of DM without conflict with direct detection searches, and carry out a global analysis. Finally we conclude and discuss in Sec. 3 . 2 Model setup and phenomenologies In this section, we introduce our model. First of all, we impose an additional $U(1)_{B-L}$ gauge symmetry with three right-handed neutral fermions $N_{R_i}$ ($i = 1\text{–}3$), where the right-handed neutrinos have charges $-4$, $-4$ and $5$. Then all the anomalies we have to consider are $U(1)_{B-L}$ and $[U(1)_{B-L}]^3$, which are found to be zero [104] . On the other hand, even when we introduce two types of isospin singlet bosons $\varphi_1$ and $\varphi_2$ in order to acquire nonzero Majorana masses after the spontaneous $U(1)_{B-L}$ symmetry breaking, one cannot find active neutrino masses due to the absence of the Yukawa term $\bar L_L \tilde H N_R$. Thus we introduce an isospin singlet $s$ and an isospin doublet $\eta$ as inert bosons with nonzero $U(1)_{B-L}$ charges, and neutrino masses are induced at one-loop level as shown in Fig. 1 . Also the stability of DM is assured by a remnant $Z_2$ symmetry at the renormalizable level after the spontaneous $U(1)_{B-L}$ breaking, where $N_{R_i}$, $\eta$ and $s$ are $Z_2$-odd and the other fields are $Z_2$-even.
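The anomaly-free statement above can be checked arithmetically: the $U(1)_{B-L}$-gravitational and cubic anomaly contributions of the three right-handed fermions with charges $-4, -4, 5$ equal those of the conventional assignment of three charge $-1$ right-handed neutrinos (both sum to $-3$ linearly and cubically), so either set cancels the SM contribution. A quick sketch:

```python
# B-L charges of the three right-handed fermions in the alternative assignment
alt = [-4, -4, 5]
# conventional assignment: three right-handed neutrinos of charge -1
conv = [-1, -1, -1]

def linear(qs):
    """U(1)_{B-L} x [grav]^2 anomaly piece: sum of charges."""
    return sum(qs)

def cubic(qs):
    """[U(1)_{B-L}]^3 anomaly piece: sum of cubed charges."""
    return sum(q ** 3 for q in qs)

print(linear(alt), cubic(alt))    # -3 -3
print(linear(conv), cubic(conv))  # -3 -3
```

Since both assignments contribute identically, the alternative charges $(-4, -4, 5)$ leave the theory anomaly free whenever the conventional one does.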
Field contents and their assignments for fermions and bosons are respectively given in Table 1 . (Footnote 1: At the non-renormalizable level we would have a $Z_2$ breaking term inducing decay of DM, such as $\bar L N_{R_{1,2}} H \varphi_1^3$. Such a term is suppressed by the cut-off scale and we can assume DM is sufficiently long-lived.) Under these symmetries, the renormalizable Lagrangian for the lepton sector and the Higgs potential are respectively given by
(2.1) $-\mathcal L_L = (y_\ell)_{ab}\bar L_{L_a} e_{R_b} H + (y_\nu)_{ai}\bar L_{L_a}\tilde\eta N_{R_i} + y_{N_{i3}}\bar N_{R_i}^C N_{R_3}\varphi_1^* + y'_{N_{ij}}\bar N_{R_i}^C N_{R_j}\varphi_2 + \text{c.c.}$,
(2.2) $V = \mu_H^2 H^\dagger H + \mu_\eta^2\,\eta^\dagger\eta + \mu_s^2\, s^* s + \mu_{\varphi_1}^2\varphi_1^*\varphi_1 + \mu_{\varphi_2}^2\varphi_2^*\varphi_2 + \mu(s^2\varphi_2^* + \text{c.c.}) + \lambda_0(H^\dagger\eta\, s\,\varphi_1^* + \text{c.c.}) + \lambda_H(H^\dagger H)^2 + \lambda_\eta(\eta^\dagger\eta)^2 + \lambda_s(s^* s)^2 + \lambda_{\varphi_1}(\varphi_1^*\varphi_1)^2 + \lambda_{\varphi_2}(\varphi_2^*\varphi_2)^2 + \lambda_{H\eta}(H^\dagger H)(\eta^\dagger\eta) + \lambda'_{H\eta}(H^\dagger\eta)(\eta^\dagger H) + \lambda_{Hs}(H^\dagger H)(s^* s) + \lambda_{H\varphi_1}(H^\dagger H)(\varphi_1^*\varphi_1) + \lambda_{H\varphi_2}(H^\dagger H)(\varphi_2^*\varphi_2) + \lambda_{\eta s}(\eta^\dagger\eta)(s^* s) + \lambda_{\eta\varphi_1}(\eta^\dagger\eta)(\varphi_1^*\varphi_1) + \lambda_{\eta\varphi_2}(\eta^\dagger\eta)(\varphi_2^*\varphi_2) + \lambda_{s\varphi_1}(s^* s)(\varphi_1^*\varphi_1) + \lambda_{s\varphi_2}(s^* s)(\varphi_2^*\varphi_2) + \lambda_{\varphi_1\varphi_2}(\varphi_1^*\varphi_1)(\varphi_2^*\varphi_2)$,
where $\tilde H \equiv (i\sigma_2)H^*$ with $\sigma_2$ being the second Pauli matrix, $(a,b)$ runs over 1 to 3, and $(i,j)$ runs over 1 to 2. 2.1 Scalar sector The scalar fields are parameterized as
(2.3) $H = \begin{bmatrix} w^+ \\ \frac{v + h + i z}{\sqrt 2}\end{bmatrix}$, $\quad\eta = \begin{bmatrix}\eta^+ \\ \frac{\eta_R + i\eta_I}{\sqrt 2}\end{bmatrix}$, $\quad s = \frac{s_R + i s_I}{\sqrt 2}$, $\quad\varphi_i = \frac{v_i' + \varphi_{R_i} + i z'_{\varphi_i}}{\sqrt 2}\ (i = 1, 2)$,
where $w^+$ and $z$ are absorbed by the SM gauge bosons $W^+$ and $Z$ as Nambu-Goldstone bosons (NGBs), and one of the massless CP-odd bosons, obtained after diagonalizing the mass matrix in the basis of $(z'_{\varphi_1}, z'_{\varphi_2})$ with nonzero VEVs, is absorbed by the $B-L$ gauge boson $Z'$.
CP-odd scalar ($Z_2$ even): As a result, one physical massless CP-odd GB is induced, which is due to the breaking of a global symmetry in the scalar potential associated with $\varphi_{1,2}$; a global $U(1)$ symmetry under which $\varphi_1$ and $\varphi_2$ transform separately. Note that we have freedom to identify which component of $(z'_{\varphi_1}, z'_{\varphi_2})$ is the GB, and we choose $G \equiv z'_{\varphi_2}$ to be the GB in our analysis. One can identify the CP-odd boson of $\varphi_1$ as the NGB when $v_2' \ll v_1'$. Here we consider the CP-odd boson of $\varphi_2$, $z'_{\varphi_2}$, as the physical GB, and it contributes to phenomenologies such as DM. We also note that the existence of this physical Goldstone boson does not cause serious problems in particle physics or cosmology, since it does not interact with SM particles directly and decouples from the thermal bath in the early Universe. Also we assume that the coupling between $G$ and the SM Higgs is negligibly small by choosing parameters in the scalar potential, so the GB does not affect phenomenology; the contribution to the relativistic degrees of freedom by the GB is also small, since it decouples at an early stage of the universe due to its small interactions. CP-even scalar: Inserting the tadpole conditions, the CP-even mass matrix in the basis of $(\varphi_{R_1}, \varphi_{R_2}, h)$ with nonzero VEVs is given by
(2.4) $M_R^2 \equiv \begin{bmatrix} 2 v_1'^2\lambda_{\varphi_1} & v_1' v_2'\lambda_{\varphi_1\varphi_2} & v\, v_1'\lambda_{H\varphi_1} \\ v_1' v_2'\lambda_{\varphi_1\varphi_2} & 2 v_2'^2\lambda_{\varphi_2} & v\, v_2'\lambda_{H\varphi_2} \\ v\, v_1'\lambda_{H\varphi_1} & v\, v_2'\lambda_{H\varphi_2} & 2 v^2\lambda_H \end{bmatrix}$,
where we define the mass eigenstates $h_i$ ($i = 1\text{–}3$) and the mixing matrix $O_R$ through $(\varphi_{R_1}, \varphi_{R_2}, h)^T = O_R^T h_i$ and $m_{h_i} = O_R M_R^2 O_R^T$. Here $h_3 \equiv h_{SM}$ is the SM Higgs, therefore $m_{h_3} = 125$ GeV. In addition, we assume the mixing among the SM Higgs and the other CP-even scalars is small to avoid experimental constraints, for simplicity.
The inert scalar sector: we obtain the mass matrix in the basis of $(s_{R(I)}, \eta_{R(I)})$ as
(2.5) $M^2_{s_{R(I)}\eta_{R(I)}} = \frac12\begin{pmatrix} v_1'^2\lambda_{s\varphi_1} + v^2\lambda_{Hs} + \lambda_{s\varphi_2} v_2'^2 + 2\mu_s^2 & (-)\lambda_0 v_1' v \\ (-)\lambda_0 v_1' v & v_1'^2\lambda_{\eta\varphi_1} + v^2(\lambda_{H\eta} + \lambda'_{H\eta}) + v_2'^2\lambda_{\eta\varphi_2} + 2\mu_\eta^2 \end{pmatrix}$.
In our analysis, we assume $\lambda_0 \ll 1$ so that the mixing between $s_{R(I)}$ and $\eta_{R(I)}$ is small, and we apply the mass insertion approximation in calculating the neutrino mass matrix below. Thus each of the mass eigenvalues at leading order is given by
(2.6) $m^2_{s_{R(I)}} \approx m_s^2 \equiv \frac{v_1'^2\lambda_{s\varphi_1} + v^2\lambda_{Hs} + \lambda_{s\varphi_2} v_2'^2 + 2\mu_s^2}{2}$,
(2.7) $m^2_{\eta_{R(I)}} \approx m_\eta^2 \equiv \frac{v_1'^2\lambda_{\eta\varphi_1} + v^2(\lambda_{H\eta} + \lambda'_{H\eta}) + v_2'^2\lambda_{\eta\varphi_2} + 2\mu_\eta^2}{2}$,
where we omit the mixing effect as an approximation. For the charged scalar $\eta^\pm$, there is no mixing effect and its mass is simply given by
(2.8) $m_{\eta^\pm} = m_\eta$.
Thus we have degenerate mass eigenvalues for the components of the inert doublet $\eta$ in our approximation. Stability of the potential: The global minimum at $\langle\eta\rangle = \langle s\rangle = 0$ requires the following conditions [109] :
(2.9) $0 < (\lambda_H, \lambda_\eta, \lambda_s, \lambda_{\varphi_1}, \lambda_{\varphi_2}, \lambda_{\eta s}, \lambda_{Hs}, \lambda_{s\varphi_1}, \lambda_{s\varphi_2}, \lambda_{H\eta} + \lambda'_{H\eta}, \lambda_{\eta\varphi_1} + \lambda'_{\eta\varphi_2})$,
(2.10) $0 < \mu v_{\varphi_2}$, $\quad 0 < \lambda_{Hs}\lambda_{\eta\varphi_1} + \lambda_0^3$, $\quad 0 < (\lambda_{H\eta} + \lambda'_{H\eta})\lambda_{s\varphi_1} + \lambda_0^3$, $\quad 0 < \lambda_{H\varphi_1}\lambda_{\eta s} + \lambda_0^3$.
Physical Goldstone boson: Here we also discuss the decoupling of the physical GB from the thermal bath, where we assume it is thermalized via the Higgs portal interaction, following the discussion in ref. [105] . Note that the $Z'$ interaction is subdominant, since $Z'$ is heavy and its gauge coupling should be small from collider constraints, as we discuss later.
The effective interaction between our GB $z'_{\varphi_2}$ and the SM fermions is induced from the interactions $-\frac{1}{2 v_2'}\varphi_{R_2}\partial_\mu z'_{\varphi_2}\partial^\mu z'_{\varphi_2}$ and $\lambda_{H\varphi_2} v_2' v\,\varphi_{R_2} h$ together with the SM Yukawa interactions, such as:
(2.11) $-\frac{\lambda_{H\varphi_2} m_f}{2 m_{\varphi_{R_2}}^2 m_h^2}\,\partial_\mu z'_{\varphi_2}\partial^\mu z'_{\varphi_2}\,\bar f f$,
where $m_f$ is the mass of the SM fermion $f$, $m_h$ is the SM Higgs mass, and we take $\varphi_{R_2}$ as a mass eigenstate for simplicity. The temperature at which $z'_{\varphi_2}$ decouples from the thermal bath is roughly calculated by [105]
(2.12) $\frac{\text{collision rate}}{\text{expansion rate}} \simeq \frac{\lambda_{H\varphi_2}^2 m_f^2 (kT)^5 m_{PL}}{m_{\varphi_{R_2}}^4 m_h^4} \sim 1$,
where $m_{PL}$ denotes the Planck mass and $m_f$ should be smaller than $kT$ so that $f$ is in the thermal bath. The decoupling temperature is then estimated as
(2.13) $kT \sim 4.8\ \text{GeV}\left(\frac{m_{\varphi_{R_2}}}{100\ \text{GeV}}\right)^{4/5}\left(\frac{\text{GeV}}{m_f}\right)^{2/5}\left(\frac{0.01}{\lambda_{H\varphi_2}}\right)^{2/5}$.
Thus $z'_{\varphi_2}$ can decouple from the thermal bath sufficiently early and does not contribute to the effective number of active neutrinos [106] . 2.2 Gauge sector After $U(1)_{B-L}$ symmetry breaking we have a massive $Z'$ boson. In this model, $Z'$–$Z$ mixing could be induced only through a kinetic mixing effect, since the Higgs doublet does not have a $B-L$ charge. Here we assume the kinetic mixing is negligibly small, so we can avoid the constraint from the mixing effect. The mass of $Z'$ is then given by
(2.14) $m_{Z'} = g_{BL}\sqrt{v_1'^2 + 64\, v_2'^2} \simeq g_{BL} v_1'$,
where we have applied $v_1' \gg v_2'$ for the approximation. As we see below, the collider constraint indicates that the $U(1)_{B-L}$ breaking scale is $m_{Z'}/g_{BL} > 10$ TeV. 2.3 Fermion sector The mass matrix for the neutral fermions in the basis of $N_{R_{1,2,3}}$ is given by
(2.15) $M_N = \frac{1}{\sqrt 2}\begin{bmatrix} y'_{N_{11}} v_2' & y'_{N_{12}} v_2' & y_{N_{13}} v_1' \\ y'_{N_{12}} v_2' & y'_{N_{22}} v_2' & y_{N_{23}} v_1' \\ y_{N_{13}} v_1' & y_{N_{23}} v_1' & 0 \end{bmatrix}$,
and this matrix is diagonalized by a 3 by 3 orthogonal matrix $V_N$ as $M_{\psi_i} \equiv (V_N M_N V_N^T)_{ii}$ ($i = 1\text{–}3$), where $M_{\psi_i}$ is the mass eigenvalue. The mass eigenstates are given by $\psi_i = (V_N)_{ij} N_{R_j}$.
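Since the matrix of Eq. (2.15) is real and symmetric, its diagonalization can be sketched numerically with a standard symmetric eigensolver (the physical masses being the absolute eigenvalues). All Yukawa and VEV values below are illustrative placeholders, not benchmark points from the paper:

```python
import numpy as np

def neutral_fermion_masses(yp11, yp12, yp22, y13, y23, v1, v2):
    """Build M_N of Eq. (2.15) and diagonalize it.

    Returns |eigenvalues| (mass spectrum) and V_N such that
    psi_i = (V_N)_{ij} N_{R_j}; valid because M_N is real symmetric."""
    MN = (1 / np.sqrt(2)) * np.array([
        [yp11 * v2, yp12 * v2, y13 * v1],
        [yp12 * v2, yp22 * v2, y23 * v1],
        [y13 * v1,  y23 * v1,  0.0    ],
    ])
    w, V = np.linalg.eigh(MN)   # orthogonal V with V.T @ MN @ V = diag(w)
    return np.abs(w), V.T       # V_N = V.T in the convention above

# Illustrative inputs only (GeV): v1 >> v2 as assumed in the text
masses, VN = neutral_fermion_masses(0.5, 0.1, 0.4, 0.3, 0.2,
                                    v1=10000.0, v2=1000.0)
```

Note that negative eigenvalues of a Majorana mass matrix correspond to a field rephasing; taking absolute values, as here, is the usual shortcut in such a sketch.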
2.4 Lepton sector and lepton flavor violations The charged lepton masses are given by $m_\ell = y_\ell v/\sqrt 2$ after the electroweak symmetry breaking, where $m_\ell$ is assumed to be the mass eigenstate. The neutrino mass matrix is induced at the one-loop level as shown in Fig. 1 , and its mass-insertion-approximation form is given by (Footnote 2: Notice here that our one-loop function is different from the one of the Ma model [4] , since we apply a mass insertion approximation method.)
(2.16) $(M_\nu)_{\alpha\beta} = \frac{(\lambda_0 v v_1')^2\,\mu\, v_2'}{4\sqrt 2\,(4\pi)^2}\,\frac{(Y_\nu)_{\alpha i}\, M_{\psi_i}\, (Y_\nu^T)_{i\beta}}{m_s^6}\, F_\nu(r_\eta, r_{\psi_i})$,
$F_\nu(r_1, r_2) = \frac{(1 - r_1)(1 + r_1 - 2 r_2)(r_1 - r_2)(1 - r_2) - (1 - r_2)^2\left[r_2 + r_1(-2 r_1 + r_2)\right]\ln r_1 + (1 - r_1)^3 r_2 \ln r_2}{2(1 - r_1)^3(1 - r_2)^2(r_1 - r_2)^2}$,
with $r_i \equiv (m_i/m_s)^2$. Once we define $(Y_\nu)_{\alpha i} \equiv \sum_{j=1}^3 (y_\nu)_{\alpha j}(V_N^T)_{ji}$ and $D_\nu \equiv U_{MNS} M_\nu U_{MNS}^T \equiv U_{MNS}(Y_\nu R\, Y_\nu^T) U_{MNS}^T$, $Y_\nu$ can be rewritten in terms of observables and several arbitrary parameters as:
(2.17) $Y_\nu = U_{MNS}^\dagger D_\nu^{1/2} O R^{-1/2}$,
where $R_{ii} \equiv \frac{(\lambda_0 v v_1')^2\,\mu\, v_2'\, M_{\psi_i}}{4\sqrt 2\,(4\pi)^2 m_s^6} F_\nu(r_\eta, r_{\psi_i})$, $O \equiv O(\theta_1, \theta_2, \theta_3)$ is an arbitrary 3 by 3 orthogonal matrix with complex values satisfying $O O^T = 1$, and $D_\nu$ and $U_{MNS}$ are measured in [110] . Here the typical order of $R_{ii}$ is shown as
(2.18) $R_{ii} \sim 3.0\times 10^{-2}\left(\frac{\text{TeV}}{m_s}\right)^6\left(\frac{v_1'}{10\ \text{TeV}}\right)^2\left(\frac{\mu}{\text{TeV}}\right)\left(\frac{v_2'}{\text{TeV}}\right)\left(\frac{M_{\psi_i}}{\text{TeV}}\right)\lambda_0^2\ \text{GeV}$,
where we have taken the loop factor $F_\nu$ to be $\mathcal O(1)$ for simplicity. Taking $\lambda_0 = 0.01\,(0.1)$, we obtain $Y_\nu \lesssim 10^{-2\,(-4)}$, since the order of the neutrino mass is $\mathcal O(10^{-10})$ GeV.
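Eq. (2.17) is a Casas-Ibarra-like inversion: given measured mixing and masses, any complex orthogonal $O$ yields a Yukawa matrix reproducing $M_\nu = Y_\nu R\, Y_\nu^T$. A numerical sketch follows; the PMNS placeholder, the toy neutrino masses and the toy $R_{ii}$ values are all illustrative assumptions:

```python
import numpy as np

def complex_orthogonal(th):
    """3x3 complex orthogonal matrix from ONE complex angle (1-2 rotation);
    a full O(theta1, theta2, theta3) would multiply three such rotations."""
    c, s = np.cos(th), np.sin(th)
    O = np.eye(3, dtype=complex)
    O[0, 0], O[0, 1], O[1, 0], O[1, 1] = c, s, -s, c
    return O  # satisfies O @ O.T = 1 even for complex th

def yukawa_from_data(U, Dnu, R, O):
    """Y_nu = U^dagger Dnu^{1/2} O R^{-1/2}, cf. Eq. (2.17)."""
    return U.conj().T @ np.diag(np.sqrt(Dnu)) @ O @ np.diag(1 / np.sqrt(R))

U = np.eye(3, dtype=complex)                 # placeholder for U_MNS
Dnu = np.array([1e-12, 8.7e-12, 5.0e-11])    # toy light-neutrino masses [GeV]
R = np.array([1e-2, 2e-2, 3e-2])             # toy R_ii [GeV], cf. Eq. (2.18)
Y = yukawa_from_data(U, Dnu, R, complex_orthogonal(0.3 + 0.2j))

# Consistency check: Y R Y^T reproduces the input mass matrix
Mnu = Y @ np.diag(R) @ Y.T
assert np.allclose(Mnu, U.conj().T @ np.diag(Dnu) @ U.conj(), atol=1e-20)
```

The check works for any choice of the complex angle, because $O O^T = 1$ makes $O$ drop out of $Y_\nu R\, Y_\nu^T$.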
Lepton flavor violations: $\ell \to \ell'\gamma$ processes are induced from the neutrino Yukawa couplings at one-loop level, and their forms are given by
(2.19) $BR(\ell_\alpha \to \ell_\beta\gamma) \approx \frac{4\pi^3\alpha_{em} C_{\alpha\beta}}{3(4\pi)^4 G_F^2}\left|\sum_{i=1}^3 (Y_\nu^\dagger)_{\beta i}(Y_\nu)_{i\alpha}\, F_{lfv}(\psi_i, \eta^\pm)\right|^2$,
(2.20) $F_{lfv}(a, b) \equiv \frac{2 m_a^6 + 3 m_a^4 m_b^2 - 6 m_a^2 m_b^4 + m_b^6 + 12 m_a^4 m_b^2 \ln\!\left[\frac{m_b}{m_a}\right]}{(m_a^2 - m_b^2)^4}$,
where $\alpha_{em} \approx 1/137$ is the fine-structure constant, $G_F \approx 1.17\times 10^{-5}$ GeV$^{-2}$ is the Fermi constant, and $C_{21} \approx 1$, $C_{31} \approx 0.1784$, $C_{32} \approx 0.1736$. Experimental upper bounds are found to be [111,112] :
(2.21) $BR(\mu\to e\gamma) \lesssim 4.2\times 10^{-13}$, $\quad BR(\tau\to e\gamma) \lesssim 3.3\times 10^{-8}$, $\quad BR(\tau\to\mu\gamma) \lesssim 4.4\times 10^{-8}$,
where we define $\ell_1 \equiv e$, $\ell_2 \equiv \mu$, and $\ell_3 \equiv \tau$. Notice here that the muon $g - 2$ is negatively induced, which conflicts with the current experimental data. Here we scan some parameters and derive the allowed parameter region. The parameter ranges are chosen as
(2.22) $m_\eta \in [100, 1000]$ GeV, $\quad (M_N)_{ij} \in [100, 10000]$ GeV, $\quad m_s \in [100, 1000]$ GeV,
where we fix $\lambda_0 v_1' = 575$ GeV and $\mu = 100$ GeV. We then search for Yukawa couplings $(Y_\nu)_{ij}$ which can accommodate the neutrino oscillation data and satisfy the LFV constraints. Note also that we take degenerate masses for the neutral and charged components of $\eta$ to avoid constraints from the oblique parameters. In Fig. 2 , we show the global analysis satisfying the neutrino oscillation data and LFVs in terms of the lightest neutral fermion mass $M_{\psi_1}$ (which is identified as the DM candidate in the next subsection) and each Yukawa coupling squared related to the LFVs, where the input region includes all parameters used in the analysis of DM below. The black points show the allowed region from $BR(\mu\to e\gamma)$, the red points show the one from $BR(\tau\to e\gamma)$, and the blue points show the one from $BR(\tau\to\mu\gamma)$. All these constraints suggest that each Yukawa coupling squared is of the order $10^{-4}$ at most.
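For concreteness, the scan step implied by Eqs. (2.19)-(2.20) can be wrapped in a small routine. The overall prefactor and the index placement of the Yukawa product are our reading of the (extraction-damaged) source, and the input masses and Yukawas are placeholders, so treat this as a sketch rather than the paper's code:

```python
import math

ALPHA_EM = 1 / 137.0
GF = 1.17e-5  # Fermi constant [GeV^-2]
C = {(2, 1): 1.0, (3, 1): 0.1784, (3, 2): 0.1736}  # C_{alpha beta}

def F_lfv(ma, mb):
    """Loop function of Eq. (2.20); masses in GeV, ma != mb assumed."""
    num = (2 * ma**6 + 3 * ma**4 * mb**2 - 6 * ma**2 * mb**4 + mb**6
           + 12 * ma**4 * mb**2 * math.log(mb / ma))
    return num / (ma**2 - mb**2) ** 4

def br_lfv(alpha, beta, Ynu, m_psi, m_eta):
    """BR(l_alpha -> l_beta gamma) following Eq. (2.19).

    Ynu: 3x3 nested list (flavor row, heavy-state column) -- our convention."""
    amp = sum(Ynu[beta - 1][i].conjugate() * Ynu[alpha - 1][i]
              * F_lfv(m_psi[i], m_eta) for i in range(3))
    pref = (4 * math.pi**3 * ALPHA_EM * C[(alpha, beta)]
            / (3 * (4 * math.pi) ** 4 * GF**2))
    return pref * abs(amp) ** 2
```

In a scan one would evaluate `br_lfv(2, 1, ...)` for each sampled point and keep only points below the bound of Eq. (2.21).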
In Fig. 3 , we also show the LFV BRs as functions of $M_{\psi_1}$. We find that $BR(\mu\to e\gamma)$ tends to be slightly larger than the other BRs, while $BR(\tau\to e\gamma)$ and $BR(\tau\to\mu\gamma)$ have almost the same behavior. 2.5 $Z'$ boson production at the LHC Here we discuss the collider physics of the $Z'$ boson in the model. Our $Z'$ can be produced at the LHC since it couples to quarks due to the $B-L$ charge. Basically, $Z'$ can decay into particles with $U(1)_{B-L}$ charge if kinematically allowed. In our scenario, we consider the decay modes into a SM fermion pair, DM pairs, and $\varphi_{R_2} z'_{\varphi_2}$, where we consider negligibly small mixing in the CP-even scalar sector and assume the other modes are kinematically forbidden. Notice that, for DM, we focus on the lightest inert fermion $\psi_1$, defining $X \equiv \psi_1$ and $M_X \equiv M_{\psi_1}$. The gauge interactions of $Z'$ are given by
(2.23) $\mathcal L_{Z'} = \frac{g_{BL} Q_{BL}^X}{2}\bar X\gamma^\mu\gamma_5 X Z'_\mu + g_{BL} Q_{BL}^f\,\bar f_{SM}\gamma^\mu f_{SM} Z'_\mu - g_{BL}\bar\nu\gamma^\mu P_L\nu Z'_\mu + i\, 8\, g_{BL} Z'^\mu\left(\partial_\mu\varphi_{R_2}\, z'_{\varphi_2} - \varphi_{R_2}\,\partial_\mu z'_{\varphi_2}\right)$,
where $g_{BL}$ is the $B-L$ gauge coupling, $Q_{BL}^X \equiv -4 + 9(V_N^*)_{13}(V_N^T)_{31}$, applying the unitarity condition $V_N^\dagger V_N = 1$, and $Q_{BL}^f$ is the $B-L$ charge of the SM fermion $f_{SM}$. Then the partial decay widths are obtained as
(2.24) $\Gamma_{Z'\to\bar f_{SM} f_{SM}} = X_f\,\frac{(Q_{BL}^f g_{BL})^2}{12\pi}\, m_{Z'}\left(1 + \frac{2 m_{f_{SM}}^2}{m_{Z'}^2}\right)\sqrt{1 - \frac{4 m_{f_{SM}}^2}{m_{Z'}^2}}$,
(2.25) $\Gamma_{Z'\to XX} = \frac{(Q_{BL}^X g_{BL})^2}{24\pi}\, m_{Z'}\left(1 + \frac{2 M_X^2}{m_{Z'}^2}\right)\sqrt{1 - \frac{4 M_X^2}{m_{Z'}^2}}$,
(2.26) $\Gamma_{Z'\to\varphi_{R_2} z'_{\varphi_2}} = \frac{4 g_{BL}^2}{3\pi}\, m_{Z'}\left(1 - \frac{m_{\varphi_{R_2}}^2}{m_{Z'}^2}\right)^3$,
where $X_f = 1/2\ (1)$ for SM neutrinos (charged leptons and quarks). We estimate the $Z'$ production cross section using CalcHEP [118] with the CTEQ6 parton distribution functions (PDFs) [107] , implementing the relevant interactions. In Fig. 4 , we show $\sigma(pp\to Z')\, BR(Z'\to\ell^+\ell^-)$ with $\ell = \mu, e$ as a function of $m_{Z'}$, applying $m_{\varphi_{R_2}} = 200$ GeV and $M_X = 250$ GeV, which is compared with the current LHC limit.
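The dilepton branching fraction entering Fig. 4 follows from summing partial widths such as Eq. (2.24). A sketch under stated assumptions: the dimensional $m_{Z'}$ prefactor is restored (the extracted formula lacked it), fermion masses are neglected, and only a minimal SM-fermion total width is kept:

```python
import math

def gamma_ff(g_bl, q_f, m_zp, m_f, x_f=1.0):
    """Z' -> f fbar partial width per color/flavor state, cf. Eq. (2.24);
    the m_Z' factor is assumed here for dimensional consistency."""
    if 2 * m_f >= m_zp:
        return 0.0
    r = (m_f / m_zp) ** 2
    return (x_f * (q_f * g_bl) ** 2 / (12 * math.pi)
            * m_zp * (1 + 2 * r) * math.sqrt(1 - 4 * r))

def br_dilepton(g_bl, m_zp):
    """BR(Z' -> e+e- or mu+mu-) over an SM-fermion-only total width:
    B-L charges q = -1 for leptons, q = 1/3 for quarks (Nc = 3)."""
    gl = gamma_ff(g_bl, -1.0, m_zp, 0.0)                 # one charged lepton
    total = 3 * gl                                       # e, mu, tau
    total += 3 * 0.5 * gamma_ff(g_bl, -1.0, m_zp, 0.0)   # 3 neutrinos, X_f = 1/2
    total += 6 * 3 * gamma_ff(g_bl, 1 / 3, m_zp, 0.0)    # 6 quarks x 3 colors
    return 2 * gl / total                                # ell = e or mu
```

With massless fermions the coupling and mass cancel in the ratio, so this minimal BR is a constant 4/13, roughly 0.31; the model's true BR is smaller once the invisible $XX$ and $\varphi_{R_2} z'_{\varphi_2}$ modes of Eqs. (2.25)-(2.26) open up.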
We find that the gauge coupling g_BL should be O(0.01) or smaller to avoid this constraint when m_Z′ ≲ 1 TeV.

2.6 Dark matter

In our scenario, we focus on the lightest inert fermion ψ_1 as the DM candidate, with ψ_1 ≡ X and M_ψ1 ≡ M_X, as discussed in the previous subsection. Note also that we could have a bosonic DM candidate, although we omit that discussion in this paper (footnote 3: the bosonic DM candidate has been discussed in ref. [104]).

Firstly, we assume that the contribution from the Higgs-mediated interaction is negligibly small and that the DM annihilation processes are dominated by the gauge interaction with Z′; we can thus easily avoid the constraints from direct detection searches such as LUX [113]. One might expect a nonzero contribution to direct detection from the interaction via Z′, which would give a strong constraint on the gauge coupling. However, the constraint from DM direct detection is not significant in our model, since the vector current of DM, which induces the spin-independent cross section, identically vanishes due to the Majorana property of our DM X. The contribution to direct detection arises only from the axial-vector current (as shown below), which gives not a spin-independent cross section but a spin-dependent one. Therefore we do not need to consider the direct detection constraints, since all of them are safely satisfied.

Relic density: We have annihilation modes induced by the gauge and Yukawa interactions to explain the relic density of DM, Ωh² ≈ 0.12 [114]. The relevant Lagrangian in the basis of mass eigenstates is found to be (footnote 4: in general, the second term below is also proportional to M′_12 and M′_13, but these contributions are negligibly small when we take m_ψ2,3 larger than M_X by at least a few factors)

(2.27) −L = L_Z′ + i(M′_11/v_2′) X̄P_R X z_φ2′ (+ Σ_{β=2,3} i(M′_1β/v_2′) X̄P_R ψ_β z_φ2′) + (Y_ν)_{α1} ν̄_α P_R X (η_R − iη_I) + √2 (Y_ν)_{α1} ℓ̄_α P_R X η⁻ + c.c.,

where M′_11 ≡ Σ_{i,j=1,2} (V_N)_{1i} y′_Nij (V_N^T)_{j1} v_2′/√2 and f_SM denotes all the SM fermions. However, since the typical Yukawa couplings required to satisfy the LFV constraints are O(0.01), as we saw in the previous subsection, one finds that annihilation modes via the Yukawa couplings cannot be dominant. Thus we focus on the Z′-mediated processes and on the GB final state. The squared amplitudes for the processes XX̄ → f f̄, ν_a ν̄_a, 2z_φ2′ are then respectively given by

(2.28) |M̄(XX̄ → f f̄)|² ≈ (1/8)(s − 4M_X²) Σ_f |g_BL² Q_BL^X Q_BL^f / (s − m_Z′² + i m_Z′ Γ_Z′)|² × [cos²θ (s − 4m_f²) + 4m_f² + 3s],

(2.29) |M̄(XX̄ → νν̄)|² ≈ (3s/16)(s − 4M_X²) |g_BL² Q_BL^X / (s − m_Z′² + i m_Z′ Γ_Z′)|² (cos²θ + 3),

(2.30) |M̄(XX̄ → 2GB)|² ≈ −(|M′_11|⁴ / (2v_φ2′⁴)) × [2(2M_X⁴ + 2M_X²s − s²)s² + s cos²θ (s − 4M_X²)[s(s + 4M_X²) − 4M_X⁴ + s cos²θ (s − 4M_X²)]] / [s² − s cos²θ (s − 4M_X²)]²,

where s denotes one of the Mandelstam variables, θ is the phase-space angle, which is integrated from zero to π as shown below, and Γ_Z′ is the total decay width of Z′, in which the contributions from all SM fermions are included since we expect m_Z′ to be rather heavy. The total decay width of Z′ is given by summing the partial widths in Eqs. (2.24)–(2.26), when kinematically allowed. Note also that the Z′ mass is given by m_Z′ = g_BL √((v_1′)² + (8v_2′)²).
Then the relic density of DM is given by [115]

(2.31) Ωh² ≈ 1.07 × 10⁹ GeV⁻¹ / (√(g*(x_f)) M_Pl J(x_f)),

where g*(x_f ≈ 25) is the number of relativistic degrees of freedom at the freeze-out temperature T_f = M_X/x_f, M_Pl ≈ 1.22 × 10¹⁹ GeV, and J(x_f) (≡ ∫_{x_f}^∞ dx ⟨σv_rel⟩/x²) is given by [116]

(2.32) J(x_f) = ∫_{x_f}^∞ dx [∫_{4M_X²}^∞ ds √(s − 4M_X²) [W_f(s) + W_ν(s) + W_GB(s)] K_1(√s x/M_X) / (16M_X⁵ x [K_2(x)]²)],

(2.33) W_f(s) = [(s − 4M_X²)/(24π)] Σ_f C_f |g_BL² Q_BL^X Q_BL^f / (s − m_Z′² + i m_Z′ Γ_Z′)|² √(1 − 4m_f²/s) (2m_f² + s),

(2.34) W_ν(s) = [s(s − 4M_X²)/(16π)] |g_BL² Q_BL^X / (s − m_Z′² + i m_Z′ Γ_Z′)|²,

(2.35) W_GB(s) = [|M′_11|⁴ / (64π v_φ2′⁴)] [(3s² − 4M_X⁴)(π M_X⁴ / (2sM_X²(4sM_X² − s²)) − tan⁻¹[(s − 2M_X²)/√(s(4M_X² − s))] / (s^{3/2} √(4M_X² − s))) − 4],

where C_f is the color factor (C_f = 1 for leptons and C_f = 3 for quarks), W(s) is defined by (1/16π) Σ_a ∫_0^π dθ sinθ |M̄|², and we implicitly impose the kinematical constraints above. In Fig. 5, we show the relic density of DM as a function of M_X, fixing the following parameters (footnote 5: the GB mode does not depend much on the masses of ψ_2 and ψ_3):

(2.36) g_BL = |V_N13| = 0.0075, m_Z′ = 500 GeV, v_2′ = 500 GeV, |M′_11| = 150 GeV, 700 GeV ≲ M_ψ2,3,

for which v_1′ ≈ 66 TeV, consistent with the LEP bound g_BL/m_Z′ = 1/√((v_1′)² + (8v_2′)²) ≤ 1/(7 TeV) [117]. We find that the observed relic density can be obtained for m_Z′ ∼ 2M_X due to the resonance enhancement of the annihilation cross section. It also suggests that the relic density can be explained by the XX → 2GB mode, where 150 GeV ≲ |M′_1β| is required for M_X ≲ 100 GeV; for larger |M′_1β|, a heavier DM mass region is also allowed. Here let us explore this behavior of the relic density by considering the properties of the annihilation modes.
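As a sanity check on Eq. (2.31), the sketch below evaluates Ωh² for a velocity-independent ⟨σv_rel⟩, for which J(x_f) collapses to ⟨σv⟩/x_f; the value g* = 100 and the canonical thermal cross section are illustrative inputs, not the paper's benchmark. The last lines verify that the benchmark g_BL = 0.0075, v_1′ = 66 TeV, v_2′ = 500 GeV indeed reproduces m_Z′ ≈ 500 GeV.

```python
import math

M_PL = 1.22e19   # Planck mass in GeV
G_STAR = 100.0   # relativistic d.o.f. at freeze-out (illustrative)
X_F = 25.0       # x_f = M_X / T_f

def omega_h2(sigma_v):
    """Eq. (2.31) with a constant <sigma v_rel> in GeV^-2, so that
    J(x_f) = integral_{x_f}^inf dx <sigma v>/x^2 = <sigma v>/x_f."""
    J = sigma_v / X_F
    return 1.07e9 / (math.sqrt(G_STAR) * M_PL * J)

# canonical thermal cross section ~2.2e-26 cm^3/s, expressed in GeV^-2
print(omega_h2(1.89e-9))  # close to the observed 0.12

# benchmark consistency: m_Z' = g_BL * sqrt(v1'^2 + (8 v2')^2)
m_zp = 0.0075 * math.sqrt(66000.0**2 + (8 * 500.0)**2)
print(m_zp)  # ~496 GeV, near the 500 GeV benchmark
```

The inverse scaling of Ωh² with ⟨σv⟩ is also why the resonance region m_Z′ ∼ 2M_X, where the cross section is enhanced, carves out the allowed dip in the relic-density curve.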
As the DM mass increases, the cross section of the Z′-exchanging mode increases up to the resonant point M_X ∼ m_Z′/2 and then starts to decrease; its relic density therefore behaves in the opposite manner to the cross section. On the other hand, the cross section for the GB final state simply decreases when we increase the DM mass with the other parameters fixed, which increases the relic density of DM. In total, the relic density starts to increase up to M_X ≲ 230 GeV (before the resonant point), since there the GB mode contribution is stronger than that of the Z′-exchanging mode. Once it reaches the resonant point, however, the Z′ mode contribution becomes stronger due to the resonance. After that, the relic density simply increases with the DM mass due to both properties.

As a more comprehensive discussion, one might consider the case of coannihilation, which is possible in general. The simplest case would be that the masses of ψ_{1,2,3} are almost degenerate. In this case, the total cross section simply decreases; therefore, the allowed region around the resonant point becomes narrower, and the lightest DM mass satisfying the relic density increases. Other cases could involve coannihilation with Z′ or the CP-even bosons, but these are beyond our scope because the behavior of the relic density becomes very complicated.

3 Conclusion

We have proposed a model providing the neutrino masses and mixing at the one-loop level with a nontrivial U(1)_{B−L} gauge symmetry, based on the model proposed in ref. [104], in which a remnant Z_2 symmetry survives even after the spontaneous breaking of U(1)_{B−L}; a fermionic DM candidate has been discussed here instead of a bosonic one. We have given formulas for the neutrino mass matrix, the branching ratios of ℓ → ℓ′γ, and the relic density of DM.

Notice that, in our model, a physical GB appears as a consequence of having two kinds of bosons, φ_1 and φ_2, to break U(1)_{B−L}; we have selected z_φ2′ as the physical GB by taking v_2′ ≪ v_1′. We have then performed a global analysis to satisfy the neutrino oscillation data and the LFV constraints, and found that the typical Yukawa couplings are of order 0.01. This implies that the Yukawa contribution to the DM relic density is negligibly small, so we have not considered it in the DM analysis. Instead, we have considered the GB contribution to the relic density of DM.

In the DM analysis, we have shown the behavior of the relic density in terms of the DM mass for a benchmark point. The first solution arises from the contribution of the GB mode, and the second one comes from the Z′ mode as a resonance, m_Z′ ≃ 2M_X.

Acknowledgements

H. O. thanks Prof. Seungwon Baek for fruitful discussions, and is sincerely grateful to the KIAS members and all around.
|
[
"ZEE",
"ZEE",
"CHENG",
"PILAFTSIS",
"MA",
"GU",
"SAHU",
"GU",
"ARISTIZABALSIERRA",
"BOUCHAND",
"MCDONALD",
"MA",
"KAJIYAMA",
"KANEMURA",
"KANEMURA",
"KANEMURA",
"SCHMIDT",
"KANEMURA",
"FARZAN",
"KUMERICKI",
"KUMERICKI",
"MA",
"GIL",
"OKADA",
"HEHN",
"DEV",
"KAJIYAMA",
"TOMA",
"KANEMURA",
"LAW",
"BAEK",
"KANEMURA",
"FRASER",
"VICENTE",
"BAEK",
"MERLE",
"RESTREPO",
"MERLE",
"WANG",
"AHN",
"MA",
"CARCAMOHERNANDEZ",
"MA",
"MA",
"MA",
"MA",
"OKADA",
"OKADA",
"BRDAR",
"OKADA",
"BONNET",
"JOAQUIM",
"DAVOUDIASL",
"LINDNER",
"OKADA",
"MAMBRINI",
"BOUCENNA",
"AHRICHE",
"FRASER",
"FRASER",
"ADHIKARI",
"OKADA",
"IBARRA",
"ARBELAEZ",
"AHRICHE",
"LU",
"KOWNACKI",
"AHRICHE",
"AHRICHE",
"MA",
"NOMURA",
"HAGEDORN",
"ANTIPIN",
"NOMURA",
"GU",
"GUO",
"CARCAMOHERNANDEZ",
"MEGRELIDZE",
"CHEUNG",
"SETO",
"LU",
"HESSLER",
"OKADA",
"KO",
"KO",
"LEE",
"ANTIPIN",
"BORAH",
"CHIANG",
"KITABAYASHI",
"DAS",
"WANG",
"NOMURA",
"BOEHM",
"HE",
"FARZAN",
"HERREROGARCIA",
"SUEMATSU",
"RESTREPO",
"CEPEDELLO",
"WANG",
"GUO",
"LINDNER",
"CAI",
"SINGIRALA",
"WEINBERG",
"BRUST",
"NADOLSKY",
"AABOUD",
"BELANGER",
"GONZALEZGARCIA",
"BALDINI",
"ADAM",
"AKERIB",
"ADE",
"EDSJO",
"NISHIWAKI",
"SCHAEL",
"BELYAEV"
] |
847f057c55a44bc9b3b18f4f5e1eddb8_Defensible citadel History and architectural character of the Lahore Railway Station_10.1016_j.foar.2020.05.003.xml
|
Defensible citadel: History and architectural character of the Lahore Railway Station
|
[
"Ali, Naubada",
"Qi, Zhou"
] |
This study investigates the defensible character of the Lahore railway station, built in response to the war of independence of 1857, which greatly impacted the location and design of the building. It demonstrates the integral role played by the railway station in the development of the new colonial city, which the British wanted to be defensible in every aspect. Railways were introduced in Pakistan (then India) soon after their inauguration in Britain. Beyond providing a mode of transportation, the multifaceted contribution of railways to urban growth, a new architectural style, and new modes of construction and technology cannot be discounted. The research is based on the documentation and analysis of the history of the Lahore railway station's design. First, this study uses primary and secondary data to offer a history of the Lahore railway station from its inception to its final execution. Second, it explores the criteria adopted by the British for site selection to make the station a defensible post. The research findings include the visual features that enhanced the architectural character of the building. Qualitative methods are used, including literature review, archival data collection, analysis of photographs, and study of architectural drawings and old maps, to achieve these objectives.
|
1 Introduction The industrial revolution, with all its technological advancements, introduced railroads as a new mode of transportation. In the first quarter of the 19th century, the emergence of the steam engine gained extraordinary importance due to its revolutionary transformation of the transport structure and its introduction of new technologies. The first railway, the Stockton & Darlington Railway, opened in the UK in 1825 ( Acharya, 2000 ). "The British never really conquered India. But the railways did" ( Christian, 2017 ). It is difficult to imagine Pakistan (India at that time) without the contribution of railway networks. The arrival of the British was the most fascinating change. Railroads largely impacted the urban growth patterns, technology, building techniques, architectural design, and economic development of the country. Railways were introduced in India through the steam boat of Rowland MacDonald Stephenson, a young employee of the first Steam Navigation Company, who was later acknowledged as the "Father of the Indian Railways" ( Berridge, 1969 ). In 1845, he persuaded the directors of the East India Company to establish railways in India/Pakistan. In 1849, Lord Dalhousie contributed his best to accelerate the establishment of this new era of rail networks. Tracks of more than 23,000 miles were laid, and the railways became the most costly project undertaken by the British. In Pakistan, the first railway track of 105 miles between Karachi City and Kotri City was opened for public traffic on May 13, 1861. A double line of 21 miles was later built between Karachi City and the Karachi cantonment. The railway network gradually spread through the country, connecting it like a web. The network soon became a symbol of the power and identity of the British. There are many reasons for selecting the Lahore railway station for this study of history and design development. Lahore was an important historical city long before the Mughals.
Mughal emperors attracted commerce and residents by making the city a provincial capital from the 16th to the 18th century, and they gave the city grandeur in the form of beautiful architecture. The Sikhs followed in their footsteps, and the city remained the center of attention in Punjab as the political and commercial capital of Ranjeet Singh's kingdom (1801–1849). The British, the last foreign invaders, ruled Lahore from 1849 to 1947. They built many buildings incorporating their ideologies and styles of construction. They soon realized the historical and geographical importance of Lahore and established a rail network in the city. The Lahore railway station was one of the earliest railway stations built in Pakistan. It was a junction (worked by the Sindh, Punjab, and Delhi Railway Company) and the headquarters of the North Western railways, which enhanced the importance of both the city and the station. Given that the station was built shortly after the war of independence in 1857, it incorporates the features of both a train station and a defensible post. The railway system was established in Punjab as the Punjab Railway Company in 1862, and the Lahore railway station housed its entire administrative setup. Extensive studies have been conducted since the beginning of the railways (in 1853) to explore different aspects of British Indian railways: their history, engineering, associated infrastructure, railroad construction, and administrative setup. One of the key descriptions of the development and expansion of railways in Pakistan is that by Malik ( Malik, 1962 ). It includes data on history, track lengths, development, and income and expenditure by year. Some other books have also made significant contributions on the British Raj and the development of colonial India. Kerr, in his book ( Kerr, 2007 ), explained the initiation, pioneering decades, and expansion of railways in India (India, Pakistan, Bangladesh) and how the railways marked social improvement and advancement.
Christian Wolmar is a popular railway historian who described the creation, influence, and legacy of Indian railways in his latest volume. The book covers present-day Indian cities; thus, the architectural history of the Lahore railway station remained neglected ( Christian, 2017 ). An excellent effort was also made by Berridge (1969) , who served the North Western railways for 20 years. He explained the opening and construction of various lines in Punjab and discussed steel bridges and long-span structures. The railways of the Raj have also been discussed in the context of the historical development of railways in India ( Satow and Desmond, 1980 ). Railways were the single most costly project of the British in India. The current study mainly aims to understand the historical importance and visual character of one of the most important railway stations of the colonial period in India. No comprehensive study is available at present that introduces the tangible and intangible qualities communicating the architectural significance of the Lahore railway station to practitioners and researchers. Architects and historians should not only preserve but also document heritage buildings, which store a rich architectural history of their existence. This research is based on the documentation and analysis of the history of the Lahore railway station's design. The main objective is to describe the importance of station design in that period and the factors considered to ensure functional and secure buildings. To achieve these objectives, qualitative methods are used, including literature review, archival data collection, analysis of photographs, and study of architectural drawings and old maps. First, this study uses primary and secondary data to describe the history of the Lahore railway station from its inception to its final execution. Second, it explores the criteria adopted by the British for site selection to make the station a defensible post.
The research findings include the visual features that enhanced the architectural character of the building. The Lahore railway station is also compared with other railway stations of Punjab to give a clear picture of the region in which Lahore was considered the most important city, one that had to be defensible. 2 Significance of architectural character Buildings are unique due to their identity and distinguished architectural character. Many aspects make historical buildings significant. Character, including shape, materials, decoration, craftsmanship, site, and environment ( Nelson, 1988 ), defines the physical and visual appearance of buildings. It is important to identify the architectural characteristics of buildings and learn their lessons. The skills, expertise, and knowledge of traditional builders can keep local identity alive and contribute to the growth of a highly sustainable environment ( Asquith and Vellinga, 2006 ). Buildings tell many stories and can embody the past in the form of memories and feelings associated with events and people. Buildings are never alone and achieve their meaning through context. Site selection is important in defining the character of buildings, particularly historical ones. Given that location is important for the Lahore railway station, we highlight the history, site selection, and character that define the station building. 3 Construction of the Lahore railway station The walled city of Lahore was an irregular trapezium in shape, with its longest side toward the north. The north-west side of the city lay at a right angle to the Ravi River flowing nearby, as shown in Fig. 1 . During the Mughal period, the city gained considerable attention, and many tombs, mosques, and other buildings were constructed in the suburbs of the walled city. The Sikh nobility, following in their footsteps, built gardens mostly on the eastern side; however, they misused the Mughal buildings and took away the precious gems and stones.
The decayed and ruinous suburbs were described by travelers during Ranjeet Singh's reign ( Glover, 2008; Qadeer, 1983 ). The British took control of the city and made it the capital of the province because of its historical importance. The houses and offices of the first British residents were confined to the neighborhood of the old cantonments, which occupied a strip of alluvial soil to the south of the city, running parallel with an old bed of the Ravi. However, as the European population increased in numbers, their station gradually spread eastward. The map of Lahore in Fig. 2 clearly shows no village or garden on the north or west side of the Ravi River, because this area may have been subject to flooding. The civil station and the Anarkulli cantonment were already established on the south side. Anarkulli was abandoned as a cantonment in 1851–1852 due to the terrible mortality among the troops stationed there. The cantonment of Meean Meer was established to the east of the civil station, at a distance of about 3 miles, due to the unhealthiness of the former cantonment at Anarkulli ( Gazetteer, 1883–1884 ). As a result, the east side was finalized for the development and expansion of the railway. An additional advantage of the chosen site was its location near the Ravi River, which could be used as an alternative transportation route. Initially, the purpose of the railway station was to accommodate the staff, store goods, and facilitate passengers moving to and from the city. The site had a drawback: because of the presence of the ruins of the old city, laying the foundations on firm soil was difficult. However, the above-mentioned convincing advantages made the local administration and the railway company bear all the difficulties and the high cost of construction. The first evidence of the introduction of railways in Lahore is found in the Lahore Chronicle published in June 1852.
The article encouraged the idea of rail transport between the two cities, as it would support commercial activities and thus benefit the government. However, the first step toward the development of a railway line from Lahore to Amritsar was taken when a letter was written from the civil engineer's office on February 3, 1853. According to the letter, "To lay a single line of Rails on one side of the Grand Trunk Road from Lahore to Amritsar leaving the remaining width of the road for the ordinary traffic … which after deduction of cost of maintenance will secure a surplus income of 267,9325 rupees …, length of the line would be 36 miles" ( Punjab Government Civil Secretariat, 1853 ). It took a few years to finalize the project, and on July 15, 1857, Chief Engineer William Brunton presented the architectural drawings of the Lahore railway station to the Scinde Railway Company ( Khan, 2013 ). He also wrote a report on the selection of the site. The report indicated that generating revenue through rail transport was the motivation of the British. According to the report, "I have consulted the wants of the Meean Meer cantonment and have allotted a station at each end of their lines. The stations at Lahore, Umritsir, and Mooltan, I have placed more especially with a view to native passenger traffic, which will be the main source of revenue from passengers: they are also in suitable positions for the delivery and reception of goods" ( Andrew, 1857 ). In 1857–1858, Indian troops rebelled against the British for using animal grease in guns, which was religiously forbidden for Muslims and Hindus. That rebellion is known by several names: the Indian Mutiny, the Revolt of 1857, and, among the natives, the War of Independence. The rebels not only occupied British quarters and institutions but also killed many Europeans. The bloodshed during the war sent shock waves through colonial Britain, and the British no longer considered India a safe place to live.
Given that the project of the Lahore railway station had already been delayed, this fear of the natives greatly influenced the design of the railway station; they designed it more like a fortress. The foremost concern of the government was now securing the British troops and civilians against any native uprising. Thus, along with the availability of land, the location, and the risk of flood damage, safety from any future revolt became the top priority. The station was meant to be grand and imposing. In 1854, the station had been located within the cantonment, but Brunton forwarded the case and argued that it should be defensible in every aspect. Thus, the final location of railway stations, previously based on population density and the nature of the land, gained a new factor: after the mutiny of 1857, strategic location and defensible design were considered ( Satow and Desmond, 1980 ). In 1859, the foundation stone was laid by Sir (afterwards Lord) John Lawrence, the late Lieutenant Governor of the Punjab, with a trowel inscribed with the Latin motto "tam bello, quam pace," meaning "as in war, so in peace" ( Talbot, 1988 ). It described the façade of the station. The Lahore railway station was constructed by the late Mohamed Sultan, contractor to the Public Works Department. In 1860, the first train from Lahore to Amritsar ran for public traffic. The whole building was castellated and one of the finest and most substantial specimens of modern brickwork in the country, costing half a million rupees. By the end of 1861, 109¾ miles of the line had been constructed. The chronological order of the construction dates of the Lahore railway station is shown in Table 1 . 4 Historical character of the station Along with the style of construction and material, other intangible elements embody the significance of buildings. These elements are the events and memories of the people associated with the building that give it historical importance.
Many official reports and comments prove that the Lahore railway station was a debated topic during its construction. Many officials and travelers visited it later and mentioned the building in their reports and writings. In 1863, the Principal of Thomson College narrated the location and fort-like appearance of the building: "The Lahore Railway Terminus is about 400 yards distant from the Delhi Gate of the city, on the site of the old Sikh Cantonment of Nolukha, among the ruins of the ancient city. In designing the Passenger Station, it was thought advisable to give it a defensive character, as far as possible, and to arrange the defenses to require but a small garrison hence the Fort-like appearance of the present structure" ( Medley, 1865 ). The British wanted a building that would frighten the natives by its appearance. Brunton was successful in achieving that goal, and Burton remarked on the appearance of the station: "The face these stations presented to the outside world was grim: high walls, rounded corners that would deflect shot, battlemented towers and firing slits" ( Burton, 1996 ). Fig. 3 portrays the appearance of the station, which was also explained by I.J. Kerr as follows: "The 'fortified main station at Lahore' looked more like a medieval castle than a welcoming entrance to a key transport network. It was not just stations: The Rebellion led to the concern, at times an obsession, that was to last for decades among the authorities, namely ensuring the military security of the railway lines, bridges, tunnels and stations" ( Kerr, 1995 ). The manager of the North Western railway system, Lieutenant Colonel Boughey, R.E., also described the building: "It has connection with all the railways and all the principal places of India. It is therefore a busy center and the building itself (a castellated structure) is a fine piece of modern brick-work" ( Walker, 2006 ).
William J. Glover explained the building as follows: "The Lahore station, built during a time when securing British civilians and troops against a future 'native' uprising was foremost in the government's mind, looked like a fortified medieval castle, complete with turrets and crenellated towers, battered flanking walls, and loopholes for directing rifle and cannon fire along the main avenues of approach from the city" ( Glover, 2008 ), as shown in Fig. 4 . In 1875, the Prince of Wales (later King Edward VII) came to Lahore. He was welcomed with banners and a triumphal arch erected near the railway station, as shown in Fig. 5 . It was a grand reception attended by many rulers of the Punjab. The event highlighted the importance of the station as an entrance to the city. It was the first purpose-built British building, 400 yards from the Delhi gate. It served as a gate to the city designed by the British according to their ideas of a modern city and changed the urban life of Lahore ( Ali and Qi, 2019 ). The Lahore railway station has always played its part since its construction. One important contribution came during the Anglo–Afghan war in 1878, when the station handled 75 troop trains every 24 hours. It also supported the transportation of goods in the 1880s, which contributed to the emergence of Karachi as a major port. Another significant role came during the partition in 1947, when each train carried around 4000 passengers; however, trains often reached the station with only a handful of survivors. The building also acted as a refuge for those who wanted to leave the country: people hid in the railway station while they waited for the train to take them across the border. This historical character of the building is still alive in the minds and memories of the people due to the great partition and the loss of many lives.
5 Architectural character: identifying visual features of the Lahore railway station Rail stations can best be described as the "face of public transport" due to their role in the overall experience of the journey ( Hale, 2013 ). Initially, only train sheds were provided, covering the railway tracks and platforms. Train sheds alone could not fulfil all the functions; the provision of other facilities, such as a waiting area, protection from the weather, and access to the rail through other modes of transportation (such as horse and cart), was also essential ( Edwards, 2013; Griffin, 2004; Meeks, 1995 ). Thus, railway stations were developed. When the first railway stations were built more than 150 years ago, no guidance on either the function or the design of a railway station was available. "Every solution had to be invented" ( Carroll, 1956 ). The architectural character of the Lahore railway station is analyzed here according to four main features: station plan, elevation, masonry, and roof design. Designing for a large number of people, with entrances and exits that can handle arriving and departing passengers at the same time, is difficult for architects. Earlier building types, such as churches and theatres, also catered to large numbers of people at one time, but their design was not helpful for architects: in those buildings, the same entrance and exit could be used because worshipers and audiences entered and left at fixed times. Here, the plan of the Lahore railway station building is categorized according to the first and most basic classification, published by the editor of the Revue Générale de l'Architecture in 1846 ( Ching, 2014 ). He established an important criterion for the identification of stations and categorized four types, namely, one-sided, two-sided, head type, and L type, as shown in Fig. 6 . The basis for this division is the circulation routes of arriving and departing passengers and the linkage between the form and function of the buildings and the tracks.
In the early years of railway development, stations were simple and mostly one-sided. With the advancement of railways, the number of tracks increased, and stations had to cater to more passengers at a time. Later, two-sided stations were constructed to handle the departure and arrival of passengers through separate buildings. The first Euston Station in London, in 1839, is an example of a two-sided station. The Lahore railway station was designed according to the head house concept, in which the passenger walkways and concourses were placed in the center and other facilities on the sides. The head house and concourse proved to be the most significant feature of railway architecture because they provided a pragmatic solution to the volumes of train and passenger traffic ( Sheppard, 1996 ). Functionally, they allowed arriving and departing passengers to gather in the same area. Beyond the entrance through the portico lay a concourse, as shown in Fig. 7 . The concourse also acted as the main circulation area between the entrance and destination zones, "where passengers stop to consider their next action" ( Ross, 2000 ). The building was oriented in the north–south direction. It was rectangular, with two symmetrical blocks parallel to each other. Initially, there were four railway tracks and two platforms, each 519' long, as shown in Fig. 8 . Later, with the change in track gauge, two tracks were replaced with one. The interior was spacious, with arrival and departure platforms. The station was well planned to handle any emergency: huge gates were situated at the entrance and exit, and a heavy sliding door was embedded across the track to seal the station ( Davidson, 1868 ). In the beginning, the British did not expect railways to be popular among the natives; thus, only two platforms were built. The Lahore station was also a junction; thus, with the development of the railways, the number of platforms increased.
At present, there are 11 platforms in the Lahore railway station, all connected through steel bridges; passengers cross the tracks on foot using these bridges to reach the other platforms. The plan in Fig. 8 shows the three main platforms and the parallel activities, namely, the ticket office, station master's room, waiting areas, and refreshment rooms. The fort-like appearance of the station is also shown in Fig. 9 through the AutoCAD drawing by the author. Since ancient Greek times, arches have been a dominant feature of buildings, although the size, form, and function of arches have changed over time. The Mughals left a rich architectural inspiration for the British, and among their legacies, arches were an important feature. The Lahore railway station was one of the earliest purpose-built buildings of its kind; thus, the British tried to incorporate their own style rather than only following the Mughals. In the station building, two types of arches were mainly used: Tudor and Gothic arches. The Tudor arch had been used by the Mughals and was found in many of their buildings, whereas the Gothic arch was the British addition to this new style of architecture in India, later termed Indo-Saracenic architecture. The entrance to the station building is through the portico, with Tudor arches at both ends. The porticos were added to provide protection from severe environmental conditions; they also drew attention and made the entrance significant ( Arthur and Passini, 1992 ). The Tudor arch was most popular in England during the Tudor Dynasty (1485–1603) and remained an architectural feature during the late 19th and early 20th centuries as Tudor revival architecture. It was a four-centered arch defined by two key features. First, it had a pointed apex, finishing at a distinctive point, whereas the traditional arch used by the Mughals had a round or curved top.
The second key feature was the relationship between the rise and the span: the arch was much wider than it was tall, as shown in Fig. 10 (a). The portico was wide and low in height. The Tudor arch was therefore primarily used at the entrance and the windows; it gave a more welcoming appearance and, owing to its low height, did not disturb the character of the building. On the front-facing side of the portico, traditional arches were used, as shown in Fig. 10 (b). Inside the building, the major type of arch used was the Gothic arch, a sharp-pointed arch composed of two arc segments (parts of a circle), with the lower part of the arch parallel-sided up to the level of the springing points. It evolved from the round-topped Roman arch and was taller than a circular arch of the same width. This design also placed much less horizontal stress on the piers holding it up. The introduction of the Gothic arch allowed the building to be much taller and more open, permitting larger windows and requiring less raw material for support, as shown in Fig. 11 (a) and (b). The Gothic arch was used in the station for decorative purposes and to support the long-span structure; it reached higher than a normal arch of a given width and was less visible. A major consideration when building a masonry arch was the amount of horizontal thrust it produced on its foundations. The advantage of the Gothic arch was that it exerted only about half the side-thrust of the Roman arch, making this style a well-designed successor to the Roman arch. To support the large span of the roof structures, the British used the arches ingeniously, in two different ways, as shown in Fig. 12 . On one hand, an open style was used to give a more spacious appearance and keep the other side of the platform visible; on the other hand, parallel rows of Gothic arches were filled with masonry. This contrasting color and mode of construction gave the platform a magnificent and elegant appearance.
The design of railway stations became much more standardized between 1844 and 1890, with the acceptance of certain design elements that became symbols of railway stations. The tower, the bell, the clock, and the concourse were not only symbolic representations but also proved to be sources of wayfinding ( Quinn, 2008 ). With the arrival of the British Raj, clock towers obtained significant status in the major cities of Pakistan. Clock towers were also introduced in the station building, around the 1940s, as a source of audible cues, as shown in Fig. 13 ; before that, no system of timekeeping was available nationwide. Bells and clocks were the main elements of clock towers, and nearly all the major railway stations designed by the British had clock towers. Eight clocks were embedded in the twin towers, on all four sides. Keeping track of train timings was important because the arrival and departure of trains follows a timetable; thus, the clocks were the best means of signaling train timings to passengers. Soon, “station clocks became symbols, governing the comings and goings of trains and people” ( Sheppard, 1996 ). The Lahore railway station was designed to serve both for defense and as a train station. Turrets were one such design element: they provided a projected defensive position for covering fire, with holes for firing in case of any incident or revolt. Turrets were designed on both ends of the front elevation. They were small towers with crenellated circular tops, projecting vertically from the wall of the building, as shown in Fig. 14 . These curved structures allowed a 360° view of the outside world. Turrets were primarily used in military forts and castles for defensive purposes and were considered excellent defensive positions in times of war.
William Brunton, the architect of the railway station, explained that the whole station had a “defensive character,” such that a small garrison could secure it against enemy attack. The turrets also served decorative purposes in the building. This character was entirely different from that of the forts designed before the arrival of the British. The building was constructed entirely of brick masonry, as shown in Fig. 15 . At present, some parts of the building have exposed brick walls, while others are plastered and painted. The old bricks used during the Mughal period were slightly different from the modern bricks. The texture of the modern brick used in the station building appeared closer and smoother, and the edges were straighter and sharper than those of the old material. In addition, the modern brick was redder in color and thicker than the old brick, with a standard size: a length of 9″, a width of 4.5″, and a thickness of 3″ (the standard brick size used by the British in many buildings). These dimensions allowed a variety of bonds in construction. Nearly all buildings of the colonial period, and even of the post-colonial period, used these bricks because they were practical. In the Lahore railway station, both native bricks and the modern bricks of the British were used, for two reasons. First, the old bricks were readily available because the station was built near the ruins of the old city. Second, Mian Muhammad Sultan, the contractor of the Lahore railway station, was famous for selling old native bricks; he also used these bricks in railway stations, railway bungalows, and other buildings. The bricks were bonded and stacked together with a mortar joint, a mixture of sand, water, cement, and lime prepared from chalk or limestone burnt in a kiln and then hydrated, or slaked, with water.
The use of lime in the mortar gave the mixture a soft texture, which enabled the building to breathe freely. The outer surface had closely jointed brickwork and was pleasing to the eye; as a result, the architect William Brunton called it “the best in the world” and felt confident that it could survive even full-scale howitzer fire. Bricks were used throughout, the outer surface had carefully closed joints, and the masonry could hardly be bettered even today. During the pre-colonial period, traditional builders were expert in the construction of brick masonry arches, while European experts had technical knowledge based on scientific calculations of structural members and the strength of roof materials. The combination of the two resulted in a sustainable and economical roofing solution: jack arches. In the Lahore railway station, jack arches were used as the roofing system. (Footnote 1: The original drawings are unavailable for the Bahawalpur and Hasanabdal railway stations; thus, recent pictures are used.) The development of the railways was one of the main driving forces behind the introduction of steel in various forms. Strong links existed between the railway and steel construction industries, not only in the demand for steel for rails and locomotives but also in infrastructure development, particularly bridges, stations, and warehouses. Train sheds have been called the single most important design innovation of the 19th century ( Brown, 2005 ). In the station building, steel and iron were used mainly for the truss design, the sheds, and the connecting bridges between platforms. The trusses supporting the roof, shown in Fig. 16 (a), are the first thing noticed upon entering the station building; they were a distinctive feature of the railway station. Before that, people were used to seeing decorative, floral-painted ceilings, as in most Mughal buildings. This feature continued in the platform and train shed.
Together with the beautiful complementing arches on the sides of the platform, the character of the steel trusses on the roof made the station a highly engineered and technical project for its time, as depicted in Fig. 16 (b). The truss used was a modified queen post. Wooden planks and galvanized iron (G.I.) sheets were used as the roof covering the trusses. In 1983, the corroded G.I. sheets and wooden planks of four rooms at platform number 4 were replaced; Fig. 17 shows the amendments made at that time. The roof design of the platforms differed from that of the rest of the station building. A great deal of steel was used for the trusses because the railway had much residual steel that could no longer be used for the tracks; accordingly, the engineers used it for the long-span roofs of station platforms, as is visible at the Lahore railway station as well. The steel trusses on the platform were covered with G.I. sheet walling 5 feet in height. Transparent corrugated sheets were used on top to admit sufficient light, and louvers with a ridge were installed at the center for light and ventilation. On both ends, a gutter was provided for the drainage of rainwater. The span of the roof was 60 feet, with a height of 22 feet. The section of the platform is shown in Fig. 18 (a), and the side elevation in Fig. 18 (b). Every part of the station building was well planned and well thought out. 6 Present status of the Lahore railway station After the British felt secure, and with the increasing demands of rail transport, a few changes were made to the design and elevation of station buildings. However, the elevation of the building has not changed since the partition in 1947, as shown in Fig. 19 ; only repair work has been done to strengthen the structure. At present, the Lahore railway station is protected under the Punjab Special Premises Preservation Ordinance (1985), and no changes that would damage or alter the character of the building may be made.
7 Comparison of the Lahore railway station with other stations of Punjab The railway continued to expand, and other stations were built in Punjab Province. Here, some major stations built from 1860 to 1948 are compared with the Lahore railway station. The original architectural plans and pictures (where original drawings were unavailable) of the stations are shown in Fig. 20 (a–g) . The comparison is presented as a table covering the architectural style and other structural features of these stations, including the number of storeys, masonry type, wall thickness, supporting structure, and room heights, as shown in Table 2 . This helps analyze the technology the British used to build these stations. 8 Results and discussion The comparison of the Lahore railway station with the other stations shows that the British constructed single-storey railway stations. The offices and other facilities in the Lahore railway station were provided on the ground floor, although access was provided to the clock towers and turrets. Brick masonry was used in most cases, the exception being the Hasan Abdal railway station, owing to the location of the city. The thickness of the walls also remained similar, varying from 13.5″ to 18″. The roofing of these stations was done with jack arches resting on steel I-beams, a system already in use in Europe for industrial buildings that required large-span structures; the British used this system initially in every railway station and, by the 20th century, in every railway building. Only a few stations had concourses, because these were built in major cities catering to large populations. Although the Sahiwal and Gujranwala railway stations were built a few years after the Lahore railway station, the defensive character is not visible in any other station. The reason was that Lahore was the capital of the province and the junction station connecting the Karachi–Peshawar line and the Lahore–Amritsar line.
This strategic location of the city demanded that the station be defensive in every aspect. Furthermore, in the pre-colonial period, forts had been constructed for defensive purposes; with the Lahore station, the British constructed for the first time a railway station in India with a defensive character, and this feature remained linked with the Lahore station alone. The British kept a basic symmetry in their design of railway stations, as shown in Table 2 . All the main stations had a longitudinal plan running parallel to the railway line, with the facilities and offices planned side by side facing the platform, as shown in Fig. 20 . These plans also showed symmetry in the sizes of the rooms and in the elevations. After partition, however, the construction and modification of railway stations varied, according either to the historical value of the city or to the requirements of the area, using modern techniques. Only the stations protected under government law have remained the same, at least in their visual character; the Lahore railway station is one such example. Currently, the Lahore railway station is no longer used as a defensive building; it now fulfills only the transportation needs of passengers. The elevation has been maintained in its original condition and stands as an emblem of the golden era of the British Raj. 9 Conclusion The Indian railway network was the largest technological project undertaken by the British in the 19th century. The magnitude of the project, the massive difficulties, and the short duration of its achievement made it the most daring experiment of the colonial period in economic and engineering terms. The presence of the fort and the wall around the old city shows that defense had always been important to previous rulers as well. However, the construction of the Lahore railway station was an advancement in terms of its design, its materials, and the dual purpose of the building. It showed how the British used transportation as a symbol of power.
It is the only station in Pakistan that was constructed not only for the defense of the city but also to provide a safe exit for the British in case of any future revolt. This study highlights the importance of the design aspects of buildings and provides a guide to the purpose of each design element beyond beautification. The selection of design elements gave the building a strong architectural character: from its setting in the urban fabric to its overall shape, each part of the building enhanced the visual character of the Lahore railway station. This study also shows that stations can be multipurpose if they are designed strategically, and that both tangible and intangible factors should be considered in the study of historical buildings. Had the war of independence not happened, the design of the station would have been entirely different, perhaps similar to other Mughal buildings or a completely colonial structure; even the location of the station would have been different. Given that stations played a role in city development, the whole expansion of Lahore City would also not be what it is today. Conflict of interest There is no conflict of interest. Acknowledgment This work was supported by the National Natural Science Foundation of China (No. 51778123 ). We thank the Pakistan Railway and Pakistan Archives Department for their help in data collection.
|
[
"ACHARYA",
"ALI",
"ANDREW",
"ARTHUR",
"ASQUITH",
"BERRIDGE",
"BROWN",
"BURTON",
"CARROLL",
"CHING",
"CHRISTIAN",
"DAVIDSON",
"EDWARDS",
"GAZETTEER",
"GLOVER",
"GRIFFIN",
"HALE",
"KERR",
"KERR",
"KHAN",
"MALIK",
"MEDLEY",
"MEEKS",
"NELSON",
"QADEER",
"QUINN",
"ROSS",
"SATOW",
"SHEPPARD",
"TALBOT",
"WALKER"
] |
cf33cb59418e425e849ba5dd477696e9_Elucidating divergent biology in uterine carcinosarcoma_10.1016_j.tranon.2025.102506.xml
|
Elucidating divergent biology in uterine carcinosarcoma
|
[
"Garg, Vikas",
"Prokopec, Stephenie D.",
"Stone, Simone C.",
"Pakbaz, Sara",
"Chen, Min Li",
"Lam, Bernard",
"Benito, Czin Czin",
"Mcmullen, Michelle",
"Lungu, Ilinca",
"Rossi, Samanta Del",
"Msan, Anthony",
"Bowering, Valerie",
"Sotov, Valentin",
"Tran, Christine",
"Butler, Marcus O.",
"Oza, Amit M.",
"Diamandis, Phedias",
"Wang, Ben X.",
"Lheureux, Stephanie"
] |
Objectives
Uterine carcinosarcoma (UCS) is an aggressive malignancy characterized by epithelial (C) and mesenchymal (S) components, with complex biology and poor treatment response. This study aims to enhance understanding of UCS through genomic, epigenomic, and transcriptomic analysis.
Methods
Microdissected (C and S) tumor samples were processed for whole-genome sequencing (WGS), RNA-sequencing, and enzymatic methylation sequencing (EM-Seq). Multiplex immunohistochemistry (mIHC) and computational pathology techniques were employed to assess the tumour microenvironment (TME).
Results
WGS and EM-seq of 18 samples from 9 patients revealed a low tumor mutation burden (TMB; median = 0.97 mutations/Mb) and no evidence of microsatellite instability (MSI). Driver mutations were identified in TP53 (94 %), PIK3CA (33 %), and PPP2R1A (22 %). Copy-number (CN) analysis revealed recurrent amplifications of MYC (67 %), PIK3CA (61 %), CCNE1 (56 %), AKT2 (44 %), and SMARCA4 (39 %). Comparative analysis of the C and S regions revealed no significant differences in mutation frequency, CN, transcriptomic and methylomic profiles. Both regions exhibited global hypomethylation, with functional enrichment for xenobiotic metabolism pathways in C and epithelial-to-mesenchymal transition pathways in S regions. Comparative mIHC performed on 21 cases showed similar T cell and B cell densities, but a higher density of tumour-associated macrophages and PD-L1+ cells in the S component. Computational morphologic analysis showed substantial histomorphologic heterogeneity within and across UCS cases.
Conclusion
By elucidating the complex interplay between the epithelial and mesenchymal components, this study enhances our understanding of UCS and informs the development of novel therapeutic strategies targeting both genomic alterations and the TME.
|
Introduction Uterine carcinosarcoma (UCS) represents a distinctive tumor subtype, accounting for approximately 4.5 % of all uterine malignancies. Over the past two decades, there has been a notable and concerning upward trend in the incidence of UCS, rising from 2.2 cases per 1,000,000 individuals in the year 2000 to 5.5 cases per 1,000,000 individuals in 2016. UCS typically presents at an advanced stage, with a substantial majority (around 60 %) of patients presenting with either regional disease or distant metastasis. The prognosis is particularly grim, with a five-year relative survival rate of less than 40 % [ 1–3 ]. UCS is a biphasic neoplasm characterized by the coexistence of epithelial/carcinomatous and mesenchymal/sarcomatous elements within the tumour. The epithelial component predominantly exhibits serous histology, with less frequent occurrences of endometrioid or mixed histological subtypes (comprising serous, endometrioid, clear-cell, or adenocarcinoma). In contrast, the mesenchymal component may encompass either homologous constituents originating from Müllerian tissues, such as leiomyosarcoma, fibrosarcoma, or endometrial stromal tumours, or heterologous components derived from non-native tissues, including rhabdomyosarcoma, osteosarcoma, liposarcoma, or chondrosarcoma. The substantial heterogeneity observed in the composition and proportions of these epithelial and mesenchymal components may play a pivotal role in dictating the prognosis of UCS, underscoring its clinical relevance and complexity [ 4 ]. Further, to explain the origin of UCS, three prominent theoretical models have been proposed: the collision theory, the combination theory, and the conversion theory. The collision theory postulates that UCS arises from the simultaneous yet independent development of carcinoma and sarcoma components that fortuitously converge within the same tumour.
In contrast, both the combination and conversion theories propose that these components stem from a shared precursor, through divergent and metaplastic differentiation pathways, respectively. Recent findings lend support to the concept of a monoclonal origin for UCS, as inferred from marker expression and genetic alterations. Nonetheless, the predominant role of either divergence or metaplasticity in the development of UCS remains unclear [ 5 , 6 ]. The utilization of targeted next-generation sequencing (NGS) and whole exome sequencing (WES) has significantly enhanced our understanding of the molecular landscape of UCS. While some variability exists across studies, the most frequently observed alterations occur in TP53, PIK3CA, PTEN, FBXW7, KRAS, PPP2R1A, and CCNE1 (amplification) [ 6–9 ]. Through the application of RNA-sequencing and DNA methylation analysis, new insights have emerged into the central role played by epithelial-mesenchymal transition (EMT) in the differentiation process leading to the development of the sarcomatous phenotype [ 9–11 ]. However, most of these studies did not undertake separate assessments of the carcinoma and sarcoma components, and predominantly relied on targeted sequencing, which may miss specific alterations and the presence of intra-component heterogeneity. A recent study by Sertier et al. supports the clonal evolution of UCS and reported a comprehensive molecular landscape of UCS using RNA sequencing (RNA-seq), whole-genome sequencing (WGS), and DNA methylation profiling on macro-dissected carcinoma and sarcoma components. WGS unveiled shared mutational signatures and exhibited a high level of concordance in copy number alterations (CNA), single nucleotide variants (SNV), and structural variants (SV) across both carcinoma and sarcoma components.
Complementary RNA-seq and methylation analyses highlighted the role of EMT in driving the differential phenotypic variation observed between the epithelial and mesenchymal components. Specifically, the carcinoma component demonstrated overexpression of genes associated with the epithelial phenotype, whereas the sarcomatous components exhibited upregulation of genes linked to the mesenchymal phenotype and extracellular matrix (ECM) remodeling. Additionally, a distinctive methylation pattern emerged in the sarcomatous components, characterized by hypermethylation of miR200, a microRNA associated with the epithelial phenotype, and hypomethylation of genes related to ECM remodeling [ 12 ]. While much of the existing evidence supporting a monoclonal origin is -omics based, other factors may contribute to the adverse prognosis associated with UCS. Notably, the immune elements within the tumour microenvironment (TME) have garnered increasing attention for their potential to predict patient outcomes and guide immune checkpoint blockade therapies. Therefore, deciphering the immune microenvironment (IM) within the epithelial and mesenchymal components is critical, as it may refine UCS management [ 13 , 14 ], yet it has not been well described. In addition, advances in artificial intelligence (AI) and machine learning tools in digital pathology have enabled the exploration of variations in morphological characteristics. Given the considerable heterogeneity observed in the composition and proportions of the epithelial and mesenchymal components, accurately assessing this diversity may enhance prognostication and, ultimately, improve patient care [ 15 ]. In this study, we present a comprehensive analysis of UCS samples utilizing WGS, DNA methylation profiling, and RNA-seq to elucidate the heterogeneity in the biology of UCS.
In addition, we employed a multiplexed immunohistochemistry (mIHC) platform to examine the presence and spatial relationships of tumour and immune cells in both UCS components, shedding light on the immune microenvironment of UCS and its potential roles in tumourigenesis. We also introduce an AI tool to map and visualize intra- and inter-tumoural heterogeneity within UCS. Methodology Patient samples were collected from a cohort of patients diagnosed with UCS, confirmed by an expert gynecology pathologist, and followed at Princess Margaret Cancer Centre, Toronto, Canada. The tissue samples used in this study were collected from patients between 2007 and 2022. Suitable formalin-fixed paraffin-embedded (FFPE) tumour specimens and matched blood samples were selected based on central pathology review and rigorous quality control processes. These samples were obtained from patients enrolled in translational studies, including OCTANE (NCT02906943) and VENUS (NCT03420118). Ethical approval for this study was obtained from the University Health Network Research Ethics Board (UHN REB) prior to the initiation of any study-related procedures. This research project received grant funding support from the Canadian Cancer Society. Procedures The clinical, pathological, and follow-up data were retrieved from patient records. The archival FFPE tumour samples were from the initial diagnosis, and blood was collected at the time of patient enrollment. Tumour demarcation from adjacent normal tissue was carried out by the expert gynecological pathologist. Each histological component of the tumor, including the separation of the carcinoma (epithelial) and sarcoma (mesenchymal) components of UCS, was meticulously delineated based on morphological assessment of H&E slides and IHC. The pathologist annotated regions that were clearly identified as carcinoma or sarcoma, while normal tissue was selected from normal endometrium/endocervical epithelium and normal myometrium.
Due to the heterogeneity of UCS, cases with intermixed histological components were also included, provided the tissue could still be confidently annotated. For each case, one representative slide was selected from the previously bio-banked FFPE tumor tissue. All annotations were performed on the tissue available on each slide, with only areas exhibiting definitive morphologic criteria (carcinoma or sarcoma) being selected; suspicious or non-definitive tumor areas were excluded from annotation. Laser capture microdissection (LCM) was employed to isolate the distinct histological compartments (epithelial and mesenchymal), while adjacent normal tissue was also included for comparison. For nucleic acid extraction, approximately ten 10-µm tissue sections were used, with a minimum of 100,000 cells captured per region of interest. The distinct areas of sarcoma and carcinoma were identified by the pathologist on representative H&E-stained slides, and these served as guides for dissection. Depending on the size of the regions, either macrodissection with a scalpel (for larger regions) or LCM was used. For LCM, tissue sections were mounted on PEN membrane slides (Carl Zeiss MicroImaging) and stained with Cresyl Violet (MilliporeSigma) for visualization. LCM was then performed using the PALM LMPC device (Carl Zeiss MicroImaging), allowing for precise isolation of the sarcoma and carcinoma compartments. Subsequently, each histological compartment, along with adjacent normal tissue, was processed independently. For each case, FFPE archival blocks or up to 10 unstained slides were available. DNA and RNA were isolated from each compartment, carcinoma (C) and sarcoma (S), with either blood or adjacent normal tissue used as control. Nucleic acid samples meeting the quality control criteria based on bioanalyzer assessment were utilized for downstream genomic, transcriptomic, and methylomic characterization.
Whole genome sequencing (WGS), transcriptome sequencing (RNA-Seq), and enzymatic methylation sequencing (EM-Seq) were performed using DNA or RNA from each sample. HAVOC (Histomic Atlases of Variation of Cancer), a whole slide imaging (WSI) workflow that automates region-of-interest selection using an unsupervised image feature-based clustering pipeline, was also applied [ 16 ]. Detailed methods and additional bioinformatics analyses are elaborated in the supplementary material. Multiplex immunofluorescence Tissue sections of 5 µm were used for mIHC. A total of two slides were obtained per patient: one for panel 1 and one for panel 2. Each panel contained DAPI and antibodies for 6 markers: [ 1 ] pan-cytokeratin (CK), CD3, CD8, Foxp3, CD68, CD20 and DAPI; [ 2 ] pan-cytokeratin, CD3, CD68, CD163, PD-L1 and DAPI, using the OPAL 7-color IHC kit (catalog #: NEL811001KT; Akoya Biosciences, Marlborough, MA, USA) following the manufacturer's protocol. Sequential staining was done on an intelliPATH FLX auto stainer (BioCare), with the addition of each antibody followed by fluorescent labeling with OPAL dyes. Stained slides were scanned with the Vectra 3 imaging system (Akoya Biosciences). To compare the immune cell composition between the carcinoma and sarcoma components, a pathologist identified non-necrotic regions of each component on the same slide. Images were analyzed in inForm software (Akoya Biosciences; v2.2.0). Pathologist-defined carcinoma regions were segmented into stromal (CK-negative) and tumour (CK-positive) areas. Total immune cell density without tissue segmentation was also calculated for both the carcinoma and sarcoma compartments. Stromal and tumor areas were not segmented for sarcoma regions due to the lack of CK expression in sarcoma tumor cells. Immune cell density was calculated as cell count per tissue area.
Immune cells were defined based on marker expression: CD4 T cells = CD3+CD8-, CD8 T cells = CD3+CD8+, Treg cells = CD3+CD8-Foxp3+, B cells = CD3-CD20+, tumour-associated macrophages (TAMs) = CD68+, TAMs expressing CD163 = CD68+CD163+, TAMs expressing PD-L1 = CD68+PDL1+, PD-L1+ cells = PDL1+. mIHC component images were exported from inForm software to generate figures using QuPath software. Statistics and data visualisation Statistical analyses and visualisations were performed in the R statistical environment (v4.1.0). Due to low sample sizes, non-parametric Kruskal-Wallis tests were used to contrast continuous values (such as mutation signature weights or log 2 (ratio) values for copy-number data), while proportions tests were used to compare binary variables (presence/absence of a mutation across sample groups). Visualisations were generated using the BPG package (v6.0.3) [ 17 ], with lattice (v0.20–44) and latticeExtra (v0.6–29) [ 18 ]. Results A total of 70 UCS specimens available at the Princess Margaret Cancer Centre were considered for inclusion in this study, and 26 of these cases were eligible for analyses, as the remaining 44 specimens lacked carcinoma or sarcoma regions in the available material. Supplementary Figure 1 presents the CONSORT diagram detailing the number of specimens included in the study, from initial screening through analysis. Patients had a median age of 69.5 years (interquartile range [IQR] 61.5–75.4) at diagnosis. Baseline tumour characteristics collected at the time of diagnosis are summarized in Supplementary Table 1. Most patients had advanced-stage disease, with 62 % diagnosed at stage III/IV, and the carcinoma component consisted predominantly of serous histology (62 % of cases), while endometrioid (19 %) and mixed histology (19 %) were also present. The sarcoma component displayed notable heterogeneity, with 62 % of cases classified as heterologous, 27 % as homologous, and 12 % remaining undefined.
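The marker-based phenotype rules and the cells-per-area density calculation described in the methods can be sketched as a small rule-based classifier. This is a minimal, hypothetical illustration: the dict-based cell representation and the function names are not the authors' inForm/QuPath pipeline, and only a subset of the panel's phenotypes is shown.

```python
def phenotype(cell):
    """Map mIHC marker calls to the phenotype labels used in the text.

    `cell` is a dict of marker -> bool; checking CD8 before Foxp3 encodes
    the CD3+CD8- requirement in the Treg definition (CD3+CD8-Foxp3+).
    """
    if cell.get("CD3"):
        if cell.get("CD8"):
            return "CD8 T cell"        # CD3+CD8+
        if cell.get("Foxp3"):
            return "Treg"              # CD3+CD8-Foxp3+
        return "CD4 T cell"            # CD3+CD8-
    if cell.get("CD20"):
        return "B cell"                # CD3-CD20+
    if cell.get("CD68"):
        return "TAM"                   # CD68+
    return "other"

def densities(cells, area_mm2):
    """Immune cell density = cell count per tissue area (cells/mm^2)."""
    counts = {}
    for c in cells:
        label = phenotype(c)
        counts[label] = counts.get(label, 0) + 1
    return {label: n / area_mm2 for label, n in counts.items()}

# Toy example: five cells in a 0.5 mm^2 region -> 2.0 cells/mm^2 each.
cells = [
    {"CD3": True, "CD8": True},        # CD8 T cell
    {"CD3": True, "Foxp3": True},      # Treg
    {"CD3": True},                     # CD4 T cell
    {"CD20": True},                    # B cell
    {"CD68": True},                    # TAM
]
print(densities(cells, area_mm2=0.5))
```

In practice, such gating is applied per segmented compartment (e.g., CK-positive tumour vs. CK-negative stroma) so that densities can be compared across regions.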
MMR (mismatch repair) assessment via immunohistochemistry (IHC) was available in 22 cases and was identified as proficient in all tumours. A significant portion of UCS patients (70 %) experienced relapse. For the 26 patients included in this study, at a median follow-up of 31 months, the median overall survival (OS) was 37 months (IQR 22.03-NR), with a five-year OS rate of 39 %. The genomic landscape of UCS exhibits high inter-patient heterogeneity WGS and EM-seq were performed on 18 samples from 9 patients, with each patient providing two samples—one from the carcinoma (C) and one from the sarcoma (S) compartment. RNA-seq was performed on 10 samples from 5 patients due to quality failure of the remaining samples, possibly owing to the age of the FFPE material. Tumour mutation burden (TMB) was typically low across our cohort (median = 0.97 coding mutations per megabase), and we found no evidence of defective mismatch repair in any of our samples (median = 2.7 % of microsatellite sites were altered). Three patients demonstrated mutation signatures attributed to heightened APOBEC (apolipoprotein B mRNA editing catalytic polypeptide-like) activity ( Fig. 1 A ), including the associated local hypermutation events, with these patients showing significantly elevated TMB (>3.2 SNVs/Mb). Across the cohort, we found no mutations in either BRCA1 or BRCA2, and estimates of homologous recombination deficiency (HRD) based on combinations of point mutations and/or larger copy-number or structural events were inconsistent. We therefore expanded our search for potential driver mechanisms. Among our samples, we observed extensive heterogeneity across patients and, conversely, remarkable similarities between C and S regions within each patient. TP53 was mutated in 17/18 samples, with the same variant detected at similar frequencies across C and S components (the final sample had coverage below our QC threshold but did show evidence of the TP53 variant observed in the matched C component). 
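The TMB arithmetic behind these figures is simply coding mutations divided by the size of the coding territory in megabases. A sketch, assuming a ~34 Mb coding footprint (the authors' exact denominator is not stated):

```python
# Sketch of the TMB arithmetic: coding mutations per megabase of coding
# territory. The ~34 Mb coding footprint is an assumption, not the paper's
# stated denominator.

CODING_MB = 34.0

def tmb(n_coding_mutations, coding_mb=CODING_MB):
    """Tumour mutation burden in mutations per megabase."""
    return n_coding_mutations / coding_mb

print(round(tmb(33), 2))  # 0.97, matching the cohort median
print(tmb(120) > 3.2)     # True: would exceed the APOBEC-associated threshold
```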
Beyond this, few recurrent driver events were detected. PIK3CA and PPP2R1A had missense mutations in three (33 %) and two (22 %) patients, respectively, while coding mutations in RB1, FGFR2, SPOP, RUNX1, FBXW7, NSD2 and SOS1 were each found within a single patient. All coding mutations were found at similar frequencies across C and S components within a patient ( Fig. 1 A ). Given the scarcity of cancer-driver events detected among small mutations, we next examined the copy-number landscape of our cohort ( Supplementary Figure 2A ). Tumour cellularity ranged from 25–93 % across the cohort and the percent of the genome affected by copy-number (CN) changes (PGA) ranged from 6.5 % to 43.5 %. Most samples were deemed polyploid, with numerous recurrent amplifications detected across the cohort, including high-level gains (>4 copies) of MYC (67 % of samples), PIK3CA (61 %), CCNE1 (56 %), AKT2 (44 %) and SMARCA4 (39 %) ( Fig. 1 B ). We then related these CN changes to a set of CN signatures developed for ovarian cancers [ 19 ] and found an enrichment in all samples among events attributed to signature 6 (focal amplification due to failure of cell cycle control mechanisms; Supplementary Figure 2B ). Regions of carcinoma and sarcoma demonstrate few genomic differences We next sought to determine if the C and S compartments demonstrated differences in mutational activity. There was no difference in the overall number of variants detected between regions (paired Wilcoxon signed rank test; p = 0.25), and variant frequencies were tightly correlated across regions ( Supplementary Figure 3A-C ). Similarly, we found no difference in TMB between tumour regions (paired Wilcoxon signed rank test; p = 0.62; median = 1.1 and 0.9 in carcinoma and sarcoma respectively; Supplementary Figure 3D ). 
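PGA can be sketched as the fraction of the assayed genome covered by segments whose copy number deviates from the sample ploidy. The segment tuples and values below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Sketch of percent genome altered (PGA): fraction of the genome covered by
# segments whose copy number deviates from the sample ploidy. The segment
# format (chrom, start, end, copy_number) and values are hypothetical.

def pga(segments, ploidy, genome_bp):
    altered_bp = sum(end - start for _chrom, start, end, cn in segments
                     if cn != ploidy)
    return 100.0 * altered_bp / genome_bp

segments = [
    ("chr8", 0, 600_000_000, 6),   # high-level gain (>4 copies)
    ("chr3", 0, 300_000_000, 4),   # neutral in a tetraploid sample
    ("chr19", 0, 100_000_000, 2),  # loss relative to ploidy
]
print(round(pga(segments, ploidy=4, genome_bp=3_000_000_000), 1))  # 23.3
```

The illustrative result (23.3 %) happens to fall within the cohort's reported 6.5 %–43.5 % range.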
Further, we found no statistical difference between regions of C and S for estimates of tumour purity ( p = 1 ), ploidy ( p = 0.54), PGA (median = 29 % and 22 % for C and S respectively; p = 0.57; Supplementary Figure 3E ) or contribution of any of the CN signatures ( Supplementary Figure 2B ) [ 19 ]. Transcriptomic and methylomic differences Given the high degree of similarity among the genomic landscapes of C and S samples within each patient, we further examined the methylation profiles of our samples, as well as the transcriptomic profiles of a subset of samples (C and S regions from 5 patients). Methylation profiles were well correlated across the cohort ( Supplementary Figure 4A ): normal samples, including both blood and tumour-adjacent tissue, were highly similar, while tumour samples were more heterogeneous (though matched C and S samples were more similar within patients). These profiles were consistent with UCS in TCGA ( Supplementary Figure 4B ). Overall, we observed a global decrease in methylation among C and S regions relative to matched normal samples (paired Wilcoxon signed rank test of average methylation levels across ∼27 million CpG sites; p = 0.004 for both C:N and S:N comparisons; Fig. 2 A ). Differential methylation analyses identified 18,077 and 19,839 differentially methylated regions (DMRs, areas with aberrant methylation) among C and S regions respectively (compared to normal; using a multi-factor model to account for patient differences; Supplementary Figure 4C ), accounting for only 0.19 % and 0.22 % of all genic sequences respectively. Importantly, we found only 281 DMRs (accounting for 0.0015 % of genic regions) between C and S regions ( Fig. 2 B ). 
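The paired C-versus-S comparisons used throughout this section reduce to a Wilcoxon signed-rank test on matched per-patient values. A sketch with invented PGA percentages (the study's statistics were run in R; SciPy is used here purely for illustration):

```python
# Sketch of the paired carcinoma-vs-sarcoma comparison: a Wilcoxon
# signed-rank test on matched per-patient values. The PGA percentages
# below are invented, not the study's data.

from scipy.stats import wilcoxon

pga_carcinoma = [29.0, 31.1, 18.2, 40.0, 25.4, 12.8, 35.0, 22.6, 28.9]
pga_sarcoma   = [22.0, 30.0, 20.5, 39.4, 24.0, 14.4, 33.1, 23.8, 26.4]

stat, p = wilcoxon(pga_carcinoma, pga_sarcoma)
print(p > 0.05)  # True here: no significant paired difference
```

The signed-rank test pairs each patient's two compartments, so inter-patient heterogeneity does not swamp the within-patient contrast.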
Looking specifically at gene promoters, these include 3095, 2993, and 47 DMRs for C, S and C vs S respectively (accounting for 7027, 6228 and 152 distinct transcripts). Similarly, differential expression analyses identified 175 genes that were preferentially expressed in regions of carcinoma and 101 genes with increased expression in regions of sarcoma ( Supplementary Figure 5 ). These 276 genes were used for pathway enrichment analyses ( Fig. 3 D ). We found an enrichment of genes involved in xenobiotic metabolism among C regions while, as in previous studies [ 12 ], genes overexpressed in S regions were enriched for the epithelial to mesenchymal transition pathway ( Fig. 3 E-F ). Interestingly, we did not find any accompanying enrichment of genes with promoter hyper- or hypomethylation in either group. Among genes with differential RNA expression between C and S, few had corresponding differential methylation of their promoter regions. One gene ( SYDE1 ) had increased RNA expression in S regions with a corresponding increase in promoter methylation among C regions (25 % meCpG in C vs 11 % in S and 5 % in normal samples). Alternatively, RBM47 had increased RNA expression with decreased promoter methylation in C regions (49 %, 77 % and 64 % in C, S and normal samples respectively). Notably, we did not detect any promoter-level methylation differences between C and S compartments in MLH1 or BRCA1 as described previously [ 12 ]. Multiplexed immunohistochemistry (IHC) Multiplexed immunohistochemistry (IHC) was performed to examine the presence and spatial relationships of tumour and immune cells in both UCS components. 21 samples were analyzed, as annotation could not be completed for 4 slides: two slides had insufficient tissue on the charged slides, and two contained areas that could not be adequately annotated due to lack of clear tumour delineation. 
Due to immense heterogeneity in the histologic type of sarcoma and its spatial relation with respect to carcinoma, it was difficult to clearly demarcate the two components in all cases. Carcinoma components frequently showed stromal and intratumoural infiltration with T regulatory cells (Tregs), CD4+ T cells, CD8+ T cells, and tumour-associated macrophages (TAMs), with significantly higher stromal infiltration of Tregs, TAMs, and PDL1+ cells ( Fig. 3 ). Carcinoma and sarcoma components had similar infiltration of T cells, T regulatory cells and B cells ( Fig. 4 A-D ). However, sarcoma regions had a higher density of TAMs ( Fig. 4 E, I ). The density of TAMs expressing CD163, an M2-like marker, was similar between the two components ( Fig. 4 F ). On the other hand, sarcoma had a higher infiltration of TAMs expressing PDL1 ( Fig. 4 G, J ). In addition to TAMs, other cells, such as tumour cells and fibroblasts, can also express PDL1. Total expression of PDL1 in carcinoma and sarcoma was also evaluated. Sarcoma showed a higher, although not statistically significant, density of total PDL1+ cells ( p = 0.0542, Wilcoxon test) ( Fig. 4 H ). Computational morphologic analysis Despite their uniform classification as UCS, microscopic review of included cases revealed substantial histomorphologic heterogeneity both within and across cases. Computational morphologic analysis was completed on 29 slides from 26 cases; two failed due to insufficient tissue on the charged slide. Some cases showed geographically distinct and variable gland-forming and sarcomatoid components, while other cases had a more intermixed or poorly differentiated pattern. Even these major patterns showed a qualitatively high degree of variation between cases. To visualize this heterogeneity across the cohort, we delineated objective areas of histomorphologic variation using a deep learning image feature-based clustering approach ( Fig. 5 A ). 
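As a toy illustration of the unsupervised, image feature-based clustering idea (the actual pipeline, HAVOC, clusters deep-network features extracted from WSI patches; here the "features" are synthetic 2-D points standing in for those embeddings):

```python
# Toy illustration of unsupervised feature-based clustering: synthetic 2-D
# "patch features" standing in for deep-network embeddings of WSI tiles.
# Well-separated morphology groups fall into distinct clusters without labels.

import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
patches = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.1, size=(50, 2)),  # "carcinoma-like"
    rng.normal(loc=[5.0, 5.0], scale=0.1, size=(50, 2)),  # "sarcoma-like"
])

# k-means with k-means++ seeding; labels assign each patch to a cluster.
centroids, labels = kmeans2(patches, 2, minit="++", seed=0)
print(len(set(labels[:50])), len(set(labels[50:])))
```

In the real workflow, cluster maps are then projected back onto slide coordinates to delineate regions of morphologic variation.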
To visualize the level of heterogeneity compared to surrounding non-neoplastic tissue types, image patches belonging to relevant neoplastic and common surrounding non-neoplastic elements (e.g. fibrous tissue, normal uterine parenchyma) were labeled, aggregated and visualized using UMAP ( Fig. 5 B ). Unlike many of the non-neoplastic elements that often intermixed across cases on UMAP (e.g. fibrous tissue, necrosis, hemorrhagic regions), independent carcinosarcoma regions tended to form distinct non-overlapping clusters. Discussion In this study, we performed a comprehensive set of genomic, epigenomic, and transcriptomic analyses to explore the molecular characteristics and tumour immune microenvironment (TME) of UCS. By combining WGS, RNA sequencing, and DNA methylation profiling, we sought to delineate key differences between the carcinoma (C) and sarcoma (S) components of UCS, thereby expanding our molecular understanding of UCS. The addition of our in-depth examination of the immune microenvironment using mIHC, alongside a deep learning-based approach for mapping morphological and intra-tumoral heterogeneity, has not been described previously and further expands the knowledge base of this rare tumour type. Our results highlight critical molecular insights and suggest that the immune microenvironment may play a pivotal role in shaping UCS biology and therapeutic response. Histologically, the carcinoma regions of UCS predominantly exhibited serous histology in 80 % of cases, while the sarcoma regions demonstrated greater heterogeneity. Molecularly, we observed few differences between C and S components; however, our analysis confirmed the high degree of inter-case variation, with distinct non-overlapping clusters of carcinoma and sarcoma regions identified on UMAP analysis. These findings highlight the morphological complexity and heterogeneity of UCS, which is perhaps not captured by standard -omics based analyses. 
Genomic analysis revealed recurrent mutations in TP53, PIK3CA , and PPP2R1A , and these occurred at similar frequencies between the C and S regions. While we did not detect the copy-number alterations in PTEN, CDKN2A, HDAC2, IRS2 and KMT2B previously reported [ 9 , 12 ], amplifications in MYC, PIK3CA, CCNE1 , and AKT2 were frequent; again, no differences in CNA profiles were found between the C and S regions. Transcriptomic analysis revealed preferential expression of genes associated with EMT in the sarcoma regions, supporting the hypothesis that UCS originates from a common carcinoma precursor that undergoes EMT, resulting in a sarcoma-like transformation [ 9–11 ]. EMT, which induces the transformation of epithelial-like tumour cells into more invasive mesenchymal phenotypes, has been associated with poor prognosis in other cancers, including high-grade serous ovarian cancer [ 20 ]. EMT enhances the motility and invasiveness of malignant cells, leading to tumour dissemination and vascular infiltration [ 21 , 22 ]. Our findings suggest that this process plays a key role in the aggressive behavior of UCS, promoting tumour dissemination and resistance to therapy. In addition, the carcinoma regions exhibited enrichment of genes involved in xenobiotic metabolism, which may contribute to the chemoresistance observed in UCS [ 23–25 ]. Differential methylation analyses revealed global decreases in methylation across both C and S regions compared to normal tissue, consistent with the epigenetic dysregulation commonly observed in cancers [ 26 ]. The minimal differences in methylation status between the two components suggest that methylation may not be the primary determinant of the aggressive phenotype of the sarcoma component. 
Based on the hypothesis that the sarcomatous regions arise through EMT from an originating carcinoma lesion, this minimal methylation divergence may indicate that other epigenetic or microenvironmental mechanisms are involved in driving this phenotypic shift. Potential contributors could include histone modifications, non-coding RNA-mediated regulation, or chromatin remodeling, which are known to influence gene expression independently of DNA methylation [ 27 , 28 ]. Beyond validating the carcinogenesis of UCS, this study highlights potential avenues for therapeutic development in UCS, including targeting EMT, the PI3K/Akt/mTOR pathway, and cell cycle checkpoints. Strategies to target EMT include blocking upstream extracellular signaling pathways, inhibiting molecular drivers of EMT, targeting mesenchymal cells, or inhibiting mesenchymal-epithelial transition (MET). The transformative role of TGF-β in inducing EMT offers a therapeutic opportunity for drug development. The TGF-βR1 inhibitor galunisertib is currently under evaluation in a trial for ovarian carcinosarcoma and UCS (NCT03206177) [ 29 ]. Eribulin, a microtubule polymerization inhibitor, has demonstrated EMT inhibition in breast cancer models and is being investigated in the EPOCH Study (NCT05619913), an international phase II trial exploring eribulin alone and in combination with pembrolizumab for relapsed ovarian carcinosarcoma and UCS cases [ 30 ]. Targeting the PI3K/Akt/mTOR pathway, which is frequently activated in UCS, could inhibit key drivers of both EMT and chemoresistance. The activation of AKT has been shown to promote EMT transcription factors and reduce E-cadherin expression, thereby enhancing tumour cell migration and invasiveness [ 31 , 32 ]. Given that AKT activation also contributes to chemotherapy resistance [ 33 , 34 ], targeting this pathway may offer the dual benefit of reducing tumour progression and overcoming treatment resistance. 
Therefore, clinical trials exploring inhibitors of the PI3K/Akt pathway are especially relevant for UCS patients. The cell cycle inhibitor RP6306 [ 35–37 ], which may target tumours harbouring PPP2R1A and FBXW7 mutations, is currently under investigation (NCT04855656). Mutations in PPP2R1A contribute to tumour development by inactivating the tumour-suppressive activities of PP2A [ 38 ], which regulates various oncogenic pathways including RAS signaling [ 39 ]. FBXW7 plays a pivotal role in regulating cellular growth and functions as a tumour suppressor; mutations in this gene are associated with poor prognosis due to treatment resistance [ 37 , 40 ]. In addition to molecular targets, our study underscores the importance of the immune microenvironment in UCS. Using mIHC, we observed significant differences in immune cell infiltration between the carcinoma and sarcoma regions, with the sarcoma component showing a higher density of TAMs, PD-L1+ cells, and Tregs. These findings suggest that the immune landscape of UCS is both complex and immunosuppressive, with potential implications for immunotherapy. The high presence of TAMs and PD-L1+ cells, in particular, highlights a critical immunosuppressive TME that may be resistant to immune checkpoint inhibition [ 41–43 ]. TAMs create an immunosuppressive environment by promoting the surface expression of PDL1 and PDL2 [ 44 , 45 ], increasing Tregs through IL10 and TGF-beta secretion [ 46 ], promoting VEGF-induced angiogenesis, which creates a selective endothelial barrier [ 47 ], and stimulating tumour cells to secrete indoleamine 2,3-dioxygenase (IDO), which metabolizes tryptophan [ 48 ]. In addition, VEGF binding to its receptor (VEGFR2) encourages Treg cell proliferation and their movement into tumour tissues [ 49 ]. 
This leads to T cell exhaustion, suppressing the proliferation and cytotoxic function of CD8+ T cells, while also upregulating various immune checkpoint signals like PD1, cytotoxic T lymphocyte-associated antigen 4 (CTLA4), lymphocyte activation gene 3 protein (LAG3), and T cell immunoglobulin mucin receptor 3 (TIM3) [ 50 ]. High expression of TIM3, LAG3, and IDO has been associated with resistance to immunotherapy [ 51 ]. Given this immunosuppressive environment, combining PD-1/PD-L1 inhibitors with other immune checkpoint inhibitors or anti-angiogenic therapies may enhance the anti-tumour immune response. Our findings are consistent with ongoing clinical trials exploring combinations of immune checkpoint inhibitors, such as nivolumab, with IDO inhibitors or VEGF inhibitors, which have shown promise in other malignancies, including UCS [ 52–54 ]. A Phase II trial of BMS-986205, an IDO inhibitor, in combination with nivolumab is currently recruiting UCS patients (NCT04106414). Additionally, VEGF inhibitors may reduce TAM and Treg populations, making the TME more permissive to immunotherapy [ 55 , 56 ]. In a Phase II translational study, nivolumab plus cabozantinib showed a signal of activity in patients with recurrent UCS [ 57 ]. Similarly, other studies are currently exploring combinations of immune checkpoint inhibitors with VEGF inhibitors (NCT05147558, NCT05559879, NCT03694262). While our study provides valuable insights into UCS by combining mIHC and deep learning tools to analyze intratumoral heterogeneity and immune profiling, it is limited by the small sample size. High sample attrition due to quality issues may introduce selection bias, as the analyzed cohort might not fully represent the broader UCS population. The identification of an immunosuppressive TME, characterized by TAMs and PD-L1+ cells, particularly in the mesenchymal component, requires further validation in larger cohorts. 
Given that UCS is a rare disease, international collaboration is needed to understand the role of the immune landscape and its potential for therapeutic targeting. Conclusion This study supports the recent evidence of the common origin and clonal evolution of UCS, providing novel insights into the UCS microenvironment. The study underscores the considerable heterogeneity within the sarcoma component and its spatial relationship with carcinoma. It highlights the potential role of EMT as a key contributor to the divergent histology and aggressive behavior of UCS. The presence of an immunosuppressive TME and overexpression of genes associated with xenobiotic metabolism contribute to treatment resistance and point the way to new treatment strategies for patients with UCS. Funding support Canadian Cancer Society , PM Cancer Foundation - Ann Borooah Endometrial Cancer Research Fund , Janet D. Cottrelle Foundation. Institute review board approval The study was approved by the University Health Network Research Ethics Board. The Ethics Board granted a waiver of individual patient consent for this study. Data availability statement The data that support the findings of this study are available on reasonable request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions. CRediT authorship contribution statement Vikas Garg: Writing – review & editing, Writing – original draft, Validation, Software, Methodology, Formal analysis, Data curation. Stephenie D. Prokopec: Writing – review & editing, Validation, Software, Methodology, Formal analysis. Simone C. Stone: Writing – review & editing, Validation, Methodology, Investigation, Formal analysis, Data curation. Sara Pakbaz: Writing – review & editing, Visualization, Validation, Methodology, Formal analysis, Data curation. Min Li Chen: Writing – review & editing, Visualization, Validation, Methodology, Investigation, Formal analysis, Data curation. 
Bernard Lam: Writing – review & editing, Validation, Software, Project administration, Methodology, Investigation, Data curation. Czin Czin Benito: Writing – review & editing, Visualization, Methodology, Data curation. Michelle Mcmullen: Writing – review & editing, Project administration, Methodology, Investigation, Data curation, Conceptualization. Ilinca Lungu: Writing – review & editing, Software, Project administration, Investigation. Samanta Del Rossi: Writing – review & editing, Validation, Software, Data curation. Anthony Msan: Writing – review & editing, Validation, Data curation. Valerie Bowering: Writing – review & editing, Supervision, Resources, Data curation. Valentin Sotov: Writing – review & editing, Validation, Methodology, Formal analysis, Data curation. Christine Tran: Writing – review & editing, Validation, Data curation. Marcus O. Butler: Writing – review & editing, Software, Methodology. Amit M. Oza: Writing – review & editing, Validation, Supervision, Resources, Project administration. Phedias Diamandis: Writing – review & editing, Validation, Supervision, Software, Formal analysis, Data curation. Ben X. Wang: Writing – review & editing, Validation, Supervision, Software, Methodology, Formal analysis, Data curation. Stephanie Lheureux: Writing – review & editing, Writing – original draft, Validation, Supervision, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Conceptualization. Declaration of competing interest The authors declare no conflict of interest. Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.tranon.2025.102506 .
Fundamentals of antifogging strategies, coating techniques and properties of inorganic materials; a comprehensive review
Izzati Fatimah Wahab, A.R. Bushroa, Soon Wee Teck, Taium Tasneem Azmi, M.Z. Ibrahim, J.W. Lee
Fogging of transparent surface is the condensation of water-vapor in the air into small discrete liquid drops on the surface, causing scatters of incident light and create a blurry vision. In recent technology development, coating using superhydrophobic and superhydrophilic materials characteristics have been an attractive strategy to induce antifogging property to minimize the light scattering. Inorganic materials such as TiO2, SiO2, and ZnO have been widely explored for this purpose. In this review, the fundamentals of antifogging strategies and materials choice are covered as well as the different techniques used to prepare inorganic antifogging coatings. Further, this review covers the various testing methods involved for evaluation of antifogging behavior and other related properties. Additionally, the review includes potential of applying different techniques for the purpose of industrial scale. Towards the end, the optimization and statistical analysis of antifogging coatings using computer-aided techniques are briefly described to highlight effort in this mode of study. Before ending with summary, examples of antifogging potential application are shared to appreciate its benefit.
|
1 Introduction Fogging describes the condensation of humid air into small discrete water droplets on optical surfaces (transparent and reflective solid surfaces) as a result of the temperature difference between the surface and the humid environment. These water droplets scatter incident light, which limits the transmission or reflection of incident light on the solid surface and creates a blurry vision, Fig. 1 . To understand the phenomenon of fogging, one must understand the principle behind condensation. Air is composed of atmospheric gases and water vapor, and there is a temperature limit at which the air can hold the water vapor. The temperature at which the water vapor is saturated in the air is called the dew point [ 1 ]. At the dew point, the condensation rate of water vapor equals the evaporation rate. When a solid surface is cold enough to reduce the surrounding air temperature below the dew point, the water droplets condense faster than they evaporate. Thus, they accumulate on the surface and form fog. Fogging is influenced by several factors: temperature, humidity level, and airflow. Fog may form on optical surfaces such as bathroom mirrors, eyeglass lenses, swimming goggles, binoculars, and camera lenses. Fogging is not only a nuisance but can also raise safety concerns and other issues in various applications. For instance, fogging can negatively affect vision during endoscopic surgery and increase the risk of procedure failure [ 2 ]. Fogging is also closely related to road safety when it forms on the windshield of a moving vehicle or the helmet visor of a motorcyclist. It also affects crop yield, as fogging on greenhouse cladding materials reduces the sunlight passing through. In addition, fogging reduces the efficiency of solar panels. Therefore, the elimination or reduction of the fogging phenomenon has become of great interest for several optical applications. 
For many years, the problem of fogging has been addressed using two approaches: changing environmental parameters or modifying the surface properties of the materials involved [ 3 ]. The environmental parameters can be changed by manipulating the temperature, airflow, and relative humidity of the surrounding air to facilitate the evaporation of the condensed water droplets [ 4 ]. The second approach is to modify the surface (physically or chemically) or to deposit a coating layer. Compared to the first strategy, the modification of the material is more appealing because it is easier to manipulate, is more cost-effective, and can achieve longer-lasting antifogging performance. This strategy is based on altering the surface chemistry and/or surface topography to achieve the desired wettability [ 3 ]. The degree of fogging depends on the morphology of the water drops condensed on the surface. This morphology is normally characterized by contact angle measurement, which will be discussed further in Section 2 . There is agreement among researchers that an anti-fogging effect can be achieved when the liquid drop either spreads out to form a thin film (superhydrophilicity) or forms a full spherical drop (superhydrophobicity) [ 5 , 6 ]. In the case of superhydrophilicity, the thin water film is a transparent layer that does not refract the incident light, whereas superhydrophobicity allows the water drops to roll off the surface, which is inapplicable to horizontal surfaces. Fig. 1 summarizes the topics covered in this article. 2 Fundamental aspects 2.1 Surface tension and energy Surface tension is responsible for the morphology of water droplets. The water molecules in a droplet possess cohesive forces that act between the molecules, as illustrated in Fig. 2 . A molecule in the bulk experiences cohesive forces in all directions, and these forces average out to zero, so there is no net force on the molecule. 
Molecules at the water droplet surface, in contrast, experience half the amount of cohesive interaction, which acts only downward and sideways. As a result, the cohesive forces within a water drop pull the water molecules inward, minimizing the number of surface molecules and hence the exposed surface area. The spherical shape has the smallest surface-area-to-volume ratio of any shape, because the corners and edges of other shapes add surface area. The difference between the forces experienced by a molecule on the surface and one in the bulk droplet gives rise to the surface tension of the water [ 7 ]. The interaction between liquid and solid mediums is called the adhesive force. The behavior of the droplet depends on its surface tension and the adhesive force, which is associated with the surface energy of the solid material. Like the molecules in a water droplet, an atom in a bulk material possesses a balanced set of bonds or interactions, while an atom at the surface of a solid has an unbalanced set of interactions and thus unrealized bonding energy. Surface energy is a measure of this excess energy existing at the material surface, compared to its bulk. Thus, a material with low surface energy (and low adhesive force) is regarded as hydrophobic, while a material with high surface energy is regarded as hydrophilic. On a low-surface-energy solid, the water molecules are more strongly attracted to their own kind and tend to retain the droplet's original shape by increasing the interface curvature. This is called wettability, which relates to many branches of study [ 8 ]. 2.2 Wettability Wettability is the ability of a liquid to stay in contact with a solid surrounded by another medium (liquid or gas). As shown in Fig. 3 , three types of surface energy exist because of the interactions between phases: liquid/vapor (γ_LV), solid/liquid (γ_SL), and solid/vapor (γ_SV). 
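The claim in Section 2.1 that a sphere minimizes exposed area for a fixed volume is easy to check numerically:

```python
# Numerical check that a sphere exposes less area than a cube of equal
# volume (so surface tension favours spherical droplets).

import math

V = 1.0  # arbitrary droplet volume

r = (3 * V / (4 * math.pi)) ** (1 / 3)   # sphere: V = (4/3)*pi*r^3
area_sphere = 4 * math.pi * r ** 2

a = V ** (1 / 3)                          # cube: V = a^3
area_cube = 6 * a ** 2

print(round(area_sphere, 3), round(area_cube, 3))  # 4.836 6.0
```

For unit volume the sphere exposes about 19 % less area than the cube, and the gap widens for shapes with sharper corners.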
The interactions at these interfaces establish an equilibrium contact angle, which is described by Young's equation, Eq. (1) [ 9 ]. This contact angle conveniently expresses the wetting characteristics of a surface. (1) γ_LV cos θ_y = γ_SV − γ_SL where γ_LV, γ_SV, and γ_SL are the surface energies of the respective interfaces and θ_y is the contact angle. Eq. (1) describes the contact angle set by the mechanical equilibrium of the surface energies acting at the liquid/solid interface. The wettability of a surface is divided into two groups according to the contact angle: hydrophilic surfaces (contact angle below 90°) and hydrophobic surfaces (above 90°). The antifogging effect can be obtained when the surface is extremely hydrophilic or extremely hydrophobic, described as superhydrophilic and superhydrophobic, respectively. A surface is considered superhydrophilic when the contact angle is below 10°, and superhydrophobic when the contact angle is between 150° and 180°. The condensed water droplets are attracted to the high surface energy of a superhydrophilic surface and spread out, forming a thin film. In this case, the incident light is not scattered by the water film, and the vision stays clear. Because this water film is very thin, it evaporates easily, and hence the antifogging property is achieved. On a superhydrophobic surface, a water droplet retains its spherical shape and rolls off the low-energy surface. Thus, the condensed droplets do not stay on the solid surface, eliminating the fogging effect. Young's equation assumes that the surface is chemically homogeneous and topographically smooth, which is usually not true for real surfaces, so it does not fully describe their wettability. To render the surface with a wettability preference, two factors must be considered: the surface chemistry and the topography. 
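As a worked example of Eq. (1), with hypothetical interfacial energies in mN/m (72.8 mN/m is the surface tension of water at about 20 °C):

```python
# Worked example of Eq. (1): theta_y = arccos((gamma_SV - gamma_SL) / gamma_LV).
# The solid-side interfacial energies are hypothetical illustration values.

import math

def young_angle(g_sv, g_sl, g_lv):
    """Young contact angle in degrees from the three interfacial energies."""
    return math.degrees(math.acos((g_sv - g_sl) / g_lv))

# Higher-energy (hydrophilic-leaning) surface:
print(round(young_angle(g_sv=70.0, g_sl=20.0, g_lv=72.8), 1))  # 46.6
# Low-energy surface: cos(theta) < 0, so theta > 90 deg (hydrophobic):
print(round(young_angle(g_sv=20.0, g_sl=40.0, g_lv=72.8), 1))  # ~105.9
```

The sign of γ_SV − γ_SL alone decides which side of 90° the angle falls on, which is why surface energy is the first lever for tuning wettability.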
2.3 Chemistry The wetting behavior of a material surface depends on the quantity of polar groups such as hydroxyl (OH), carboxyl (COOH), ester (COOR), amino (NH2), amide (NHCOR), sulfonic (SO3H), and dihydrogen phosphate (PO4H2) groups. These polar groups impart hydrophilic functionality because they establish hydrogen bonding and related dipole interactions with water molecules [ 10 , 11 ]. Conversely, fewer polar groups result in hydrophobicity because of less hydrogen bonding and the low surface energy of the solid surface. Hydrophilic materials can be inorganic or organic. Organic materials include natural polymers (pullulan, cellulose, alginate, chitin, chitosan) [ 12–16 ] and synthetic polymers (polyacrylate, zwitterionic polymers, polymethacrylate, etc.) [ 17–20 ]. Hydrophilic inorganic materials are ceramic-based materials and metal oxides such as TiO2, SiO2, and ZnO [ 21–26 ]. Inorganic materials have attracted more attention for their photocatalytic properties, higher durability, better mechanical properties, and thermal stability; they are discussed in Section 3 . Chemical modification can thus increase or decrease the polar groups to achieve superhydrophilicity or superhydrophobicity, respectively. The modification of inorganic materials for antifogging coatings aims at inducing the generation of surface vacancies and the reconstruction of bonds with polar groups; strategies for the chemical modification of inorganic coatings are discussed further in Section 4 . 2.4 Topography It is hard to obtain a superhydrophilic or superhydrophobic surface by chemical modification alone. It is widely agreed among scientists that the fabrication of hierarchical structures can significantly affect surface wettability, so the topography of the surface must also be considered to achieve the antifogging property. The relation between wettability and surface roughness is described by the Wenzel and Cassie-Baxter models, Eq. 
(2) [ 27 ]: (2) cos θ_w = R_f cos θ_y, where θ_w is the contact angle measured on a rough surface, θ_y is the contact angle described by Eq. (1), and R_f (usually R_f ≥ 1) is a surface roughness factor, the ratio of the actual solid–liquid contact area to the contact area of a smooth surface. The Wenzel model states that both hydrophilicity and hydrophobicity can be amplified by modifying the surface roughness: if the contact angle of the smooth surface (θ_y) is below 90°, the apparent contact angle (θ_w) decreases as the surface roughness increases, and vice versa. Wenzel's equation is based on the fact that a roughened surface has a larger contact area between the solid surface and the liquid droplet. Fig. 4 (a) shows the Wenzel wetting regime, in which the droplet is in complete contact with the surface and no air is trapped [ 28 ]. In contrast to the fully wetted surface presumed by Wenzel, a Cassie-Baxter droplet lies on top of the texture with air trapped beneath it, as shown in Fig. 4 (b) [ 29 ]. This is known as the Cassie air-trapping state, which imitates the lotus effect. Here the droplet may bounce or roll off the solid surface, as it has little interfacial attachment; this phenomenon explains the antifogging effect of superhydrophobic surfaces. The Cassie air-trapping state is described by Eq. (3) [ 29 ]: (3) cos θ_CA = −1 + φ_SL (cos θ_y + 1), where θ_CA is the Cassie air-trapping apparent contact angle, φ_SL is the fractional area of the solid/liquid interface, and θ_y is the contact angle on a smooth surface. Cassie and Baxter also described the rough surface when water impregnates the air pockets in the cavities, Eq. (4) [ 30 ]: (4) cos θ_CA = 1 + φ_SL (cos θ_y − 1). Eq. (4) describes the Cassie impregnating state, also known as the rose petal effect, Fig. 4 (c). The liquid impregnates the larger-scale textures more readily than the smaller ones, so the adhesion between the liquid and the surface becomes high. 
Thus, the water droplet adheres well to the surface and will not fall off even when the surface is turned upside down [ 30 ]. Therefore, the Cassie impregnating state does not meet the antifogging requirements, as it is characterized by a high contact angle with high contact angle hysteresis, in contrast to the superhydrophobic state (high contact angle with low contact angle hysteresis, explained further in Section 4.1 ). However, this effect can be ignored when the droplet volume is greater than 10 μl, as the weight of the droplet then surpasses the adhesion force [ 31 , 32 ]. 3 Inorganic antifogging coating Antifogging coatings can be achieved through either superhydrophilic or superhydrophobic materials, as previously discussed. During the Gemini program, NASA produced the first antifogging chemical (liquid detergent, deionized water, and an oxygen-compatible fire-resistant oil) for use on astronaut helmet visors (1965–66) [ 33 , 34 ]. Since then, antifogging technology has progressed, particularly with superhydrophilic coatings made from polymers and surfactants. Although these materials are popular for antifogging coatings, they must be reapplied frequently because they dissolve in condensed water, reducing their effectiveness [ 35 , 36 ]. Reticulation agents are frequently added to coating formulations to yield crosslinked polymeric networks after thermal or photo curing, improving the stability and durability of the antifogging property [ 37 ]. Water permeability, solvent resistance, thermal stability, and optical characteristics can be tuned using cross-linking agents that are hydrophilic polymers, inorganic salts, or a combination of both [ 38 ]; the cross-linking density determines these characteristics. For example, to obtain an abrasion-resistant antifogging coating, the degree of crosslinking must be high enough, particularly in the uppermost portion of the layer, which is a key attribute for a long-lasting effect [ 39 ]. 
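As a numerical illustration of the wetting models in Section 2.4, Eqs. (2) and (3) can be sketched as follows (a minimal sketch; function names and the sample inputs are assumptions):

```python
import math

def wenzel_angle(theta_y_deg, r_f):
    """Eq. (2): cos(theta_w) = R_f * cos(theta_y), with roughness factor R_f >= 1."""
    c = max(-1.0, min(1.0, r_f * math.cos(math.radians(theta_y_deg))))
    return math.degrees(math.acos(c))

def cassie_air_trapping_angle(theta_y_deg, phi_sl):
    """Eq. (3): cos(theta_CA) = -1 + phi_SL * (cos(theta_y) + 1),
    where phi_SL is the fractional solid/liquid contact area."""
    c = -1.0 + phi_sl * (math.cos(math.radians(theta_y_deg)) + 1.0)
    return math.degrees(math.acos(c))
```

With a smooth-surface angle of 110° (hydrophobic), Wenzel roughening with R_f = 1.5 raises the apparent angle above 110°, while a hydrophilic 60° surface is driven below 60°; a Cassie air-trapping fraction φ_SL = 0.1 pushes the 110° surface into the superhydrophobic regime (above 150°), consistent with the lotus effect described above.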
Furthermore, a homogeneous crosslinking distribution is critical to avoid water-uptake surface imperfections, which can induce light scattering and affect optical transparency. In addition, the application of surfactants in antifogging coatings results in unsightly water marks caused by condensation [ 40 ]. Inorganic materials are chemical compounds that do not contain carbon; however, elementary carbon (C) such as graphite and diamond, and compounds such as carbon nitride, carbon monoxide, silicon carbide, carbonic acid, and its salts, are classified as inorganic [ 41 ]. Inorganic coating materials have better potential to produce long-lasting coatings than polymeric coating materials. Furthermore, the roughness and thickness of the coatings should be carefully managed to enable the production of multilayers with anti-reflective and self-cleaning properties [ 42–45 ]. Inorganic coatings can be divided into two types based on their nature of wetting: intrinsic wettability and photo-induced wettability. The first includes materials that are naturally hydrophilic, such as silicon dioxide (SiO2), zirconium dioxide (ZrO2), indium tin oxide or ITO (In2O3–SnO2), MgO–Al2O3, and graphene oxide (GO). The second category includes materials that are photo-responsive, such as TiO2, ZnO, and Bi2O3 [ 23 , 24 , 46 ]. 3.1 Intrinsic wettability Most inorganic materials used in antifogging applications are ceramics or metal oxides. Metal oxides are generally hydrophilic because of the metal cations, oxygen anions, and/or hydroxyl groups existing on their surfaces [ 47 ]. Intrinsically hydrophilic materials are not photo-responsive. For example, the superhydrophilicity of a SiO2 coating is attributed to a high concentration of hydroxyl groups (Si–OH) on the surface, together with surface roughness and porosity, which can absorb water [ 48 , 49 ]. 
Because the inherent contact angles of these inorganic materials are often less than 90°, rougher surfaces provide stronger hydrophilic properties. The pore size between the nanoparticles gradually grows as the branch length of dendritic nano-silica increases, increasing the surface roughness. Superhydrophilic behavior and a good antifogging effect have been reported for dendritic colloidal silica with branch lengths of 41, 60, and 109 nm, Fig. 5 [ 50 ]. As a result, the interaction area between the hydrophilic groups and the water droplets increases, and because of the adsorption capacity of the liquid on the surface, water droplets spread more easily over it, Fig. 6 [ 51 ]. Huang et al. [ 52 ] investigated diamond as an optical coating for extreme applications, such as offshore oil exploitation and air force optical windows. They proposed diamond, with its long-term stability, as a replacement for mechanically weak organic coatings and other inorganic coatings. The diamond films became superhydrophilic after exposure to oxygen plasma, reducing the contact angle from 87° to less than 5° and producing antifogging activity. Meanwhile, an ultrathin diamond coating on a quartz slide displayed an oil contact angle of 153°, and the contact angle of dichloromethane droplets on the O2-plasma-treated surface was 157°, demonstrating underwater superoleophobicity and oil self-cleaning. After freezing and steaming tests, the coated samples demonstrated antifogging activity with acceptable transparency: the fog formed on the diamond thin film took 4 s to evaporate, while the uncoated sample took 3.5 min. Zeolites have also been investigated as a material for antifogging coatings in recent studies. Zeolites are crystalline microporous aluminosilicates with a superhydrophilic character that are widely used as water adsorbents. 
Zeolites have a honeycomb structure and a net negative charge, which allow them to absorb liquids and adsorb species through chemical binding, because the external surface is made up of densely ordered silanol groups. Zeolite coatings exhibit a contact angle below 10°, and over time the water contact angle decreases sharply owing to the infiltration of water into the interparticle pores of the coating. A higher degree of crystallinity has been found to increase hydrophilicity and the antifogging properties, and the presence of amorphous silica has been reported to be critical for transparency [ 53 , 54 ]. On the other hand, tuning a metal oxide from hydrophilic to superhydrophobic is challenging; it normally requires additional fabrication steps that lower the surface energy of the inorganic material. For instance, alumina surfaces have been rendered superhydrophobic by grafting a stearic acid layer [ 55 ], by chemical modification with ethyl acetoacetate [ 56 ], and by furnishing them with a water-repellent composite layer consisting of microroughened alumina, chitosan (CHS), and poly[octadecene-alt-(maleic anhydride)] (POMA) [ 57 ]. Approaches to forming superhydrophobic surfaces are discussed further in Section 4.1 . From another perspective, a metal oxide with various crystal faces can present hydrophobic surfaces. This was demonstrated by Zhu et al. [ 58 ], who discovered that the (11̄02) crystal face of α-Al2O3 is intrinsically hydrophobic, with a water contact angle (WCA) near 90°, while the (112̄0), (101̄0), and (0001) crystal faces are intrinsically hydrophilic, with WCAs of less than 65°, Fig. 7 . 3.2 Photo-induced wettability When exposed to ultraviolet light, the materials in this category become superhydrophilic. 
Titanium dioxide (TiO2), zinc oxide (ZnO), and bismuth oxide (Bi2O3) are examples of this category. Titanium dioxide is widely used in a variety of applications, but its superhydrophilicity has drawn the most attention for antifogging purposes. TiO2 has the highest refractive index among common coating films in the visible range and is harder and more stable than other oxides [ 59 ]. Applications of TiO2 include high-index layers for beam splitters, cold mirrors, heat-reflecting mirrors, and antireflection surfaces. Furthermore, its low cost and excellent chemical and thermal stability make it a promising substitute for SiO2 as an antifogging coating. The superhydrophilicity of TiO2 depends mainly on its photocatalytic activity. Fig. 8 shows the mechanism of the photo-responsive activity of inorganic materials such as TiO2 [ 60 , 61 ]. TiO2 is composed of Ti4+ and O2− ions. Upon exposure to UV light, an electron (e−) of TiO2 is excited from the valence band (VB) to the conduction band (CB), leaving a positively charged hole (h+). This reduces Ti4+ cations to Ti3+. Oxygen atoms are expelled, resulting in oxygen vacancies, which are then filled by water molecules, producing adsorbed OH groups. Water can establish hydrogen bonds with these chemisorbed hydroxyl groups. The abundance of OH groups on the TiO2 surface during UV exposure endows the superhydrophilicity. This process can be described by Eqs. (5)–(7) [ 61 ]: (5) TiO2 + UV → h+ + e− (6) Ti4+ + e− → Ti3+ (7) O2− + 2h+ → 1/2 O2 + vacancy. When UV light is absent, hole–electron recombination takes place, causing TiO2 to return to a low-hydrophilicity state. The wide band gap of TiO2 (3.2 eV for anatase and 3.0 eV for rutile) limits its absorption to the UV region, which makes TiO2 and other photo-responsive inorganic materials depend on UV light to perform antifogging functions [ 61 ]. 
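The link between band gap and UV dependence can be illustrated with the common photon-energy shortcut λ(nm) ≈ 1240/E_g(eV). A minimal sketch (function names and the 400 nm visible-onset threshold are assumptions for illustration):

```python
def absorption_edge_nm(band_gap_ev):
    """Approximate absorption-edge wavelength from E = h*c / lambda.
    With E in eV and lambda in nm, h*c is approximately 1239.84 eV*nm."""
    return 1239.84 / band_gap_ev

def edge_in_visible(band_gap_ev, visible_onset_nm=400.0):
    """True if the absorption edge extends past ~400 nm into the visible range."""
    return absorption_edge_nm(band_gap_ev) > visible_onset_nm
```

For anatase (3.2 eV) this gives an edge near 387 nm, i.e., UV-only absorption, while rutile (3.0 eV) gives roughly 413 nm, just reaching the visible onset; narrowing the gap by doping (Section 4.1.1) pushes the edge further into the visible range.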
As a result, creating TiO2-based antifogging coatings for indoor applications is challenging. A growing number of studies have focused on increasing the antifogging performance of TiO2 without UV exposure, while others have focused on broadening its photocatalytic response to visible and near-infrared (near-IR) wavelengths [ 62–64 ]. There are a number of ways to modify the photosensitivity of TiO2 to narrow the band gap and extend the lifetime of the photogenerated electron–hole pairs. These approaches also enable other properties, such as self-cleaning, that make use of the photocatalytic activity of TiO2 [ 26 ]. To form TiO2 composites, various attempts have been made to modify TiO2 using metal ions, non-metal ions, co-doping with two or more foreign ions, and hybridization with carbon materials [ 65 ]; these materials are described in more detail in Section 4.2 . In the case of superhydrophobic coatings, TiO2-based coatings have issues maintaining long-term stability because of their high photosensitivity: the organic superhydrophobic components used with TiO2 are often degraded when exposed to UV light, degrading the superhydrophobic properties. Wang et al. [ 66 ] fabricated a UV-stable coating using roughness-increasing TiO2 aggregates within a hydrophobic polydimethylsiloxane (PDMS) crosslinked network. 4 Strategies to antifogging property 4.1 Superhydrophilic inorganic coating The superhydrophilic approach has attracted greater interest than the superhydrophobic approach for antifogging applications. A superhydrophilic surface spreads water droplets quickly into continuous films that are transparent to visible light. These films have relatively large surface areas and evaporate quickly, leading to antifogging activity [ 37 ]. 
Superhydrophobic surfaces, by contrast, form spherical droplets that may accumulate on the surface and affect visibility unless the surface is inclined to remove them [ 4 ]. Hydrophilic metal oxides can be modified to increase their surface superhydrophilicity by doping, treating, or mixing them with other metal oxides. In general, these changes are made to promote the generation of surface vacancies, the reconstruction of polar-group bonds, and the optimization of surface roughness. Surface roughness is typically modified by introducing hierarchically textured surfaces, porosity, or nanoparticles with specific forms such as nanospheres, nanorods, and nanoplates. Several fabrication techniques used to prepare materials with high roughness are discussed in Section 5 . TiO2 nanotube films with a root-mean-square (RMS) roughness of 5.48 nm and TiO2 sol–gel films with an RMS roughness of 1.28 nm were fabricated as two types of TiO2 coatings on glass substrates with differing surface roughness; the TiO2 nanotubes showed better hydrophilicity and antifogging activity [ 67 ]. Syafiq et al. [ 68 ] fabricated a hydrophobic coating on a glass substrate with a water contact angle (WCA) of 118°, using SiO2 nanoparticles modified with silicone oil to lower the surface energy. Compared with uncoated glass, the coating had better antifogging and self-cleaning properties. Moreover, the coating proved stable even after 20 peeling cycles with scotch tape, and it remained durable after exposure to outdoor weather for 2 months. Graphene oxide (GO) has been spin-coated onto a pre-etched silica glass surface. The roughness of the GO coating formed on the etched surface arises from the overlap and aggregation of individual GO sheets. The GO coating becomes superhydrophilic as the roughness increases, with a static WCA of 3.7°. 
The uncoated substrate fogged rapidly in a freeze test, while the coated glass remained clear, indicating that the GO coating caused practically instantaneous spreading of water droplets and quick evaporation, keeping the glass clear at all times [ 6 ]. It is noted that nanotexturing of silica glass by etching can ensure the formation of such multifunctional surfaces. This coating has been used on a wide range of screens and lenses. As a result, modification of glass and other transparent substrates by coating has become more feasible; however, this approach depends on the type of substrate and faces serious challenges in large-scale applications [ 26 ]. 4.1.1 Doping TiO2 has been extensively researched in order to improve its light absorption over a wide range of wavelengths, particularly visible light, which is desirable for indoor applications. Doping is a common strategy to modify the photocatalysis of TiO2. Dopant ions can trap electrons and/or holes, preventing photogenerated electron–hole pairs from recombining and thereby improving the photoreactivity of TiO2 [ 69 ]. Doping may also result in the formation of mid-gap energy levels; Fig. 9 shows one type of doping aimed at reducing the band gap of a photo-responsive inorganic material. The red shift in the absorption edge is caused by electrons transitioning from the valence band to the dopant level, and then from the dopant level to the conduction band, resulting in a smaller band gap, E_g [ 61 ]. Du et al. investigated rare earth metals such as Nd, Y, and La as dopants in TiO2 films [ 69 ]. Rare earth elements with 4f-electron configurations can form complexes with various Lewis bases (including organic acids, amines, aldehydes, alcohols, and thiols) through the interaction of the functional groups with their orbitals. 
Rare earth ions prevent anatase from transforming to rutile and inhibit anatase grain growth. The shift in the characteristic peak indicates that rare earth ions can cause TiO2 lattice deformation, enhancing its hydrophilicity (the lowest WCA occurred at 0.1 wt.% for Nd–TiO2 and at 0.3 wt.% for Y– and La–TiO2). When the dopant level is excessively increased, the rare earth ions cannot enter the TiO2 lattice and instead cover the surface of TiO2, forming a heterojunction. The valence and conduction bands of the two crystals may then be linked paratactically, and the charge-capture centers become recombination centers. Copper (Cu) is another promising dopant for TiO2. According to Wang et al. [ 70 ], Cu2+ ions do not occupy interstitial sites in the TiO2 lattice because of the similar radii of the Cu2+ ion (0.87) and the Ti4+ ion (0.75); thus, Cu2+ substitutes at Ti4+ sites rather than forming CuO at the surface. When Ti4+ is replaced by Cu2+, positively charged oxygen vacancies are produced, increasing the amount of adsorbed OH on the surface. Without additional photo-irradiation, a Cu–TiO2 thin film presented a superhydrophilic surface (WCA of just 5.1°) with outstanding antifogging behavior. Duan et al. [ 71 ] proposed Fe doping for non-UV-activated superhydrophilicity of TiO2 films. The authors used a facile photosensitive sol–gel technique and dip coating to create a hill-to-valley hierarchical surface structure on the micro- and nanometer scales. They found that annealing the TiO2 film improved superhydrophilicity (WCA 3°): annealing increased the number of oxygen vacancies, which adsorbed more oxygen and hydroxide groups. However, after being kept in the dark for 10 days under ambient conditions, the WCA of the TiO2 film increased to 53°. This is attributed to the substitution of oxygen and/or other organic molecules for hydroxide groups, which lowers the surface energy and decreases the hydrophilicity. 
The authors discovered that patterned TiO2 films have good superhydrophilic properties without UV activation (the WCA reached 2° in 3 s), which is attributed to the capillary effect as explained by the Cassie impregnating wetting regime. When the patterned TiO2 film was kept in the dark for 10 days, the WCA remained practically constant at 15°. These findings show that patterned TiO2 films have a more stable antifogging function because they enhance the wetting of water droplets. Bharti et al. [ 72 ] studied doping of TiO2 with Fe and Co. The results showed that the Fe and Co ions replaced some of the Ti ions and were completely incorporated into the TiO2 lattice. The pure untreated TiO2 film showed an ultraviolet–visible (UV-Vis) absorption edge at 367 nm and a band gap of 3.37 eV, while the Fe–TiO2 film showed a 3.22 eV band gap with an edge at 385 nm, and Co–TiO2 a 3.36 eV band gap at 369 nm. Because a higher concentration of dopant could lead to the formation of recombination centers, a moderate (5%) concentration was recommended. Further tuning by air plasma treatment for 60 s resulted in a band gap of 3.00 eV for the Fe–TiO2 film (edge at 413 nm) and 3.62 eV for Co–TiO2 (at 342 nm). The air plasma treatment enhances the charge-separation centers, such as oxygen vacancies and Ti3+, that were established by the Fe and Co ions. However, the plasma treatment produced opposite trends: the band gap of Co–TiO2 increased. The formation of energy levels in the band gap caused an increase in absorbance and a red shift in the absorption spectra of the Fe–TiO2 thin films, whereas the Burstein–Moss shift caused a blue shift in the absorption spectra of Co–TiO2. 
They suggested that the widening of the band gap in Co–TiO2 is related to the Ti3+ levels and oxygen vacancies, which increase with treatment time compared with Fe–TiO2 because of the on-site Coulomb interaction/repulsion in Co–TiO2. These created levels donate more electrons and thus shift the Fermi level toward the conduction band, which causes the band-gap widening. Fig. 10 illustrates the difference in the energy levels of Fe–TiO2 and Co–TiO2 [ 61 ]. 4.1.2 Combination of metal oxides Antifogging performance can be enhanced by combining metal oxides or semiconductors. TiO2 and SiO2 have been a popular combination for antifogging applications [ 73–75 ]. Co-doping of these composites with other elements, such as Ag [ 76 ] and Zr [ 77 ], has also been studied in recent years. TiO2 doped with Ag–SiO2 was found to exhibit an antifogging effect when exposed to visible light [ 76 ]. Interestingly, increasing the amount of Ag decreases the coating roughness, but the coating with the highest Ag content maintained its superhydrophilicity longer (8 days) than SiO2/TiO2 (less than a day). UV-Vis reflectance spectra revealed an expansion of light absorption into the visible region for TiO2/WOx compounds; photoluminescence analysis showed that depositing WOx on the surface of TiO2 effectively suppresses the electron–hole recombination rate [ 78 ]. ZnFe2O4 is another highly efficient visible-light-sensitive semiconductor photocatalyst. Incorporating it into TiO2 was projected to broaden the absorption spectrum, achieving superhydrophilicity with a WCA of 0°. The authors obtained ZnFe2O4–TiO2 compounds with contact angles approaching zero at 500 °C and 550 °C when the molar fraction of ZnFe2O4 was 7 mol%. The ZnFe2O4–TiO2 coating on glass shows excellent antifogging properties and an optical transmittance of more than 85% [ 79 ]. 
The synergistic effect of the porous structure of ZnO/TiO2 composite thin films and surface hydroxyls on superhydrophilicity has been studied [ 80 ]. The average pore sizes of the pure TiO2 and ZnO (10, 20, 30, and 40%)/TiO2 films were 2.70, 4.24, 2.40, 1.61, and 3.81 μm, respectively. The results indicated that larger pores reduce light scattering and improve the transparency of the film. The WCA measurements without UV irradiation were 10.8° for pure TiO2 and 1.8°, 2.5°, 3.2°, and 4.4° for the ZnO (10, 20, 30, and 40%)/TiO2 composites, respectively. The findings indicated that the presence of ZnO in the TiO2 composite films increased the chemisorption of H2O, which reacts with TiO2 to form Ti–OH. Compared with bare glass, the ZnO/TiO2-coated glass remained clear during the fogging test. A stability test in which the material was stored in the dark showed that the WCA increased to 9.5° after 21 days. Zhu et al. [ 81 ] developed the first superhydrophilic graphene-based transparent composite through layer-by-layer assembly of graphene oxide nanosheets and silica nanoparticles (RGO/SiO2). The voids within the spider-web-like graphene network contribute to the excellent transmittance of the RGO/SiO2 film. Compared with a pure RGO film, the (RGO/SiO2)10 hybrid film exhibited improved transmittance in the wavelength range of 300–900 nm, attributed to the many voids formed by the stacking of SiO2 nanoparticles; however, the transmittance was still low (59% at 550 nm). For 5 to 15 layer-by-layer cycles, the static WCA was between 27° and 31°. The superhydrophilicity of the coating was improved by adding two more SiO2 cycles, resulting in a WCA of 3.20° with 80% transparency. 4.1.3 Surface treatments Annealing a TiO2 thin film improved its superhydrophilicity and antifogging properties. 
Additionally, annealing at 400 °C increased the roughness (from 1.4 nm to 2.6 nm) owing to the crystallization and densification of TiO2. The superhydrophilicity became stable within 3 days and reached equilibrium within 15 days (WCA less than 5°). The surface roughness of TiO2 films was reported to improve with N2 gas flow (2.6 nm with N2 flow versus 9.1 nm without). The N2 flow aids evaporation of the solvent, leaving a thinner layer of solution on the substrate and a smoother surface [ 82 ]. Oxygen (O2) plasma treatment has also been used to provide a specific surface roughness for antifogging purposes. Two types of treatment have been attempted: etching the glass substrate with O2 plasma before coating, and plasma treatment after coating. Kim et al. [ 83 ] deposited TiO2 film on glass previously etched with O2 plasma. The WCA of the TiO2 coatings on the etched glass was between 4° and 7°, while the WCA of the non-treated surface was between 17° and 180°. Wenzel's model can explain this: after plasma etching, surface roughness in the form of nanoprotrusions formed, increasing the light-receiving area and thereby the number of photogenerated electron–hole pairs. Oxygen plasma etching of the glass surface thus improved the photoinduced hydrophilic response of the TiO2 surface. Another study used metal–organic chemical vapor deposition (MOCVD) to fabricate carbon-doped TiO2 nanopillars with a WCA of 120° [ 84 ]. The C-doped TiO2 nanopillars became superhydrophilic, with a WCA of 5°, after microwave plasma treatment. The results reveal that the surface roughness does not change; the change in wettability is instead due to oxidation and removal of surface carbon by the oxygen plasma treatment. The creation of surface dangling bonds by plasma exposure is discussed by Bharti et al. [ 85 ]. 
A rapid decrease in CA was observed within the first 10 s of exposure (from 54.40° to 33.94° for water and from 48.82° to 33.16° for ethylene glycol), although the roughness showed no rapid increase over those 10 s (4.6 nm to 6.6 nm). Plasma comprises reactive species (electrons, ions, radicals, and neutral molecules) that interact intensely with the surface, causing the roughness to increase. Microdents/particles appear when the exposure time exceeds 30 s; for samples exposed from 0 to 30 s, the surface roughness was very low and the surfaces were practically flat. In conclusion, plasma treatment for 0–10 s produced dangling bonds, which are responsible for the wetting of the TiO2 layer; the film's surface roughness is also enhanced enough to improve the wetting behavior. With prolonged treatment, the surface roughness grows dramatically, which, along with the dangling bonds, makes the film superhydrophilic. Table 1 summarizes the results obtained for different antifogging coatings. 4.2 Superhydrophobic inorganic coating Olivier proposed superhydrophobicity in 1907 after observing a WCA of roughly 180° on a surface coated with soot. Superhydrophobic surfaces have since been used in a variety of applications, including self-cleaning, anti-icing, antifogging, anti-wetting, and anti-fouling. A tilt angle or sliding angle must be considered in the development of a superhydrophobic antifogging material so that water droplets roll off the surface, as shown in Fig. 11 . When a droplet flows down the tilted surface under the influence of gravity, two angles are formed as it rolls: the advancing angle (θ_adv) and the receding angle (θ_rec). Ideally, an optimal antifogging superhydrophobic coating combines a high WCA with low WCA hysteresis and a low tilting angle. The CA hysteresis is the difference between the advancing angle (θ_adv) and the receding angle (θ_rec). 
If the hysteresis is high, a droplet tends to be pinned on the surface. The suggested superhydrophobic coatings for antifogging are those with a contact angle of more than 150° at a tilting angle of less than 5° [ 86 ]. Although the Cassie impregnating (rose-petal-like) wetting state produces the same spherically shaped droplets, it does not meet the antifogging requirement; the Cassie air-trapping (lotus-like) wetting state, on the other hand, is ideal for antifogging because it allows water drops to roll off the surface. The maximum WCA that can be achieved on a perfectly smooth surface is said to be 120° [ 87 ]. As a result, the material must be microstructured in order to achieve superhydrophobic characteristics. Metal oxides, for example, have surfaces with physically adsorbed water. Superhydrophobicity on these surfaces can be achieved either by producing a rough surface from a known hydrophobic substance or by modifying a rough surface with a low-surface-energy compound, a process known as hydrophobization [ 88 ]. Superhydrophobic antifogging coatings can be developed by a two-step or a three-step route [ 86 ]. In the two-step route, deposition of 'building units' is followed by treatment with a low-surface-energy substance (hydrophobization). In the three-step route, soft lithography is used to microstructure the surface of the polymer, and a coating is then applied before hydrophobization. Examples of hydrophobic molecules used for hydrophobization are perfluorooctyltriethoxysilane (PFOTES), perfluorodecyltriethoxysilane (PFDTS), heptadecafluorodecyltripropoxysilane (FAS-17), heptadecafluorodecyl methacrylate (HDMA), and fluoroalkylsilane molecules [ 89–93 ]. These molecules establish covalent bonds with the hydroxyl groups on the surfaces, preventing the surface from interacting with water molecules. 
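The quantitative criteria quoted above (WCA above 150° at a tilt angle below 5°, with low hysteresis) can be encoded as a simple screen. This is a minimal sketch: the 10° hysteresis cutoff and the use of the advancing/receding mean as a stand-in for the static WCA are assumptions, not values from the source.

```python
def ca_hysteresis(theta_adv, theta_rec):
    """Contact-angle hysteresis: advancing minus receding angle (degrees)."""
    return theta_adv - theta_rec

def antifog_superhydrophobic(theta_adv, theta_rec, tilt_deg,
                             min_wca=150.0, max_tilt=5.0, max_hysteresis=10.0):
    """Screen a surface against the criteria in the text: WCA > 150 deg at a
    tilt angle < 5 deg, with low hysteresis (10-deg cutoff is illustrative)."""
    wca = 0.5 * (theta_adv + theta_rec)  # crude proxy for the static WCA
    return (wca > min_wca and tilt_deg < max_tilt
            and ca_hysteresis(theta_adv, theta_rec) < max_hysteresis)
```

A lotus-like surface (e.g., θ_adv = 160°, θ_rec = 152°, tilt 3°) passes the screen, while a rose-petal-like surface with the same high contact angle but large hysteresis is rejected, matching the distinction drawn between the two Cassie states.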
Long-chain fatty acids are also being explored as a more cost-effective and environmentally friendly alternative to these fluorine-containing compounds [ 94–96 ]. Transparent nanotextured tantalum pentoxide (Ta 2 O 5 ) was deposited on a quartz surface in multiple steps [ 97 ]. Carbon monofluoride (CF x ) was then deposited to produce a superhydrophobic optical coating. The observed WCA on the Ta 2 O 5 nanostructured surface with the hydrophobic coating was 155° with a hysteresis of 20°, compared with a WCA of 107° for a non-textured surface with the same hydrophobic coating. An amorphous molybdenum oxide (MoO 3 ) coating can achieve superhydrophobic performance, with a WCA of 160.2° and a small tilting angle of less than 8°, after modification with FAS-17 [ 98 ]. The MoO 3 coating has a rough structure containing many air cavities. The surface interaction with FAS-17 is shown in Fig. 12 [ 81 ]. When the hydrolysis reaction of FAS-17 occurred in alcohol solution, the Si–OCH 3 bond of FAS-17 was broken and reacted with water to form methanol. The MoO 3 coating surface contained a large amount of (-OH), which was broken and combined with the (-Si-) of FAS-17. This makes the coating superhydrophobic, as no (-OH) remains to interact with water. Gao et al. [ 99 ] fabricated a superhydrophobic TiO 2 thin film using magnetron sputtering on a Si substrate that had been coated with candle soot. The TiO 2 became superhydrophobic with a WCA of up to 155°, compared with a 61° WCA for TiO 2 alone. According to Cassie and Baxter's theory, the proposed fabrication easily trapped air in the micro/nano porosity and prevented water from intruding into the interfaces between the microstructures. By combining microstructures and nanostructures, superhydrophobicity is obtained by depositing TiO 2 on candle soot as a result of the air trapped in the rough surface.
In contrast to superhydrophilic surfaces, antifogging applications of superhydrophobicity have received less attention in the literature. This could be attributed to the fact that superhydrophobic surfaces must be tilted to roll off condensed droplets, the manufacturing procedures are more complex and time-consuming, and the water-repellent materials are not attached to the underlying substrate or photo/thermally cured [ 3 ]. However, Sun et al. [ 89 ] proposed that under aggressive conditions such as aircraft applications, hydrophilic antifogging coatings cannot resist freezing-fogging-induced ice build-up and accumulation, which ultimately results in surface failure. The authors therefore developed a superhydrophobic material with low adherence force to water droplets to resist fogging-induced ice build-up. A zinc oxide antifogging coating was fabricated imitating the compound eyes of the green bottle fly, as depicted in Fig. 13. Fly eyes can remain functional and uncontaminated in extremely dusty and moist environments. The dry-style antifogging properties of the compound eyes, with their superhydrophobicity, provide clear vision for the insects in highly humid environments. This special wettability of the compound eyes is attributed to the combination of surface chemistry and roughness on multiple scales. The morphology of the ZnO nanostructure was tuned by adjusting the aging time of the precursor solutions and the solvothermal temperature. Meanwhile, PFOTES molecules were deposited onto the ZnO to mimic the wax layer of the fly eye, which lowers the surface energy. Water droplets do not remain on the surface when it is tilted to around 10° [ 89 ]. Thin films consisting of an underlying layer of SiO 2 and a top layer of vanadium oxide (VO 2 ) show superhydrophobicity of more than 150° [ 100 ]. They also achieved a visible-region transmittance of more than 60%.
They deemed these film properties advantageous for antifogging, rainproofing, and self-cleaning surfaces. With the addition of SiO 2 layers, the surface roughness of the nano-structured composite films decreased before stabilizing. Pure VO 2 films have a 6.2 nm RMS roughness and a 30.5° WCA, which was reduced after exposure to ultraviolet (UV) light owing to photogenerated holes. The SiO 2 /VO 2 composite films of 3 spin cycles were superhydrophobic (155°) under 365 nm illumination with an intensity of 160 mW cm −2 , and prolonging the irradiation time to 10 h did not significantly change the WCA. Consequently, the SiO 2 /VO 2 composite films were not only superhydrophobic but also resistant to high-intensity UV illumination. 4.3 Superhydrophobic-superhydrophilic conversion Through advances in research on wettability and its manipulation, an innovative area is arising: creating reversible or switchable surface properties between superhydrophobicity and superhydrophilicity. Special attention is given to surfaces with reversible superhydrophobicity/superhydrophilicity driven by various external stimuli such as light irradiation, heat, solvent effects, electric fields, and mechanical force [ 5 ]. Chemical composition and/or micro/nano structural modification can be used to control this unique feature. Such material properties are generally sought for multi-function coatings. A superhydrophobic/superhydrophilic TiO 2 -based coating was developed for self-cleaning and antifogging applications [ 25 ]. A modified hydrothermal treatment and self-assembly on hydroxylated titanate nanobelts (TNBs) were used to produce superhydrophobic TNBs. Covalent bonding formed between the hydroxyl groups on the surface of the nanobelts and the fluoroalkyl-silane (FAS) chains.
They exhibited a WCA of more than 150° due to the combination of the low surface energy of the fluoroalkyl groups and the rough structure of the cross-stacked self-assembly, Fig. 14. However, the wetting behavior was affected by the deposition time. A 1 min deposition of TNB resulted in a surface with a 152.3° WCA, 12.5 nm RMS roughness, and a transmittance of 76% at a wavelength of 600 nm. However, this surface showed a Cassie impregnating wetting state whereby the water droplets adhered firmly to the surface even when turned upside down. This could be due to the partially exposed hydrophilic substrate, which acts like the back of the desert beetle, where micro/nanoscale surface roughness combines hydrophilic and hydrophobic areas. The surface behaved differently after 2 min and 5 min depositions, for which the coating possessed a 156.2° WCA with a tilting angle of 8.6° and a 161.3° WCA with a tilting angle of 3°, respectively. Annealing the TNB at 500 °C for 2 min changed the surface from superhydrophobic to superhydrophilic: the annealing completely decomposed the deposited monolayer and the TNB became porous TiO 2 . A water droplet completely spread and permeated into the coating within 0.24 s. So far, fabrication of such special wetting surfaces has been carried out mostly on metal surfaces and was not intended for antifogging; however, these surfaces illustrate the potential for antifogging applications. Among popular inorganic materials, ZnO is used to obtain this special wetting. On flat ZnO, the WCA can change from 109° to smaller than 10° under UV illumination [ 101 ]. The coating can be restored to its original hydrophobic form by storing it in the dark, and dark storage improves the hydrophobicity of aligned ZnO nanorod rough films to superhydrophobicity [ 102 ]. This effect is explained by the cooperation of the surface 2D and 3D capillary effects.
The reversible production and annihilation of photogenerated surface oxygen defect sites modify the surface free energy and hence the wettability of the surface. These qualities are amplified further by the unique surface micro/nanoscale composite structures. ZnO nanowires were deposited on a stainless-steel mesh [ 22 ]. The coated mesh showed superhydrophilic/superoleophobic behavior, which can be used to separate oil from water driven by gravity. The ZnO nanowire coating can be converted from the superhydrophilic to the superhydrophobic state, and vice versa, by annealing at 300 °C under hydrogen and oxygen environments, respectively. Thus, the reversible wettability of ZnO nanowires provides a smart mesh that can be switched between "oil-removing" and "water-removing" modes. The coated mesh conserved a 99.9% separation efficiency after more than 10 cycles of the two modes alternately, making the coating reliable, consistent, and recyclable for wetting and anti-wetting applications. Nanostructured V 2 O 5 thin films can switch from superhydrophobicity (156° WCA) to superhydrophilicity (0° WCA) under UV irradiation [ 103 ]. Magnetron sputtering deposition (for 17 h) produced a superhydrophilic coating with a WCA of 0°. However, this superhydrophilicity was not stable, and the WCA increased gradually with time, reaching 32° after 14 days of air storage due to air absorption (mainly N 2 and O 2 ): small air molecules gradually fill the holes on the surface, increasing the WCA over time until it reaches saturation after two weeks. The superhydrophilicity can be recovered by heating at 400 °C for 4 h, which desorbs the air molecules from the film surface and turns the samples superhydrophilic again. Kang et al.
[ 104 ] combined a shape memory polymer (SMP) with TiO 2 nanoparticles, a system with excellent switching between superhydrophilicity and superhydrophobicity under UV irradiation and dark storage, respectively. Because of its good shape memory effect, the SMP allows collapsed TiO 2 -based surface microstructures to recover their original morphology and restore superhydrophobic/superhydrophilic switching after pressing. The restorability is ascribed to the cooperative effect between the shape memory property of the SMP and the UV-induced surface chemistry variation of the TiO 2 nanoparticles. 5 Fabrication techniques Generally, all kinds of coating fabrication techniques are suitable for preparing antifogging coatings. This section covers the common fabrication techniques for inorganic antifogging coatings, or more generally for obtaining superhydrophilic or superhydrophobic coatings. In each technique, the glass or polymer substrate must be cleaned prior to the deposition of coatings, for example by washing with detergent, ultrasonic cleaning, and/or alcoholic solutions. This ensures that the substrate is fully free of unwanted foreign dirt or layers and that the deposited coating adheres well to the substrate. 5.1 Dip coating The dip-coating technique is based on the immersion of a substrate into a solution of precursor or coating materials. Important parameters affecting the layer thickness are the solution density, porosity, viscosity, and the withdrawal speed. The coating layer thickness ranges from 20 nm up to 50 μm to maintain clarity and transparency. Dip coating usually starts with the preparation of a sol–gel containing inorganic materials with vinyl or acrylic polymers [ 105 ]. A study showed that strong adhesion to the substrate can result from a mixture comprising proper ratios of monoacrylates and naturally occurring silanol coupling agents.
Antifogging characteristics can be enhanced by inorganic constituents made of hybrid combinations such as silica-titania, in addition to particular vinyl components and hydrophilic surfactants [ 106 ]. In one experiment, aliquots of tetraethyl-orthosilicate (Si(OC 2 H 5 ) 4 , TEOS) were injected into a polyacrylic acid (PAA) solution and stirred continuously at room temperature [ 107 ]. After 10 h, a light-blue colloid template had formed, consisting of 40 nm silica nanospheres containing PAA with an average molecular weight of 3000 g/mol. TEOS was hydrolyzed with hydrochloric acid (HCl) or ammonia to prepare acid- and alkaline-catalyzed samples at room temperature; porous silica could be produced from the alkaline-catalyzed samples. A glass substrate was dip-coated into the sol–gel containing the PAA-templated silica nanospheres at a withdrawal speed of 2.8 mm/s. After 10 min of drying, the acid-catalyzed silica solution was deposited on the first coating at a withdrawal speed of 1.2 mm/s. Calcination at 300–450 °C for an hour was then used to remove the PAA template. This step resulted in the formation of a hollow silica–silica nanocomposite single-layer coating in which acid-catalyzed silica sol was injected into the hollow silica nanospheres. On another glass substrate, a coating of alkaline-catalyzed silica sol was dip-coated at a withdrawal speed of 1 mm/s with the same calcination temperature. Due to the acceptable volume percentage of voids and the surface roughness, a WCA of about 5° was recorded 0.5 s after water dropping, resulting in optimal antifogging and self-cleaning capabilities, Fig. 15. Coatings with other types of structure can also be deposited by dip coating.
Coatings on glass containing commercially available spherical colloidal silica (particle size of 23 nm) and dendritic colloidal silica (branch lengths of 41 nm, 60 nm, and 109 nm) were successfully fabricated by Li et al. [ 50 ]. A solution of HCl in deionized water was slowly dripped into a solution of TEOS and ethanol to obtain a tetraethyl-orthosilicate hydrolyzate containing 7% solids. The coating solution was prepared by combining the diluted colloidal silica with the TEOS hydrolyzate and glycol solution. The TEOS silica nanoparticles adhered well to the glass substrate, indicating the efficiency of the method used. Dipping was performed at a 2 mm/s withdrawal speed, and drying of the coated glass substrates was carried out at 70 °C, 180 °C, and 400 °C for 1 h. The WCAs of all the dendritic coatings show superhydrophilic behavior at all drying temperatures, whereas the spherical coating shows a decreasing WCA with increasing temperature, reaching 0° after drying at 400 °C. Thus, dendritic silica nanoparticles can greatly enhance the hydrophilicity of the coating compared with spherical silica nanoparticles. This also shows that the drying temperature and heat treatment of a dip-coated film can affect coating properties such as wettability and thickness [ 108 ]. Thermal curing was carried out on dip-coated polycarbonate lenses and glass slides at various temperatures and times, with dicumyl peroxide as a suitable thermal initiator [ 106 ]. Both the desired superhydrophilic and antifogging properties were obtained by curing at an optimum temperature of 50 °C for 24 h (polycarbonate lenses) or 120 °C for 8 h (glass slides). 5.2 Spin coating Spin coating uses high-speed spinning to create an even, thin layer of coating material on a substrate through the centrifugal force and the surface tension of the liquid material.
It can produce thin films with thicknesses ranging from a few nanometers to a few microns. The physical factors governing the deposition are the spinning speed, duration, and material viscosity [ 109 ]. A graphene oxide (GO) suspension was poured and spun onto a glass substrate at 500 rpm for 10 s and dried at 60 °C for 3 h to boost the coating's mechanical properties, yielding a GO coating about 100 nm thick [ 6 ]. Multilayer meso-SiO 2 /Cu–Bi 2 O 3 thin films were spin-coated onto a cleaned commercial glass plate from separately prepared sol–gels [ 110 ]. First, a sol containing Bi(NO 3 ) 3 ·5H 2 O, nitric acid, polyethylene glycol, citric acid, acetone, Triton X-100, and CuSO 4 ·5H 2 O was spin-coated at 4000 rpm for 1 min, producing a Cu–Bi 2 O 3 thin film as the first layer. The coating was repeated three times, followed by calcination in air at 550 °C for 3 h at a heating rate of 1 °C/min. Then, a SiO 2 sol containing TEOS, anhydrous ethanol (EtOH), HCl, and Brij 30 was coated on the prepared Cu–Bi 2 O 3 and calcined at 450 °C for 5 h at a heating rate of 1 °C/min, resulting in a double-layered film. Compared to pure Bi 2 O 3 films, the produced layers demonstrated enhanced photocatalytic activity and self-cleaning capabilities. A coating solution containing zeolite MFI suspensions was deposited by spin coating at 2000 rpm for 30 s [ 54 ]. The sample was oven baked at 100 °C for 24 h. Calcination was then performed by heating from room temperature to 450 °C at a rate of 1 °C/min and holding for 5 h. The proposed deposition method showed remarkable uniformity, Fig. 16. The duration of the hydrothermal reaction during preparation of the zeolite MFI suspension affects the opaqueness of the suspension because of the formation and growth of zeolite MFI crystals.
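The role of spinning speed, duration, and viscosity noted above can be made concrete with the classical Emslie–Bonner–Peck model of film thinning during spin coating. The starting film thickness and fluid properties below are assumptions for illustration, not values from the cited recipes:

```python
import math

def spin_film_thickness(h0, t, omega_rpm, density, viscosity):
    """Emslie-Bonner-Peck thinning of a Newtonian film (no evaporation):
        h(t) = h0 / sqrt(1 + 4*rho*omega^2*h0^2*t / (3*eta))
    h0: initial thickness (m), t: spin time (s), omega_rpm: spin speed,
    rho: density (kg/m^3), eta: viscosity (Pa.s)."""
    omega = omega_rpm * 2 * math.pi / 60  # rpm -> rad/s
    return h0 / math.sqrt(1 + 4 * density * omega**2 * h0**2 * t / (3 * viscosity))

# Assumed: a 50-um liquid film spun at 4000 rpm for 60 s (the speed/time of
# the Cu-Bi2O3 sol step above) with a water-like sol.
h = spin_film_thickness(h0=50e-6, t=60.0, omega_rpm=4000, density=1000.0,
                        viscosity=5e-3)
print(f"{h * 1e6:.2f} um")  # spinning faster or longer gives a thinner film
```

The model captures why thickness is set mainly by speed and viscosity once the spin time is long enough for the initial thickness to be forgotten.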
Samples that underwent 3 h of hydrothermal reaction were highly transparent, which is attributed to the low crystallinity of colloids within the solutions. In contrast, samples from 6 h of hydrothermal reaction were strongly opaque. The milky color of the suspension may be due to loosely packed zeolite MFI particles causing light to be refracted multiple times. However, the transparency and antifogging properties of all samples were acceptable and adequate for applications. Spin coating was also used to deposit several different morphologies, such as TiO 2 nanotube films (TNF) and TiO 2 sol–gel films (TSF) [ 67 ]. Preparation of TNF involves autoclave heating (150 °C for 24 h) of TiO 2 in NaOH solution. At high temperatures, TiO 2 reacts with NaOH to produce sodium titanate nanosheets. The mechanical stress resulting from the unequal widths of the layers in the multi-layered sodium titanate nanosheets drives the nanosheets to curve into sodium titanate nanotubes (STNs). STNs have a uniform size with a tubular structure and consist of 2–4 layers of sodium titanate nanosheets. The nanosheets vanished after self-rolling under strongly alkaline conditions and high temperatures, indicating that all sodium titanate nanosheets transformed into nanotubes with 100% yield. After cooling to room temperature, the products were washed with deionized water and nitric acid until a stable protonated titanate nanotube (PTN) colloid formed. Ultrasound was applied to disperse the nanotubes in nitric acid solution, which was then centrifuged and redispersed in ethyl alcohol. The colloid was spin-coated on a glass substrate and finally annealed at 400 °C for 1 h to form the TNF. Other samples of TiO 2 sol–gel film (TSF) were prepared by adding a solution of nitric acid, deionized water, and ethyl alcohol dropwise, with vigorous stirring, into a solution of tetrabutyl titanate and ethyl alcohol.
After ageing for one week, the solution was spin-coated on a soda-lime glass substrate and annealed at 500 °C for 2 h to form the TSF. The TNF produced by spin coating the PTN colloid shows better transmittance and hydrophilicity (antifogging ability) than the TSF and the blank glass substrate. 5.3 Magnetron sputtering deposition Magnetron sputtering is a physical vapor deposition (PVD) technique that involves ejecting material from a target onto a substrate. Different power supplies are used for magnetron sputtering, such as direct current (DC), pulsed DC, and radio frequency (RF). Processing parameters include the target power, gas flow rate, Ar-to-reactive-gas flow ratio, sputtering temperature, working pressure, base pressure, target-substrate distance, substrate bias, and deposition time. Target materials are normally in circular solid form. Such a target is often pre-sputtered for about 10 min prior to deposition to remove any impurities (for example, an oxide layer) that have developed on the target surface. Recent developments, however, have produced new powder-based magnetron sputtering machines that require only powder-form target material. This conserves resources, as it significantly reduces raw material usage and simplifies target preparation compared with conventional PVD systems, which require an additional step to form a solid target. Loka et al. [ 111 ] used both RF and DC magnetron sputtering to deposit TiO 2 (∼30 nm)/Si (∼3 nm)/AgCr (∼20 nm)/TiN x (∼10 nm) multilayer films on soda-lime glass substrates. The TiO 2 , Si, and TiN x layers were deposited by RF sputtering at 200, 150, and 100 W, respectively, while the AgCr (Cr 3.2 at.%) films were deposited at a DC power of 30 W. The 99.99% pure solid targets were 2 inches in diameter and 0.25 inch thick. The TiN x layer was reactively sputtered with Ar (50 sccm) and N 2 (50 sccm).
TiO 2 was deposited at a substrate temperature of 533 K, while the rest of the layers were prepared at room temperature. The multilayer films were annealed at 673 K for 10 min in vacuum to produce the anatase phase. The outermost TiO 2 and Si layers showed different bandgaps, and the film exhibited superhydrophilic behavior with a WCA of ∼5° after UV irradiation. In a study of DC magnetron sputtered ITO, high transparency (90%) was obtained with high substrate temperatures (>200 °C) during deposition or by a post-annealing process [ 112 ], because the elevated temperature promotes crystallization of the layers and oxygen-vacancy creation. In contrast, deposition of ITO film at room temperature requires oxygen flow during magnetron sputtering. Special attention must also be paid to avoiding bombardment of the growing films with ion species from the O 2 plasma, which may affect the film microstructure. In their study, an industrial pulsed DC unbalanced magnetron sputtering system was used to deposit ITO on silicon wafers and microscope glass slides [ 113 ]. The ITO target (90% In 2 O 3 and 10% SnO 2 , 99.99% purity) was placed 120 mm from a rotatable substrate holder. They used a 1500 W average power, 75 kHz pulse frequency, 4 μs pulse-off time, a duty cycle of 70%, a base pressure of 2 × 10 −6 mbar, 150 sccm argon flow, and a 20 nm/min deposition rate. Pre-sputtering was done under Ar flow for 5 min and for another 5 min with both O 2 and Ar flows. The oxygen flow used for deposition was then varied from 0 to 6 sccm at a sputtering pressure of 1.8 × 10 −3 mbar. The gas inlet was located closer to the substrate than to the target to avoid acceleration of the oxygen ions by the potential applied to the target and, therefore, bombardment of the growing film with O + from the plasma. They obtained a 140 nm thick ITO layer after 7 min of deposition.
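With a calibrated deposition rate, the sputtering time per layer is simple arithmetic, and the cited ITO run (20 nm/min for 7 min giving 140 nm) serves as a consistency check. The per-layer rates used for the multilayer stack below are assumptions for illustration only, not values from the review:

```python
def deposition_time_min(thickness_nm: float, rate_nm_per_min: float) -> float:
    """Time (min) to sputter a layer of the requested thickness at a
    constant, pre-calibrated deposition rate."""
    return thickness_nm / rate_nm_per_min

# Consistency check against the cited ITO run: 20 nm/min for 7 min -> 140 nm.
print(deposition_time_min(140, 20))  # 7.0

# Budgeting a TiO2/Si/AgCr/TiNx stack like the one above; the (thickness_nm,
# rate_nm_per_min) pairs here are hypothetical.
stack = {"TiO2": (30, 5.0), "Si": (3, 2.0), "AgCr": (20, 10.0), "TiNx": (10, 4.0)}
total = sum(deposition_time_min(t, r) for t, r in stack.values())
print(f"{total:.1f} min total")  # 6.0 + 1.5 + 2.0 + 2.5 = 12.0 min
```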
The use of low O 2 flows (0–1 sccm) resulted in smooth-surfaced ITO (0.8 nm roughness) with a cauliflower-like microstructure. When the O 2 flow was 2–3 sccm, the cauliflower-like structure was mixed with crystalline grains, while O 2 flow rates of 4 sccm and higher resulted in homogeneous polycrystalline surfaces and a significant increase in surface roughness (around 2 nm average roughness and 2.5–3 nm root mean square). 5.4 Potential fabrication for large-scale production of antifogging coating The lab-scale development of antifogging coatings has seen wide success, but industrial application demands mass production. Although lab-scale studies have developed many antifogging coatings, most of the proposed techniques have not been employed for large-scale production because of their multiple steps. The process of fabricating superhydrophilic and superhydrophobic surfaces should be simple to be applicable at scale; thus, the choice of manufacturing method for an antifogging coating depends on its suitability for large-scale applications. A one-step route was used to fabricate transparent superhydrophobic and superhydrophilic sponge-like amorphous silica nanoparticle coatings by greasing the substrate with silicone grease followed by thermal oxidation [ 114 ]. The process does not involve any hazardous chemicals. Phase and thermal studies reveal that the thermo-oxidative decomposition of silicone grease results in a superhydrophobic –H 2 C–Si–O–Si–CH 2 – network structure at 400 °C, followed by a thermally stable superhydrophilic HO–Si–O–Si–OH network structure at 500 °C, as indicated in Fig. 17. The superhydrophobic glass achieves 92.4% transparency and a 168° WCA with a sliding angle of 2°. The superhydrophobic property results from the optimum decomposition of silicone grease at 400 °C, which partially removes the hydrocarbon groups.
The superhydrophilic surface, in turn, is achieved at 500 °C due to further decomposition of the silicone grease and removal of the alkyl groups, which forms OH bonds in Si–OH groups. The superhydrophilic surface possesses 90.3% transparency. Duran et al. [ 115 ] investigated the role of the structure and chemistry of the siloxane precursor on the performance of antifogging superhydrophilic coatings. They suggested the use of atmospheric-pressure dielectric barrier discharges (AP-DBD) under a controlled N 2 /N 2 O atmosphere for large-scale production. Four siloxane precursors with different structures and different numbers of Si–H and Si–CH 3 groups, namely 1,3,5,7-tetramethyl-cyclotetrasiloxane (TMCTS), octamethyl-cyclotetrasiloxane (OMCTS), 1,1,3,3-tetramethyldisiloxane (TMDSO), and hexamethyl-disiloxane (HMDSO), were deposited on glass samples. Of these four, only the TMCTS-coated glasses featured excellent antifogging performance. Fabrication of antifogging coatings from inorganic materials often involves multiple steps, including seed growth, one-dimensional nanomaterial prefabrication, and post-treatments. Therefore, a one-step seedless flame spray pyrolysis method was used to fabricate a SiO 2 nanofibrous film on plain glass [ 51 , 116 ]. It is a flexible technique for producing materials of different compositions and morphologies with unique functionalities, with great potential for large-scale application and on-line processing. The as-prepared SiO 2 shows excellent superhydrophilicity (contact angle down to 0°) with a high concentration of hydroxyl groups at the surface [ 51 ]. Superhydrophilic or superhydrophobic micro- and nanostructures on transparent material surfaces can also be designed by laser texturing [ 117 , 118 ]. Different types of laser texturing have the potential to prepare rough textured surfaces for antifogging applications.
Yang et al. [ 119 ] proposed the manufacture of antifogging superhydrophilic microstructures on glass, without impairing its light transmission, by nanosecond laser. They used a TiO 2 coating as an auxiliary material on the glass substrate, with the laser precisely targeted onto the auxiliary material layer. The light-absorbing TiO 2 assisted in absorbing the laser energy and provided precise control for the design of honeycomb hole arrays. The distance between the glass surface and the laser focal point greatly affects the transparency of the glass, with a recommended distance of 0.1–0.2 mm. The surface contact angle decreases as the pitch of the honeycomb structure decreases, and a superhydrophilic glass surface (contact angle of 4.7°) can be achieved when the length of the honeycomb structure is about 10 μm. In another study, nanosecond laser texturing was used along with laser chemical modification to fabricate a superhydrophobic coating [ 120 ]. Surface modification was carried out using low-energy compounds. Alterations in the topography and physicochemical properties resulted in nanocavities packed with hard oxynitride nano-inclusions and a hydrophobic agent. Other common coating processes suitable for a wide range of substrates are spray coating, powder coating, dip coating, brush coating, and roll-to-roll coating. For developing a coating on glass, magnetron sputtering and CVD are among the most widely used techniques. In terms of developing antifogging coatings, Suligoj et al. [ 77 ] studied larger-scale outdoor-exposed testing in three different environments for 20 months. Haze, transparency, and color change over time, and these changes are crucial to study for applications on transparent surfaces. They exposed a spray-deposited Zr-modified titania-silica (TiZr) coating in environments ranging from an urban area with industrial complexes to a more remote, rural-type area.
The antifogging effect of the TiZr material was very well expressed in controlled laboratory conditions (measuring the droplet formation time) as well as in the real outdoor environment. Alongside the development of fabrication techniques for antifogging coatings, the testing and evaluation of coatings for specific applications must be able to determine the quality and standard of the products. Given this importance, the following section lists the common tests used for antifogging coatings. 6 Antifogging testing 6.1 Simple observation test Many studies have adopted the simple steam test (hot-vapor test) and freeze test (cold-warm test) to observe the antifogging effect on transparent surfaces [ 121 ]. These methods are intended for qualitative results and do not give quantitative values. The steam test is performed by placing the transparent solid over a heated water bath for a definite duration at room temperature; the steam coming out of the hot water bath (at 85 °C for 5 s) condenses on the surface and forms fog [ 50 ]. An antifog surface can also be tested by storing the solid in a cool refrigerator and then taking it out into the ambient environment, which is called the freeze test. Chevallier et al. [ 122 ] found that these types of tests are far easier for demonstrating the antifogging effect, and coatings passing them are considered antifogging surfaces, even though they do not meet the ASTM standard criteria. An observation test for antifogging can also be done by setting the test sample in an artificial fogging chamber and exposing it for a certain period of time [ 89 ]. An ultrasonic humidifier is used to regulate the relative humidity of the atmosphere and mimic a mist composed of numerous tiny water droplets with diameters of less than 10 μm. The fog is directed and blown toward the test sample surface through a 3 cm diameter polytetrafluoroethylene (PTFE) tube.
The image of the sample undergoing fogging is captured and compared. 6.2 Antifogging test standards and advanced tests Although many studies prefer to test fogging by physical observation, the use of standards is critical for quantifying the performance of antifogging coatings. The standards lay out specific methods and thresholds to evaluate performance. Especially when a coating is going to market, the following standards are fundamental to fulfil in order to obtain approval for commercialization and, most importantly, to assure consumers of the product's quality. Products to be used for antifogging, whether coatings or films, must verify their effectiveness against one of the following standards. EN 166 is a European standard for personal eye protection [ 123 ]. This standard is especially important for personal protective equipment (PPE) against health and safety hazards. It outlines the compliance criteria for quantitative assessment through measurement of the change in light transmittance through the lens. Samples with a transmittance of 80% and above meet the antifogging requirements of the European N mark; the transmittance must stay above 80% for a minimum of 8 s. Four samples are required for the testing. EN 168 is a European standard for testing antifogging on eyewear [ 124 ]. The apparatus, as described in Fig. 18, involves a collimated laser beam from above passing through a mirror and the sample under test. The sample is placed horizontally over a heated water bath kept at 50 ± 0.5 °C. The beam is reflected at a front-surface mirror, passes through the sample again, is then reflected to the mirror, and is focused onto a photodetector. Prior to the test, the sample is conditioned by immersion in distilled water at 23 ± 5 °C for 1 h, then dabbed dry and conditioned in air for at least 12 h at 23 ± 5 °C. The relative humidity (RH) is controlled at 50 ± 3%.
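The EN 166 pass criterion (transmittance staying above 80% for at least 8 s of fog exposure) lends itself to a simple pass/fail check over a measured transmittance trace. This is a simplified sketch of the criterion only, not the full standard procedure, and the traces below are hypothetical measurements:

```python
def passes_antifog_hold(times_s, transmittance_pct, threshold=80.0, hold_s=8.0):
    """EN 166-style check (simplified): transmittance through the fogged
    lens must stay above `threshold` % for at least `hold_s` seconds.
    Samples are assumed to be in chronological order."""
    for t, tr in zip(times_s, transmittance_pct):
        if tr < threshold:
            # Did the first dip below threshold happen after the hold time?
            return t >= hold_s
    return True  # never dropped below the threshold

# Hypothetical traces: a coated lens that holds up, and an uncoated one.
good = passes_antifog_hold([0, 2, 4, 6, 8, 10], [95, 92, 90, 88, 85, 83])
bad = passes_antifog_hold([0, 2, 4, 6, 8, 10], [95, 88, 79, 60, 50, 40])
print(good, bad)  # True False
```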
If fog appears on the lens, the change in transmission causes some of the light to fall outside the detector and reduces the detector output signal. ASTM D1003 is a standard test method for haze and luminous transmittance of transparent plastics [ 125 ]. It describes a fundamental test for evaluating the effectiveness of an antifogging coating. Based on this test, antifogging coating manufacturers determine specific light transmittance and wide-angle light scattering. All measurements are made using a haze meter, as presented in Fig. 19 (a), or a spectrophotometer, as in Fig. 19 (b). The test apparatus is maintained at 23 ± 2 °C and 50 ± 5% RH. The testing process is also conducted in accordance with Test Method D1044. When a material has a haze value greater than 30%, it is considered diffusing and should be tested in accordance with Practice E2387. ECE 22.05 is a standard specifically for motorcycle helmets; safety gear that meets its criteria can be sold in more than 50 countries worldwide. This standard is a general requirement for helmets to protect the head during a crash and other conditions. The antifogging test must be conducted if the manufacturer wants to claim an antifogging helmet visor. There is also a standard under development on antifogging coatings for exterior lighting of road vehicles, ISO/DTS 5385 [ 126 ]. ASTM F659 is a standard specification for ski and snowboard goggles [ 127 ]. It lists resistance to fogging as one of the criteria required for goggles and face shields used by alpine skiers; a minimum acceptable level of 80% light transmittance at 30 s must be achieved. A few instruments have also been developed by companies to meet the need to simulate fogging and measure antifogging performance. The fog formed on the surface due to the condensation of small water droplets scatters the incident light and reduces the surface transparency. 
The transmitted light can be detected by available devices to quantify the intensity of the fogging. A patent has proposed a method to evaluate the effectiveness of antifogging coatings on eyewear lenses as an advanced alternative to the conventional test in EN 168, as depicted in Fig. 20 [ 128 ]. This patent develops a new method that considers the lens in a vertical position and the airflow between the face and the eyewear, and allows factors such as wearing a hat or helmet, which possibly influence the airflow, to be studied. To control the test environment, the inventors made an insulated chamber with controlled temperature and RH. The controlled environment is provided by a closed liquid cooling system and a stream of warm moist air from outside the chamber. The chamber atmosphere is cooled to 10–12 °C with an RH of less than 30%. A stream of warm moist air is directed between the lens and the face of a head form, so the rear part of the eyewear is maintained in a warm, moist condition while the front part is kept cold and moist. A haze meter detects and records the percentage haze value over a period of time. A detector behind the head form detects light transmission through the eyewear lens, along a tunnel made in the head form, and transfers the data to the haze meter [ 128 ]. Although standards for evaluating antifogging products exist, there is still room for improvement. As can be seen from the majority of the literature on antifogging coatings, very few studies have used such test standards when presenting results and discussing antifogging properties. Given the limited knowledge and standard instruments, antifogging characterization still lags behind; efforts to design and develop such instruments are therefore valuable. Furthermore, more studies need to report results following the antifogging standard tests, or at least obtain quantitative results from them, so that future upgrades of the available test standards can be achieved. 
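The two quantitative criteria described above are simple enough to automate. The following minimal Python sketch (hypothetical helper names and synthetic readings, not data from any cited study) checks a transmittance time series against the EN 166-style criterion (transmittance held at or above 80% for at least 8 s) and computes haze from the four photometer readings commonly used in ASTM D1003:

```python
def passes_en166_fog_test(times_s, transmittance_pct, threshold=80.0, hold_s=8.0):
    """EN 166-style check: transmittance must stay >= threshold (in %)
    for at least the first hold_s seconds of fogging exposure."""
    return all(tr >= threshold for t, tr in zip(times_s, transmittance_pct)
               if t <= hold_s)

def astm_d1003_haze(t1, t2, t3, t4):
    """Haze (%) from the four hazemeter readings used in ASTM D1003:
    t1 incident light, t2 total transmitted by the specimen,
    t3 light scattered by the instrument, t4 scattered by instrument + specimen."""
    return (t4 / t2 - t3 / t1) * 100.0

# Synthetic transmittance readings at 1 s intervals for two lenses.
times = list(range(11))                                   # 0..10 s
coated = [95, 94, 93, 92, 91, 90, 89, 88, 87, 86, 85]     # stays above 80%
uncoated = [95, 88, 81, 76, 70, 63, 55, 48, 42, 37, 33]   # fogs within 4 s

print(passes_en166_fog_test(times, coated))    # True
print(passes_en166_fog_test(times, uncoated))  # False
print(round(astm_d1003_haze(100.0, 92.0, 0.5, 3.0), 2))  # 2.76 (% haze)
```

Following the ASTM D1003 note above, a computed haze greater than 30% would mark the material as diffusing and call for testing under Practice E2387 instead.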
6.3 Tests associated with the antifogging coating 6.3.1 Water contact angle measurement This measurement determines the wettability of a material surface. Since the antifogging effect is closely related to the wettability state of a material, it is a critical measurement for characterizing an antifogging coating. The most commonly used method of measuring the contact angle is the sessile-drop method, as shown in Fig. 21 . The sessile-drop method involves a syringe pump that produces a liquid droplet of a definite volume and a camera that observes the droplet on a substrate. The instrument is usually equipped with software through which the user can control the liquid dispensing, record the image upon liquid deposition, analyze the raw data, and obtain the value of the contact angle immediately. The sessile-drop method can also be used to evaluate the surface energy of a solid by measuring the contact angles of different probe liquids on the surface. There are two types of measurements: static and dynamic contact angle measurements. Static contact angle measurement is usually done using a time-dependent method, in which the angle is measured from the drop shape as a function of time. Dynamic contact angle measurement uses the captive method, in which a water droplet is deposited onto a horizontal surface with the syringe still attached to the droplet. The syringe actively inflates or deflates the droplet during the measurement by adding or removing water; inflating the droplet gives the advancing contact angle, and deflating it gives the receding contact angle. Another method of measuring the dynamic contact angle is the tilting-base method, in which the sample stage, or the sample itself, is tilted during deposition of the water droplet. Fig. 22 shows dynamic contact angle measurement by the captive and tilting methods. 
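As a small illustration of how these measurements feed into coating characterization (a sketch using wettability bins commonly cited in the literature, not values taken from this review), the static angle places a surface in a wettability regime, while the advancing/receding pair from the captive or tilting method gives the contact angle hysteresis:

```python
def classify_wettability(static_angle_deg):
    """Bin a static water contact angle (degrees) into the wettability
    regimes commonly used in the antifogging literature."""
    if static_angle_deg < 10:
        return "superhydrophilic"
    if static_angle_deg < 90:
        return "hydrophilic"
    if static_angle_deg < 150:
        return "hydrophobic"
    return "superhydrophobic"

def contact_angle_hysteresis(advancing_deg, receding_deg):
    """Hysteresis = advancing - receding angle; low values mean droplets
    roll off easily, which matters for self-cleaning antifogging surfaces."""
    return advancing_deg - receding_deg

print(classify_wettability(6.0))               # superhydrophilic
print(classify_wettability(158.0))             # superhydrophobic
print(contact_angle_hysteresis(162.0, 155.0))  # 7.0
```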
6.3.2 Surface transparency The transparency of materials like glass and plastics is important in certain applications; therefore, their coatings must not compromise it. The change in transparency of objects exposed over the long term is another important characteristic for initially transparent materials. Transparency can be measured from the ratio of the intensity of light transmitted through the object using a lux meter. Ultraviolet-visible spectroscopy (UV-Vis spectroscopy) can also be used to determine surface transparency: the light transmittance in the visible range is obtained from UV-Vis transmission spectra. Transmittance values of 80% and above are usually considered good transparency for optical applications [ 53 , 79 , 82 ]. Surface roughness and coating thickness can affect transparency. For instance, the surface roughness of a superhydrophobic solid must be less than one-quarter of the visible wavelength or it will compromise the transparency [ 131 , 132 ]. 6.3.3 Thickness and roughness Thickness and roughness are among the characteristics to be considered for good transparency, and roughness is one of the factors used to modify the wettability of materials for antifogging coatings. Roughness can be calculated by processing images of the surface; a more precise method of obtaining the surface roughness is atomic force microscopy (AFM). 6.3.4 Band gap The band gap is one of the important criteria studied when modifying a photo-responsive inorganic material for antifogging applications. UV-Vis transmission spectra are one of the methods used to determine the band gap. A significant drop in transmittance at wavelengths shorter than 350 nm can be attributed to absorption of the light caused by the excitation of electrons from the valence band to the conduction band. The absorption coefficient α can be estimated by Eq. 
(9) [ 133 ]: α = (1/d) ln(1/T), where d is the film thickness and T is the transmittance at a given wavelength. The optical band gap energy of the coating is then determined using the Tauc plot, by extrapolating the linear region of the plot toward low energies. The formula is given in Eq. (10) : αhν = A(hν − E g ) m , where A is a constant, hν is the energy of the electromagnetic radiation, E g is the optical band gap, and m is a constant taking the values 1/2, 2, 3/2, and 3 for allowed direct, allowed indirect, forbidden direct, and forbidden indirect electronic transitions, respectively. A shift of the absorption edge of the UV-Vis spectra toward longer wavelengths (red shift) is associated with a reduced band gap and improved photo-responsive sensitivity, whereas a shift toward shorter wavelengths (blue shift) is associated with an increased band gap. Structural disorder in the films can create localized states in the band gap and extend the light absorption to the visible region; it can be estimated by calculating the Urbach tail, Eq. (11) : α = α 0 exp(hν/E u ), where α 0 is a constant and E u is the Urbach energy. 6.3.5 Adhesion strength The adhesion strength of a coating can be estimated using a micro-scratch test, in which a force is applied parallel to the sample surface. The applied load is increased gradually until the coating detaches from the substrate, which is called the failure point, and the critical load is determined. After the micro-scratch test, the sample is observed under a microscope to image the scratch and measure its length [ 134 ]. Adhesion can also be assessed according to the international standard ISO 2409 (Paints and varnishes - cross-cut test). 6.3.6 Surface damage by abrasion Abrasion can be simulated using specified equipment. 
It is measured by the amount of surface damage, represented as a change in light diffusion through the lens; the smaller the change, the better the surface resistance to abrasion. For example, abrasion resistance is described in the US Military Combat Eye Protection System (GL-PD 10-12 section 4.3.3.4.3.1). The sample eyewear is placed on a Taber Linear Abrader and rubbed with a wear eraser for 20 cycles with 750 g of additional weight on the abrader arm. The haze of the abraded track is then measured using a BYK-Gardner haze-Gard plus with a reduced ¼ inch opening. The percent haze gain is the difference between the haze readings taken before and after the abrasion; a lower haze change indicates better abrasion resistance. Surface damage of a coating can also be tested through its resistance to repeated washing. 6.4 Computer-aided optimization and statistical analysis Automation and computerization in research and technology have developed rapidly and are now widely utilized [ 135 , 136 ]; they help obtain faster and more accurate results than manual work. The use of software tools for optimization is one approach that improves experimentation. In the antifogging field specifically, very few studies have used computers for the design of experiments (DOE) and optimization, and only recently have some publications started to use this technology to improve their research. For instance, Chang et al. [ 53 ] used image processing, with an algorithm developed in MATLAB, to quantify antifogging performance: they computed a clarity index from the images obtained during the antifogging test, making a simple test more quantitative. A response surface methodology was applied to optimize the antifogging performance of 1,3,5,7-tetramethylcyclotetrasiloxane (TMCTS)/N 2 O coatings deposited on glass [ 137 ]. 
The coatings were fabricated by atmospheric-pressure plasma-enhanced CVD and studied in terms of dissipated power (DP), [N 2 O]/[TMCTS] ratio, and sample scroll speed. The authors used a Box-Behnken experimental design and a regression model relating the transmittance of the coated glasses to these deposition parameters. They revealed that the antifogging performance depends strongly on the second-order interaction between the dissipated power and the [N 2 O]/[TMCTS] ratio. Meanwhile, the sample scroll speed does not have a significant impact on the antifogging performance, but it is important for obtaining the desired thickness during in-line manufacturing. Contour plots show that the dissipated power required to prepare optimal antifogging coatings should be at least 0.7, 0.5, or 0.4 W cm −2 if the [N 2 O]/[TMCTS] ratio in the plasma is 20, 30, or 40, respectively. The coated glass achieved 80% light transmission during the steam antifogging test. 7 Potential applications of antifogging coatings Antifogging surfaces have the potential to be applied in everyday devices and equipment, from eyeglasses to the visors of astronauts [ 12 ]. Many research works have applied antifogging coatings to car windshields [ 13 ], headlamps, and mirrors [ 17 ]. They also find applications in optical devices such as optical microscopes [ 18 ], camera lenses, goggles [ 40 ], etc. Moreover, they are applied to highly reflective mirrors [ 63 ] to avoid glare and scattering of the incident light. 8 Summary With the success in developing antifogging coatings through the creation of superhydrophilic and superhydrophobic surfaces, the spotlight of research has now shifted toward permanent antifogging coatings. Very few studies have reported a permanent antifogging effect, and few have reported the stability of the coatings in terms of their wettability and antifogging properties. 
Knowing that inorganic materials have the potential for more durable coatings than polymer-based coatings, it is recommended to explore these materials further. Until now, the use of inorganic materials other than TiO 2 and SiO 2 has remained scarce in research toward optical applications, particularly antifogging. Besides, although there have been many studies on TiO 2 and SiO 2 , especially aimed at improving the photo-responsive sensitivity of TiO 2 , there is still a lack of studies using new antifogging tests. Thus, efforts to design and develop such instruments would be valuable. Furthermore, more studies need to report results following the antifogging standard tests, or at least obtain quantitative results from them, so that future upgrades of the available test standards can be achieved. It is also important to mention that the use of computer software to design experiments and optimize processes is required to obtain optimal results. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgement This study was conducted under the South Asia Taiwan University (SATU) Joint Research Scheme (JRS) Match Project for Sustainable Development Goals (SDG). It was financially supported under a Research University (RU) grant by Universiti Malaya, project no. ST042-2021.
|
[
"MARGRAIN",
"HERBOTS",
"DURAN",
"DURAN",
"FENG",
"HU",
"PHAN",
"GUNKO",
"THOMASYOUNG",
"FANG",
"SUN",
"INTROZZI",
"MANSOOR",
"MANABE",
"LYU",
"SHIBRAEN",
"ZHAO",
"FENG",
"YANG",
"CHANG",
"HONG",
"RATURI",
"NIE",
"BAI",
"LAI",
"CHEMIN",
"WHYMAN",
"AMBROSIA",
"CASSIE",
"CHAKRABORTY",
"SHAO",
"SUN",
"MILLBROOKE",
"YUAN",
"PARK",
"DURAN",
"YE",
"GRUBE",
"CHOI",
"RIPP",
"SON",
"NUNDY",
"CHEN",
"JOSHI",
"EMELINE",
"WATANABE",
"LI",
"ZHURAVLEV",
"LI",
"JIA",
"HUANG",
"CHANG",
"HSU",
"FENG",
"TADANAGA",
"HOHNE",
"ZHU",
"SCARPELLI",
"KHATAEE",
"LI",
"XIONG",
"ETACHERI",
"CHEMIN",
"KUMAR",
"WANG",
"LI",
"SYAFIQ",
"DU",
"WANG",
"DUAN",
"BHARTI",
"LEE",
"WANG",
"MATSUDA",
"LE",
"SULIGOJ",
"SAJJAD",
"XU",
"CHEN",
"ZHU",
"XIONG",
"KIM",
"SUKLEE",
"BHARTI",
"ALI",
"MACIEJEWSKI",
"SUN",
"SHANG",
"ZHANG",
"RAUT",
"XUEFENGGAO",
"YIN",
"DANESHMAND",
"WANG",
"MANAKASETTHARN",
"WANG",
"GAO",
"WANG",
"SUN",
"FENG",
"ZHANG",
"KANG",
"OWENS",
"ASTHANA",
"ZHANG",
"KAYANI",
"SAHU",
"SHAN",
"LOKA",
"GUILLEN",
"TXINTXURRETA",
"SIDDIQUI",
"TRICOLI",
"OLANREWAJUIJAOLA",
"KUMAR",
"YANG",
"BOINOVICH",
"CHEN",
"CHEVALLIER",
"DAIN",
"ZIEGLER",
"CUI",
"KUNG",
"CHO",
"YANG",
"GONG",
"RAHMATI",
"CHOHAN",
"KAMATCHIHARIHARAN",
"DURAN"
] |
0c78a140aeb740fa885498b79cd08ffd_Insights into heat islands at the regional scale using a data-driven approach_10.1016_j.cacint.2023.100124.xml
|
Insights into heat islands at the regional scale using a data-driven approach
|
[
"Colaninno, Nicola"
] |
Urban heat island (UHI) phenomenon is crucial in the context of climate change. However, while substantial attention has been given to studying UHIs within cities, our understanding at the regional level still needs to be improved. This study delves into the intricate dynamics of the regional heat island (RHI) by examining its relationship with land use/land cover (LULC), vegetation, and elevation. The objective is to enhance our knowledge of RHI to inform effective mitigation strategies. The research employs a data-driven approach, leveraging satellite data and spatial modeling, examining surface and canopy-layer regional heat islands, and considering daytime and nighttime variations. To assess the impact of LULC, the study evaluates three main categories: anthropized (urbanized), agricultural, and wooded/semi-natural environments. Furthermore, it delves into the influence of vegetation on RHI and incorporates elevation data to understand its role in RHI intensity. The findings reveal meaningful variations in heat islands across different LULCs, providing essential insights. Although urbanized areas exhibit the highest RHI intensity, agricultural regions contribute notably to RHI due to land use changes and reduced vegetation cover. This emphasizes the significant impact of human activities. In contrast, wooded and semi-natural environments demonstrate potential for mitigating RHI, owing to their dense vegetation and shading effects. Elevation, while generally associated with reduced heat island, shows variations based on local conditions. Ultimately, this research underscores the complexity of the RHI phenomenon and the importance of considering factors such as different temperatures and their daily variation, landscape heterogeneity, and elevation. Additionally, the study emphasizes the significance of sustainable spatial planning and land management. 
Targeted efforts to increase vegetation in high daytime land surface temperature areas can reduce heat storage and mitigate RHI. Similarly, planning for agroforestry and green infrastructure in agricultural areas can significantly increase resilience to climate change.
|
1 Introduction Cities play a pivotal role in addressing the challenges posed by climate change, as they contribute to its causes and face its consequences. Global warming severely impacts cities, negatively affecting human health and well-being. Urban areas have unique characteristics influencing the urban climate, resulting in higher temperatures than surrounding rural and peri-urban regions. This phenomenon is known as the urban heat island (UHI). When studying the UHI, it is essential to consider three levels: the canopy layer UHI (CLUHI), which focuses on the air temperature between urban roughness components; the boundary layer UHI (BLUHI), which refers to the air temperature above the roofs; and the surface UHI (SUHI), which pertains to the warming of urban surfaces [28,34,42] . In recent decades, the urban heat island has gained significant attention in urban climatology and planning [33,36] . However, most studies have focused on the UHI phenomenon within cities, while there is still a need to explore its dynamics at the regional scale and in metropolitan areas. While UHI analysis studies have a history dating back to 1972, with increased momentum since 2010, studies explicitly focusing on the Regional Heat Island (RHI) have been more active since 2019 [10] . This research aims to reduce the notable gap in the body of literature regarding the dynamics of the heat island phenomenon at the regional scale. The term regional heat island was proposed by Yu et al. [46,47] to describe significant variations in thermal conditions within urban agglomerations due to reduced distances between cities. They report that the heat island effect in cities can extend beyond city boundaries and form a larger regional heat island, particularly as cities merge into metropolitan systems. Also, they identify the regional heat island as an area that manifested a relative land surface temperature of more than 2 °C with respect to the average value of the whole area [11,47] . 
However, because of the recent interest in the topic, a comprehensive exploration of how multifaceted aspects, including ecological, climate, and socioeconomic factors [4] , contribute to the formation of RHI is still lacking. Indeed, Yu et al. [46,47] mainly based their study on land surface temperature, while Degefu et al. [10] , albeit providing a relevant analysis that includes the effects of green space and land use/land cover on the urban thermal environment, focused on a relatively limited area, considering an 8 km long transect based on a mobile traverse. A relevant resource for broadening our knowledge of the phenomenon relies on remote sensing data and techniques, which have been extensively employed over recent decades to study heat islands [10] . In particular, land surface temperature has largely been assessed based on thermal infrared-derived imagery from satellites such as Landsat, ASTER, and MODIS [10,12,48] . MODIS data have been utilized to investigate the influences of various factors on the trends of RHI over different years [4] . The modeling of surface UHI using MODIS LST has been approached through methods such as multiple linear regression at different times of day and night [44] , geographically weighted regression [16] , and support vector machine regression considering features such as land cover, solar radiation, temperature, humidity, precipitation, wind speed, aerosol optical depth, and soil moisture to estimate nighttime SUHI [20] . Likewise, as satellite-based optical imaging enables vegetation analysis through spectral vegetation indices, numerous studies have explored the combined use of thermal and optical data, emphasizing the negative correlation between land surface temperature (LST) and the Normalized Difference Vegetation Index (NDVI) through linear regression analysis at different spatial resolutions [33,43] and during either daytime or nighttime [40] . 
However, although scientific research has revealed that LST and NDVI are among the most significant remote sensing variables for assessing air temperature [9] , and vegetation is a fundamental element influencing the intensity of urban heat islands, the correlation between NDVI and urban temperatures should be considered carefully. As also emerges from this research, the correlation between LST and NDVI is strongly seasonal- and time-dependent, and vegetation’s cooling effect is more effective during the daytime than at night [7,37] . Despite the numerous heat island studies using a remote sensing approach, another aspect deserves attention. Previous research, including the studies conducted by Huang et al. [16] and Zhang & Du [48] , has analyzed solely the surface heat island or has neglected the interplay between air temperature and land surface temperature in the context of heat islands. On the other hand, it has been recognized that land surface temperature alone is insufficient for accurately assessing the heat island phenomenon and its spatial implications, particularly during the daytime [30] . Indeed, near-surface air temperature is pivotal to enhancing our knowledge of heat islands. To overcome this gap, modeling the near-surface air temperature (NSAT), approximately 2 m above the ground, is crucial yet challenging [16] . Various approaches have been investigated. A temperature-vegetation index (TVX) method has been proposed to enhance daily maximum air temperature estimation using MODIS surface temperature [50] . Machine learning techniques, such as random forest algorithms [23,45,48] and deep learning approaches [35] , have recently been explored for NSAT estimation. Geographically weighted regression has also proven effective in modeling near-surface air temperature, both during the day and at night, as it considers spatial non-stationarity [8,13,16] . 
However, NSAT estimation has predominantly revolved around monthly or daily average [23] or maximum air temperature [3,35] . Only some studies have reconstructed air temperature at higher temporal scales, such as sub-daily or hourly resolutions [48] , albeit at the expense of spatial resolution. Hence, a need for high spatiotemporal resolution still exists. Although this study does not entirely solve the problem, as it does not provide complete hourly temperature modeling, it contributes to the topic by proposing an operational model to estimate instantaneous NSAT for specific day and night hours, with a spatial resolution of about 900 m. Finally, recent research has increasingly focused on the relationship between land use/land cover transformation and regional climate. Several studies have employed MODIS or Landsat data, along with NDVI and land-use variables, to model the impact of different land uses on surface temperatures [5,6,11,18] . These studies have explored various aspects, such as the contribution of LULC to regional heat islands, the mitigation of urban heat islands through different types of LULC [49] , and the simulation of land-use change effects on LST [5,18] . They have also analyzed temperature patterns associated with land-use configurations and changes [24] , quantified the effects of urban and green areas on regional climate change using time-series analysis of land use and land cover [21] , and assessed the intensity of the urban heat island effect in relation to LULC, NDVI, and LST [17] . Given that, in most cases, the data used for heat island studies is the surface temperature, this research raises the question of why not only LST but also near-surface air temperature should be considered. This standpoint is relevant because LST is not a good proxy for the heat island phenomenon, especially during daylight hours. 
This research aims to comprehensively understand the regional heat island phenomenon, considering the interplay between land surface temperature, near-surface air temperature, vegetation, and land use/land cover. The study addresses two primary research questions: how to effectively model and estimate NSAT for both day and night-time at a scale suitable for regional studies and what drives the dynamics of RHI within metropolitan areas. A practical model for estimating instantaneous NSAT is introduced, assessing the significance of the NDVI on the model’s performance. The study further explores the spatial and temporal interactions between NSAT and LST, considering the influence of vegetation, land use/land cover, and different elevations on RHI. The novelty of this work lies in its holistic analysis of RHI that involves the combined use of near-surface air temperature and land surface temperature to enhance our understanding of the phenomenon and examine the correlation between them in the context of heat islands, considering both day and night variations, and the impact of relevant environmental factors that can shape the RHI. A comprehensive understanding of the heat island phenomenon at a regional scale is essential for informing evidence-based policies, fostering sustainable urban development, and enhancing climate resilience. 2 Materials and methods This section outlines the methodology employed in this study to investigate the regional heat islands phenomenon, including surface and canopy layer heat islands (SRHI and CLRHI). Using remotely sensed data, namely land surface temperature and normalized difference vegetation index from MODIS sensors on Terra and Aqua satellites [19] , a Shuttle Radar Topography Mission (SRTM)-derived Digital Elevation Model (DEM) and ground-based temperature measurements, a Geographically Weighted Regression model is designed to estimate day and night near-surface air temperature. 
Hence, the analysis assesses the impact of different land use and land cover (LULC) types, specifically anthropized (urbanized), agricultural, and wooded/semi-natural environments, using LST and NSAT. To evaluate the effect of vegetation on temperature, the NDVI is integrated. The analysis of vegetation's influence on surface and canopy layer heat islands considers NDVI's correlation with LST and NSAT under daytime and nighttime conditions. Correlation analyses between NDVI and temperature at different elevations and within various LULC categories are also conducted to assess elevation's impact on NSAT and LST. Finally, the intensity of the regional heat island phenomenon is assessed by comparing SRHI and CLRHI within different LULC categories and examining temperature differences between LULCs and NDVI variations concerning elevation. Fig. 1 presents a methodology flowchart summarizing the key steps, offering a visual overview of the research process. 2.1 Case study, period under investigation, and data sources The area under investigation is the Lombardy region, northwestern Italy, the fourth Italian region by extent, encompassing around 23,860 km 2 , and the first in terms of the resident population, with around 10 million residents in 2020. The region, with cold winters, no dry season, and hot summers, is classified as Cfa (humid subtropical) by the Köppen climatic classification scheme [2,29] . In Lombardy, warming has accelerated significantly in the previous 30 years, resulting in an average air temperature anomaly of around +0.2–0.3 degrees Celsius when compared to the reference period of 1968–1996 [39] . This research considers a 4-day heatwave between the 29th of July and the 1st of August 2020. The heatwave is identified by comparing the 90th percentile of the daily maximum temperature over a reference period (1973–2019) with the daily maximum temperature of each day during July and August 2020. 
Because the objective addresses extreme events, the focus is on the hottest days during the heat waves, i.e., the 31st of July and the 1st of August 2020. The data sources employed rely on the Moderate-resolution Imaging Spectroradiometer (MODIS), the Shuttle Radar Topography Mission Digital Elevation Model (SRTM-DEM), weather station measurements, and Land Use/Land Cover (LULC) data. The MODIS sensor is on board the Terra and Aqua spacecraft as part of NASA's Earth Observing System (EOS). These satellites follow a sun-synchronous, near-polar circular orbit, allowing global coverage once or twice daily. MODIS version 6, with a 1-km spatial resolution, was used in this study. It provides daily per-pixel data and includes LST products (MOD11 and MYD11) and NDVI products (MOD13A2 and MYD13A2). The NDVI data are generated from the best available pixels over 16 days. The SRTM-DEM, obtained from the Shuttle Radar Topography Mission, offers high-resolution elevation data across 80 % of the Earth's surface between 60° north and 56° south latitude. The data are available at 1-arc-second (30-meter) resolution for the United States and 3-arc-second (90-meter) resolution worldwide. A 30-meter resolution DEM, created through resampling the high-resolution dataset, is also accessible worldwide. Weather data from the Regional Meteorological Service (SMR), established by the Regional Environmental Protection Agency (ARPA), are used for retrieving the air temperature (T a ). The SMR operates a network of 250 automated stations, providing daily meteorological and climatological data, including temperature, humidity, radiation, wind speed, and precipitation. The data, recorded at 2 meters above the ground, are obtained at 10-minute intervals. For land use/land cover, the research employs the DUSAF (uses of agricultural and forest land) database. DUSAF is the official LULC database in Lombardy. 
It is constructed from aerial orthophotos and SPOT 6/7 satellite images at a 1.5-meter resolution. The database classifies LULC into hierarchical levels, with the first level (Level 1) categorizing five main classes, i.e., Anthropized, Agricultural, Wooded Territories and Semi-natural Environments, Wetlands, and Water Bodies. Subsequent levels provide further detail. In this study, the Level 1 classification is used. Fig. 2 illustrates the summer 2020 heat waves, showcasing the two hottest days, July 31st and August 1st (Fig. 2a). Additionally, it presents the LULC configuration (2b), a MOD11A1 LST image captured at 10:42 LT on July 31st, 2020 (2c), and the SRTM DEM (2d). The figure also includes the administrative boundaries of the Lombardy region (red line in Fig. 2c and d) and the spatial distribution of the ARPA stations (cyan dots).

2.2 The significance of the NDVI for modelling near-surface air temperature

The initial focus of the research was to evaluate the explanatory power of different variables in developing an operational Geographically Weighted Regression (GWR) model for estimating the near-surface air temperature. The variables assessed included Land Surface Temperature (LST), the Digital Elevation Model (DEM), and the Normalized Difference Vegetation Index (NDVI). Two NDVI products, MOD13A2 and MYD13A2, obtained from the Terra and Aqua satellites, respectively, were considered. Linear regression was conducted to assess the relationship between the weather-station-derived air temperature (T_a) and the independent variables at various days and times. The evaluation included measures such as the Pearson coefficient (r), the coefficient of determination (R²), and the F-test of significance, provided in Table 1, to account for the correlation's direction, strength, and statistical significance. A strong positive correlation was observed between LST and T_a, indicating that higher LST values are associated with higher T_a.
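The per-predictor screening just described (Pearson r, R², and a significance test) can be sketched as below. The station samples are synthetic, and the strength of each relationship is an illustrative assumption; for a single-predictor linear model, the p-value of the slope from `scipy.stats.linregress` is equivalent to the F-test:

```python
import numpy as np
from scipy import stats

def screen_predictor(x, y):
    """Simple linear regression of air temperature y on predictor x,
    returning Pearson r, R², and the slope's p-value."""
    res = stats.linregress(x, y)
    return res.rvalue, res.rvalue**2, res.pvalue

rng = np.random.default_rng(0)
lst = rng.uniform(25, 45, 200)                     # synthetic LST sample (°C)
t_a = 5.0 + 0.7 * lst + rng.normal(0, 1.0, 200)    # T_a tracks LST closely here
ndvi = rng.uniform(0.1, 0.8, 200)                  # NDVI unrelated to T_a here

r_lst, r2_lst, p_lst = screen_predictor(lst, t_a)
r_ndvi, r2_ndvi, p_ndvi = screen_predictor(ndvi, t_a)
```

Running one such regression per predictor, day, and time reproduces the structure of Table 1.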
Conversely, the DEM negatively correlated with T_a, with higher elevation values corresponding to lower T_a. Interestingly, the correlation between DEM and T_a was more robust in the daytime than that of LST, indicating that the DEM significantly influences T_a during the day. The correlation between LST and T_a decreased at the peak temperature time (13:10). During the nighttime, the correlation between LST and T_a increased, suggesting that the two temperatures converge at night, with T_a being more strongly correlated with LST than with the DEM. In contrast, both MODIS-derived NDVIs showed weak or almost no correlation with T_a. Although a slight negative correlation emerged, mostly during nighttime, neither of the two NDVIs proved to be an influential explanatory variable for T_a, whether during daytime or nighttime, for the specific days, times, and areas under investigation. Instead, the two NDVIs show a highly significant correlation with each other, with a Pearson coefficient of 0.96, a slope of 1.03, and an intercept of −0.03. Accordingly, only MYD13A2 was used as the NDVI for the subsequent analysis. Fig. 3 displays scatterplots of MYD13A2-NDVI against T_a at different observation times, indicating an absence of a discernible trend in the point-cloud pattern. The scatterplots exhibit distinct patterns with two contrasting trends rather than a clear linear correlation. The distribution shows a positive association at lower temperatures, transitioning to a negative association as temperature increases. Ultimately, we point out an inconsistent correlation between NDVI and temperature in this experiment, for the whole area under investigation, that prevents using NDVI for air temperature prediction.

2.3 GWR for modelling near-surface air temperature

Geographically Weighted Regression (GWR) is a widely discussed geospatial analytical technique in the literature, particularly effective for examining non-stationary phenomena and investigating heterogeneity in data relationships.
The GWR model calibrates coefficients and predictions locally using a neighboring region, or bandwidth, surrounding the target point. The local regression is described by Equation (1) [13,14,25], where position i is represented as a vector of coordinates in either a projected or geodetic coordinate system:

(1) Y_i = β_i0 + ∑_{n=1}^{m} β_in X_in + ε_i

where Y_i is the dependent variable at location i (x_i, y_i), X_in is the n-th independent (explanatory) variable at location i, m is the number of independent variables, β_i0 is the intercept parameter at location i, β_in is the local regression coefficient for the n-th independent variable at location i, and ε_i is the random error at location i. At each regression point i, the model's parameters (coefficients β) are estimated locally by weighted least squares. The weights, expressed in matrix form, depend on the location of each observation with respect to the other observations in the dataset [22]. Using a GWR model, instantaneous NSAT is estimated with MODIS-LST and DEM as the independent variables. The lack of a significant correlation between the NDVIs and T_a led to the exclusion of NDVI as a predictor. The GWR model is designed according to Equation (2), where Y_i is the air temperature T_a,i at location i (x_i, y_i), while X_i1 and X_i2 are LST and DEM, respectively:

(2) T_a,i = β_i0 + β_i1 LST_i + β_i2 DEM_i + ε_i

An exponential weighting scheme is employed, with a 40-km bandwidth automatically defined from the spatial distribution of the weather stations. Instantaneous NSAT is estimated for day- and nighttime, consistent with the MODIS spatial and temporal resolution. MOD11A1 and MYD11A1 LST were obtained on the 31st of July and the 1st of August 2020. Four Terra/Aqua MODIS image scenes were used, i.e., 10:42 local time (LT) and 21:48 LT for Terra on the 31st of July, and 13:06 LT and 2:00 LT for Aqua on the 1st of August 2020. Due to cloud coverage, some images are severely limited, and not all days are practicable. A quick image reconstruction phase is undertaken to address small gaps caused by cloud coverage in the selected images [8]. Because of the 10-minute temporal resolution of the station data, there is a slight time difference between T_a and MODIS-LST; to minimize it, T_a is selected at the time nearest to the MODIS local overpass. Specifically, T_a is taken at 10:40 and 21:50 on July 31, 2020, and at 02:00 and 13:10 on August 1, 2020. Hence, four models are computed. All 250 available weather stations were included in the analysis without imposing a preferred search distance to select observations. However, only 205 stations provided T_a values. Also, a few stations record hourly rather than 10-minute observations. Consequently, some stations were excluded at 10:40, 21:50, and 13:10. For the 13:10 model, 197 values were available. In contrast, for the 10:40 and 21:50 models, there were 195 values after excluding two stations that reported abnormal values (assigned −9999). For the 02:00 model, all 205 stations were used, since it aligns with the top of the hour.

3 Results

3.1 Model assessment and estimated NSAT

The performance of the models is extensively discussed in Colaninno and Morello [8]. In this work, the effectiveness of the models is summarized in Table 2, employing a rigorous 2-fold cross-validation (CV) protocol. Three predictor combinations are examined: LST and DEM together, LST only, and DEM only. The 2-fold CV involves randomly splitting the testing dataset into two equal groups, with 50 % of the points used for training the model and the remaining 50 % for validation. The average error of the two tests is computed to evaluate model performance.
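The locally weighted fit behind Equations (1)–(2) can be sketched as a single local solve. The exponential kernel and 40-km bandwidth follow the text; the station coordinates, predictor values, and the noise-free relationship are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gwr_predict(coords, X, y, target, bandwidth=40.0):
    """Estimate local coefficients at `target` by weighted least squares,
    Equation (2) form: T_a = b0 + b1*LST + b2*DEM. Distances and the
    bandwidth share one unit (km); weights decay exponentially."""
    d = np.linalg.norm(coords - target, axis=1)       # station-target distances
    w = np.exp(-d / bandwidth)                        # exponential weighting scheme
    A = np.column_stack([np.ones(len(X)), X])         # design matrix with intercept
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # weighted normal equations
    return beta

# Toy stations: coordinates in km, predictors LST (°C) and DEM (m)
rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, (50, 2))
lst = rng.uniform(25, 45, 50)
dem = rng.uniform(0, 1500, 50)
t_a = 2.0 + 0.8 * lst - 0.006 * dem                   # noise-free synthetic T_a
X = np.column_stack([lst, dem])

beta = gwr_predict(coords, X, t_a, target=np.array([50.0, 50.0]))
```

In a full GWR, one such local fit is repeated at every prediction point, so the coefficients vary smoothly across the region.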
Key performance metrics, such as the CV adjusted coefficient of determination (Adj. R²), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Bias Error (MBE), are given to assess the models' accuracy and predictive capacity. The model's predictive capacity declines substantially when considering only LST, particularly during daytime. This is evident from the increased errors and a significant decrease in the CV-Adj. R². Notably, at 13:10, the hottest time, the performance of all models is reduced. In contrast, the nighttime model exhibits higher performance due to the strong correlation between LST and T_a at night. Conversely, when incorporating the DEM as a predictor, the situation is reversed. Despite elevation being widely acknowledged as a significant variable for T_a in previous studies [15,26,27,41], its effect on T_a is more pronounced during daytime and less influential at night. The poorest performance is observed at 02:00, the coldest hour. Incorporating LST and DEM as predictors enhances temperature estimation accuracy, with daytime models influenced by elevation and nighttime models benefiting from the LST-temperature correlation. Time-specific modeling approaches are crucial for improved performance in temperature estimation. Fig. 4 shows, as an example, the estimated instantaneous NSAT (4a) and the land surface temperature (4b) for the 31st of July 2020 at 10:40, along with a transect E-E1 over the metropolitan area of Milan (Fig. 4a).

3.2 Land surface temperature and near-surface air temperature

Incorporating both near-surface air temperature and land surface temperature is essential in conducting comprehensive heat island studies, as they provide valuable insights into the phenomenon at various times of the day. Considering the 90-kilometer transect E-E1, as depicted in Fig.
4, which traverses the city of Milan, it is observed that the profiles of NSAT and LST converge as temperatures decrease during the nighttime hours (Fig. 5). Additionally, land use/land cover characteristics influence the trends, with urbanized areas exhibiting higher LST and NSAT. Notably, while NSAT demonstrates a flattened pattern during the day, increased variability is observed for both temperatures during nighttime. At 02:00, when temperatures are at their lowest, the profiles of the two temperatures are very close, resulting in an almost perfect overlap in highly urbanized areas like the city of Milan. Consequently, LST can be a reliable proxy for urban heat island studies, specifically during nighttime. Notably, during nighttime the difference between LST and NSAT tends to be negative, indicating that NSAT is slightly higher than LST. This inversion, observed at 21:50 and 02:00, reflects the different thermal inertia of the land surface and the air, with the air temperature exhibiting a slower cooling rate. The situation dramatically changes during the hottest hours, when the NSAT and LST curves diverge significantly, displaying distinct shapes. Analysis of the transect demonstrates reduced spatial variability in NSAT during the day. As temperatures increase, the NSAT curve becomes flatter. At 13:10, NSAT becomes dramatically flattened, indicating a more uniform air temperature distribution across the landscape. In the daytime, LST can reach temperatures up to 10 degrees higher than the air temperature. Furthermore, LST exhibits higher values and significant variability across the territory during the hottest hours. The curves exhibit a slight convergence in non-urbanized areas, where different land uses and covers directly influence the variability of LST.
To account for the impact of different land use/land cover on the LST-NSAT interaction, three main LULC categories are considered: anthropized (urbanized), agricultural, and wooded and semi-natural environments. In the Lombardy region, the anthropized area covers 15 % of the territory, the agricultural area 42 %, and the wooded and semi-natural areas 40 %. Around 4 % is covered by wetlands and water bodies. Including these specific categories is relevant for climate-resilient planning, as they hold different implications and significance. Fig. 6 analyzes the correlation between LST and NSAT for the entire study area and the different LULCs. Although there is a highly positive correlation between NSAT and LST, the correlation patterns during extreme heat reveal a temperature-dependent relationship, with higher temperatures exhibiting reduced correlation. The lowest correlation is found at 13:10, with an R² value of approximately 78.2 %. During the morning, at 10:40, characterized by lower maximum temperatures, the correlation is relatively higher, reaching around 80.5 %. Conversely, at night, as temperatures decrease, NSAT and LST exhibit a strong positive correlation, approaching R² values of approximately 98 % at 21:50 and 99.6 % at 02:00. When focusing on anthropized and agricultural areas, the distinction between daytime and nighttime correlations becomes more evident. During the night, near-surface air and land surface temperatures exhibit a consistently high and relatively unchanged correlation, with R² values of approximately 94 % and 98 %. Conversely, the correlation is significantly reduced during the day. Notably, during the daytime, when temperatures are approximately below 30 degrees, a robust linear correlation is still evident. However, the increase in LST outpaces NSAT at higher temperatures, suggesting a saturation effect of the latter.
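This LULC-stratified comparison can be sketched as follows, assuming co-registered 1-km rasters flattened to arrays with an integer class code per pixel; the class codes, temperature values, and the weaker urban coupling are all illustrative:

```python
import numpy as np

def r2_by_lulc(lst, nsat, lulc, classes):
    """Squared Pearson correlation between LST and NSAT computed
    separately within each land use/land cover class."""
    out = {}
    for c in classes:
        m = lulc == c
        r = np.corrcoef(lst[m], nsat[m])[0, 1]
        out[c] = r**2
    return out

rng = np.random.default_rng(2)
lulc = rng.integers(1, 4, 5000)                # 1=anthropized, 2=agricultural, 3=wooded
lst = rng.uniform(20, 45, 5000)
nsat = 0.6 * lst + 8 + rng.normal(0, 1, 5000)  # tightly coupled air temperature
nsat[lulc == 1] += rng.normal(0, 4, (lulc == 1).sum())  # looser coupling in cities

r2 = r2_by_lulc(lst, nsat, lulc, classes=[1, 2, 3])
```

Repeating this per observation time yields the day/night correlation contrasts discussed for Fig. 6.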
This phenomenon underscores the substantial heat storage capacity of urban materials during the daytime, making the ground considerably warmer than the air at higher temperatures. Although agricultural areas generally show a slightly lower correlation, their behavior is similar to that of anthropized areas. Barren or sparsely vegetated lands can absorb solar radiation significantly, particularly during daylight hours. Wooded and semi-natural environments exhibit distinct behavior. The correlation between LST and NSAT is notably strong both during the daytime, with an R² of approximately 80 %, and at nighttime, with an R² exceeding 99 %. The high linear correlation can be attributed to dense vegetation, which enhances the evapotranspiration effect and, more importantly, provides ample shading over the ground surface. For the wooded areas, when the whole territory is considered, very high elevations are reached in the mountains, where temperatures drop dramatically during the daytime.

3.3 The effect of vegetation on LST and NSAT

The impact of vegetation on heat islands was examined based on the NDVI. Although the relationship between temperatures and vegetation is widely acknowledged, diurnal variations (day/night) and the spatial heterogeneity of the landscape must be considered. A minimal correlation is observed between NDVI, LST, and NSAT across the entire region, as shown in Figs. 7 and 8. Instead, the point-cloud distribution reveals a dual trend for the whole region: a positive correlation is evident at lower temperatures, while the expected negative correlation is observed as temperatures increase. These patterns persist throughout all observed periods and are consistent for NSAT and LST. On the other hand, the analysis reveals an enhanced correlation between vegetation and daytime land surface temperature in anthropized and agricultural areas (Fig. 7).
A clear negative trend is observed, indicating that higher levels of vegetation are associated with lower surface temperatures. This suggests that vegetation has a more significant and direct cooling effect on surfaces than on the air. It emphasizes the importance of green measures, particularly in man-made environments, as vegetation can effectively mitigate the intensity of heat islands by reducing the heat-storing capacity of urban materials during the day. Resembling the nighttime scenario observed in agricultural areas, an absence of correlation, or of any discernible linear trend, is observed in woodland and semi-natural environments, where natural and densely vegetated surfaces inhibit the attainment of excessively high temperatures.

3.4 The impact of elevation on NSAT and LST

Although, generally, the intensity of the heat island diminishes as elevation increases, the elevation at which heat islands arise can vary due to local climate conditions, air mass mixing, topography, and urban environment characteristics. As a general approximation, the observable effects of heat islands are typically concentrated within a few hundred meters above sea level. This study employs a threshold of 500 m to investigate the correlation between temperatures and vegetation across different elevations. Table 3 shows the NDVI-LST and NDVI-NSAT correlations at various elevations for the three LULCs. The minimum elevation considered in this analysis is 200 m, as highly urbanized areas would be excluded at lower elevations. The analysis is given for the entire region and different elevation ranges, namely below 200 m, between 200 and 500 m, below 500 m, and above 500 m. The table also provides the corresponding percentage of area covered for each extent. Approximately 95 % of anthropized and 94 % of agricultural areas fall below the 500-meter elevation threshold, while 80 % of the woodlands and semi-natural environments are over 500 m.
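The elevation-stratified screening behind Table 3 can be sketched as follows, assuming flattened, co-registered NDVI, temperature, and DEM arrays; the band edges follow the text, while the synthetic data deliberately encode a vegetation cooling effect only below 500 m:

```python
import numpy as np

def ndvi_corr_by_elevation(ndvi, temp, dem,
                           bands=((None, 200), (200, 500), (None, 500), (500, None))):
    """Pearson correlation between NDVI and a temperature field within
    elevation bands; None marks an open-ended bound."""
    out = {}
    for lo, hi in bands:
        m = np.ones(dem.shape, dtype=bool)
        if lo is not None:
            m &= dem >= lo
        if hi is not None:
            m &= dem < hi
        out[(lo, hi)] = np.corrcoef(ndvi[m], temp[m])[0, 1]
    return out

rng = np.random.default_rng(3)
dem = rng.uniform(0, 1500, 8000)
ndvi = rng.uniform(0.1, 0.9, 8000)
# Synthetic LST: cools with elevation; vegetation cooling acts only below 500 m
lst = 40 - 0.006 * dem - np.where(dem < 500, 8 * ndvi, 0) + rng.normal(0, 0.5, 8000)

corr = ndvi_corr_by_elevation(ndvi, lst, dem)
```

Restricting the same computation to one LULC class at a time reproduces the full table layout.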
A high correlation is shown between NDVI and daytime LST for anthropized and agricultural areas. However, the correlation decreases when considering only the portion of the territory between 200 and 500 m. Although anthropized and agricultural areas exhibit similar trends, it is noteworthy that a stronger correlation between NDVI and LST is observed in agricultural areas. Enhancing vegetation health and density can benefit both classes, with a more pronounced impact on agricultural lands regarding LST. A relevant correlation with daytime LST, of about −77 %, is also found for woodlands under 200 m, where elevation has less impact on the temperatures. Although the correlation between NDVI and NSAT is generally weaker, it is worth noting that for areas below 500 m elevation, the correlation between NDVI and NSAT at night becomes more pronounced for anthropized areas, reaching values around −62, −68, or −69 %. This suggests that vegetation health and density can also contribute to cooling effects on the air temperature during nighttime hours in densely urbanized environments. Overall, a negative correlation is observed in all cases below 500 m, indicating that higher vegetation health and density correspond to lower temperatures. This pattern, however, is not evident for woodlands over 500 m, where the correlation becomes positive. Notably, while the correlation over 500 m is not significant for anthropized and agricultural areas, it is relevant for woodlands. As previously mentioned, at high elevations vegetation is not a key driver influencing temperatures, suggesting the presence of other influential factors that should be considered.

3.5 About the intensity of the regional heat islands

RHI intensity is examined through the main factors that cause its changes.
As reported in Table 4, the surface RHI (SRHI) and canopy layer RHI (CLRHI) are studied and compared based on three elements: the intensity of RHI within different LULCs, heat islands determined by average temperature differences between LULCs, and the intensity of heat islands within LULCs based on changes in the NDVI. The analysis focuses on the region below 500 m. To calculate the relative RHI intensity within land use/land covers, the average temperatures of each LULC are compared to the minimum temperature value observed in the area under investigation. To prevent excessively low (outlier) values resulting from potentially biased sensor measurements, the 2nd percentile of temperatures is used to determine the minimum value. Regarding the RHI intensity, the analysis reveals relevant variations across different LULCs. Anthropized areas exhibit the highest intensity, reaching maximum SRHI values of 7.0 at 13:10 (daytime) and CLRHI values of 2.4 at 02:00 (night). This suggests that urbanized regions experience more pronounced heat islands than other land cover types. Agricultural areas exhibit slightly lower values, with an SRHI of 5.2 at 13:10 and a CLRHI of 2.0 to 2.1 at each observation. Such values are alarming: relative RHI intensities of LST ranging from 2 to 8 are recognized as high-risk values [11,46]. It is worth noting that agricultural areas show a slightly higher CLRHI intensity than anthropized areas at the hottest hour, i.e., 13:10. Woodlands show lower values, with SRHI ranging from 2.3 at 02:00 to 4.6 at 13:10, and CLRHI from 1.2 to 1.8, always below 2 degrees. Above all, for the CLRHI, there is a critical jump between the values of anthropized and agricultural areas compared to woodlands during the day. The differences between average LULC temperatures also emphasize that anthropized areas exhibit significantly higher temperatures than woodland areas.
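The relative intensity just described (class-average temperature minus a robust regional minimum) can be sketched as follows; the 2nd-percentile baseline follows the text, while the class codes and temperature distributions are synthetic:

```python
import numpy as np

def rhi_intensity(temp, lulc, classes):
    """Relative RHI intensity per LULC class: the class-average temperature
    minus a robust regional minimum (2nd percentile, to discard outliers)."""
    baseline = np.percentile(temp, 2)
    return {c: temp[lulc == c].mean() - baseline for c in classes}

rng = np.random.default_rng(4)
lulc = rng.integers(1, 4, 6000)        # 1=anthropized, 2=agricultural, 3=wooded
temp = np.empty(6000)
temp[lulc == 1] = rng.normal(36.0, 1.0, (lulc == 1).sum())  # hottest surfaces
temp[lulc == 2] = rng.normal(34.5, 1.0, (lulc == 2).sum())
temp[lulc == 3] = rng.normal(32.0, 1.0, (lulc == 3).sum())

srhi = rhi_intensity(temp, lulc, classes=[1, 2, 3])
```

Applying the same function to an LST raster gives the SRHI column of Table 4 and, to an NSAT raster, the CLRHI column.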
While the difference between anthropized and agricultural areas reaches 1.7 and 1.8 for the SRHI at night, the difference with woodlands reaches 2.2 and 2.4. However, it is worth pointing out that in terms of CLRHI, the difference between anthropized and agricultural areas is very low (roughly around zero), even showing an inversion, with negative values, during daytime. The anthropized and agricultural areas reach similar daytime intensity values regarding air heat islands. In contrast, the daytime difference between agricultural areas and woodlands is almost 1 degree. Regarding the analysis of differences in average values among LULCs, it is crucial to emphasize that the average values, particularly for land surface temperature, can exhibit significant variations, particularly in agricultural areas. These variations are influenced by the variability of land cover types, including bare lands, densely vegetated areas, and sparsely vegetated areas within agricultural regions. Analyzing heat island intensity within different LULC categories based on variations in NDVI yields further insights. NDVI values range from −1 to 1, where lower values indicate areas with less vegetation cover, while values around 0.4 represent healthy and dense vegetation. In this study, a threshold of 0.5 is applied: NDVI values below 0.5 indicate areas with lower vegetation presence, while values above 0.5 represent areas with higher vegetation density. Anthropized areas with lower NDVI values display significant heat island intensity. The temperature difference between areas with low NDVI and high NDVI within anthropized areas ranges from 1.6 to 4.2 for the SRHI and from 0.3 to 1.4 for the CLRHI. Woodlands reach maximum values of 1.7 for the surface heat island and 0.9 for the CLRHI. This indicates that areas with lower vegetation cover experience a more pronounced heat island effect, thus emphasizing the substantial impact of human manipulation on the local climate.
In the case of agricultural areas, and mainly for the SRHI, the NDVI-based intensity is elevated. The daytime SRHI for agricultural areas ranges from 4.7 to 5.7, higher than the intensity found for the anthropized or woodland areas. This points to the critical presence of surface heat islands in agricultural regions, where increasing vegetation health and density by 0.5 NDVI could be highly beneficial.

4 Discussion

4.1 Unveiling RHI dynamics

4.1.1 On the intricate correlation between surface and air temperatures

The impact of different predictors on estimating near-surface temperature on a regional scale is first discussed. Through rigorous analysis, three predictor combinations have been examined: LST and DEM together, LST only, and DEM only. The results provide a clear picture of the influence of these predictors on air temperature estimation accuracy and support the hypothesis that incorporating LST and DEM would enhance accuracy. Daytime models are particularly sensitive to elevation, while nighttime models benefit from the strong correlation between LST and air temperature at night. Once both land and near-surface temperatures are available, a pivotal question arises regarding how temperature varies across different times of the day based on the two temperatures. Indeed, although the difference between urban and rural temperatures is more prominent at night, causing nighttime heat island intensity to be higher, observed temperature values can reach concerning levels during the day. When addressing heat illnesses, the focus should not be limited to nighttime heat islands alone. The findings support the need to rely on both temperatures and to consider different hours. Distinct temperature patterns during different times are revealed, particularly the divergence between LST and NSAT during daytime hours.
According to the results, and in line with other studies [1,15,30,38], it is worth noting that while the thermal profiles of NSAT and LST tend to align at night, they significantly diverge during the day. During the hottest hours, LST can reach temperatures up to 10 degrees higher than NSAT. In summary, higher temperatures reduce the correlation, while lower temperatures produce a stronger positive correlation. Consequently, using LST as a proxy to evaluate the daytime heat island effect is inappropriate.

4.1.2 The significance of vegetation and LULC on temperature variations

Further, this research delves into the impact of land use/land cover and vegetation on temperature variations. The significance of vegetation in climate studies is undeniable. However, vegetation must be carefully considered within specific geographical and temporal contexts. This study reports significant variations in the correlation between vegetation and land surface temperature and near-surface air temperature under various temperature conditions. In certain areas, such as densely urban regions, low-lying areas, or regions with limited land use/land cover variability, a notable linear (negative) correlation is often observed, particularly during high daytime temperatures [7,18,37]. Nevertheless, in the context of this research, while developing the model to estimate near-surface air temperature, a weaker correlation between the normalized difference vegetation index and air temperature was found. This deviation from established patterns in previous research can be attributed to the unique conditions of the study area, including significant landscape heterogeneity, distinct geographical features, and specific land use/land cover patterns. Regarding the mitigating effect of vegetation on temperatures, this study aligns with existing knowledge.
By considering both land surface and air temperature and assessing various land use/land covers, the findings corroborate the existing understanding that emphasizes the more effective cooling effect of vegetation under high-temperature conditions [31] . Additionally, it underscores that vegetation's cooling impact is more pronounced during daytime than nighttime [7,37] . However, this primarily holds true for thermal comfort indices or land surface temperature. When examining the impact of vegetation on air temperature within urbanized areas, as demonstrated in Table 4 (NDVI-difference-based intensity), vegetation's influence on air temperature is significantly high at night, as previously noted [32] , when temperatures are lower. Finally, the impact of vegetation on both LST and NSAT for different land use/land cover types was assessed, specifically anthropized (urbanized), agricultural, and wooded/semi-natural environments. The results highlight the importance of considering various LULCs when addressing heat islands. Temperatures are particularly sensitive in anthropized and agricultural areas, where the cooling effect of vegetation in mitigating heat islands is more significant than in wooded and semi-natural areas. In anthropized and agricultural regions, daytime LST is influenced by heat storage in urban materials or barren and sparsely vegetated lands, causing the ground to be warmer than the air and leading to higher daytime surface regional heat island intensity for both anthropized and agricultural regions. Although evapotranspiration subsequently reduces the effect of heat islands in agricultural areas, mostly at nighttime, resulting in a divergent trend and a cooling effect, the patterns of Surface RHI and air RHI (CLRHI) are similar. During the daytime, the intensity of SRHI is approximately double that of CLRHI, as shown in Table 4 . In any case, the air RHI intensity remains around 2 degrees, which is identified as a critical value [46] . 
In contrast, wooded and semi-natural areas exhibit lower RHI intensity due to dense vegetation and shading. As highlighted by Yu et al. [46] and supported by this research, wooded and semi-natural regions demonstrate exceptional potential for heat mitigation, surpassing that of grassland.

4.1.3 Why should elevation be considered?

Elevation consistently emerges as a pivotal factor impacting temperature patterns throughout the regional heat island [4,21]. Accordingly, this study deals with the distinctive temperature dynamics that arise over a regional extent at different elevations. Spatial and temporal interactions between near-surface air and land surface temperatures are addressed, revealing how elevation uniquely shapes these interactions across heterogeneous geographic areas. The findings underscore the intricate interplay of elevation, land use/land cover, and vegetation in molding temperature patterns. The influence of elevation is particularly noticeable in heat island effects, which tend to concentrate within a few hundred meters above sea level. While a strong correlation was found between vegetation and daytime land surface temperature in anthropized and agricultural regions, this correlation weakens when focusing on elevations between 200 and 500 m. Below 500 m, increased vegetation is associated with cooler temperatures, but this pattern reverses for woodlands situated above 500 m, where the temperature correlation becomes positive. In summary, although vegetation plays a crucial role, especially in anthropized and agricultural areas, other influential factors become more significant at higher elevations. This underscores the need for a comprehensive understanding of local conditions to accurately model temperature variations on a regional scale.

4.2 Spatial planning and policy implications and recommendations

This work provides valuable insights for policymakers, urban planners, and environmental practitioners.
First and foremost, it emphasizes the importance of investigating the urban heat island phenomenon on multiple scales, particularly in metropolitan systems that extend beyond city boundaries. The research elucidates how urban heat islands can extend their influence into agricultural zones, underscoring the interconnectedness between urban and rural climates. This highlights the necessity for holistic climate studies that do not focus solely on the urban environment. Also, the results highlight the pivotal role of land use/land cover in shaping temperature variations, emphasizing its significance in climate-proof planning. Vegetation emerges as a compelling tool for mitigating regional heat islands, being effective not only in densely populated but also in agricultural areas. Indeed, the correlation patterns found between vegetation (NDVI) and temperature support the pivotal role of greenery in cooling urban and rural environments. It is worth noting that agricultural areas, although less urbanized than heavily developed regions, exhibit discernible impacts on the local climate due to human activities and modifications associated with farming. The conversion of natural vegetation to agricultural land and the removal of tree cover can diminish the cooling effect of evapotranspiration, resulting in higher temperatures and an intensified heat island effect. As the cooling effect of woodlands is higher than that of grass or sparsely vegetated areas, providing the most significant potential for mitigating heat islands in areas characterized by high levels of human activity, it is essential to safeguard and enhance wooded and natural environments. To address these implications and engage in sustainable spatial planning, some actions could be considered, such as:

• Advocating data-driven decision-making and investment in data infrastructure and analytical capabilities to effectively address the heat island phenomenon.
• Shaping zoning regulations to optimize the equilibrium between urban development and the preservation of natural green spaces. • Encouraging the establishment and maintenance of green infrastructure, including urban afforestation, green corridors, parks, and community gardens, within urban and peri-urban regions to mitigate daytime temperature extremes. • Promoting agroforestry and sustainable agricultural practices, incorporating tree planting and other vegetation strategies in agricultural landscapes to moderate temperature variations and enhance resilience. This involves advocating sustainable farming practices such as shade trees, cover crops, short rotation forestry, and precision irrigation systems to mitigate temperature impacts on crop production. 4.3 Limitations and future research directions Some limitations must be pointed out, delineating potential directions for future research. First, the spatial resolution, reliant on raster data with a pixel size of around 1 km, may result in a loss of spatial heterogeneity. Future research could prioritize enhancing the spatial resolution of temperature modeling. Similarly, the climate dynamics within broadly generalized land uses may not yet be fully represented. For instance, the generalization obscures variations within anthropized regions, such as industrial settlements or large urban parks, as well as within agricultural areas, which can yield more nuanced temperature behaviors depending on different vegetation covers. Hence, future research could delve into specific land uses, exploring internal factors contributing to temperature variations, such as vegetation density, building materials, and local topography. The analysis incorporates land surface and air temperatures and explores correlation patterns between them and LULC, vegetation, and elevation. However, other factors such as humidity remain unaccounted for; incorporating them could further our understanding of the phenomenon. 
Furthermore, while the NDVI is widely accepted as a proxy for vegetation density and health, it has limitations in differentiating between various vegetation types, such as grass and tree cover. Additionally, it is worth mentioning that the analysis is limited by its specific period, which may not fully capture climate variations over time. Although the study focuses on heat stress during the hot season, providing a reasonable estimate of the primary concerns for an extreme heat event, the findings could be tested during different heat waves, for instance. Likewise, the research could extend its temporal scope to assess the impacts of climate change on temperature dynamics over multiple years. This long-term perspective would offer valuable insights into the evolving nature of heat islands. Lastly, the findings are specific to the study region and may not directly apply to other geographic areas with different climate conditions, land cover characteristics, or urbanization patterns. Comparative studies across other regions are planned; these would enable researchers to identify common trends and unique regional factors influencing the RHI dynamics. Currently, caution should be exercised when extrapolating these insights to other regions. 5 Conclusions This research employs a data-driven approach leveraging remote sensing and spatial statistics to study how vegetation and land use/land cover affect the heat island phenomenon at the regional scale. It comprehensively analyzes the regional heat island, considering surface and canopy layer heat islands. Thus, the study emphasizes the importance of correlating land surface and air temperature while considering factors like elevation, landscape composition, and configuration. Based on the correlation analysis between land surface temperature and near-surface air temperature, the analysis first suggests that targeting areas with high daytime temperatures can boost thermal resilience. 
Increasing vegetation in these areas reduces daytime heat storage and lessens heat island intensity. Moreover, the research highlights the impact of all human-induced land use changes on the climate. It is worth noting that although heavily anthropized areas contribute most significantly to heat islands, agriculture also affects heat island intensity on a regional scale, while only highly vegetated and natural areas demonstrate potential for heat mitigation. Hence, from a sustainable spatial planning and land management perspective, besides adopting specific vegetation strategies for different land cover types, sustainable practices like agroforestry and green infrastructure should be increasingly promoted, especially in agricultural areas, as they can reduce heat island effects. Ultimately, this research deepens our understanding of the regional heat island phenomenon and emphasizes the need for sustainable spatial planning to combat heat islands and enhance climate resilience, providing valuable insights for evidence-based policies and informed urban development. CRediT authorship contribution statement Nicola Colaninno: Conceptualization, Methodology, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Visualization. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgment This research is part of the project ‘MultiCAST - Multiscale Thermal-related Urban Climate Analysis and Simulation Tool,’ which has received funding from the European Union’s Horizon 2020 (H2020) Research and Innovation program under the Marie Skłodowska-Curie Action - Individual Fellowship | Global Fellowship (MSCA-IF-GF), with grant agreement number 101028035. The area under investigation is among the MultiCAST case studies.
|
[
"ALVI",
"BELDA",
"CHEN",
"CHEN",
"CHEN",
"CHUN",
"COLANINNO",
"CRISTOBAL",
"DEGEFU",
"DUTTA",
"ELMES",
"FOTHERINGHAM",
"GOOD",
"HUANG",
"KARAKUS",
"KIM",
"KING",
"LAI",
"LEE",
"LEUNG",
"LI",
"LOUGEAY",
"LU",
"LUSSANA",
"MUTIIBWA",
"OKE",
"PEEL",
"PEPIN",
"PERINI",
"QIU",
"RASUL",
"ROTH",
"SHEN",
"SOUCH",
"SUN",
"SUN",
"TIANGCO",
"UBOLDI",
"VOOGT",
"WENG",
"XING",
"YOO",
"YU",
"YU",
"ZHANG",
"ZHOU",
"ZHU"
] |
3ef732f1c6944f4d82727979af3bdcc9_Time to treat the climate and nature crisis as one indivisible global health emergency_10.1016_j.gaceta.2023.102353.xml
|
Time to treat the climate and nature crisis as one indivisible global health emergency
|
[
"Abbasi, Kamran",
"Ali, Parveen",
"Barbour, Virginia",
"Benfield, Thomas",
"Bibbins-Domingo, Kirsten",
"Hancocks, Stephen",
"Horton, Richard",
"Laybourn-Langton, Laurie",
"Mash, Robert",
"Sahni, Peush",
"Mohammad Sharief, Wadeia",
"Yonga, Paul",
"Zielinski, Chris"
] | null |
Over 200 health journals call on the United Nations, political leaders, and health professionals to recognise that climate change and biodiversity loss are one indivisible crisis and must be tackled together to preserve health and avoid catastrophe. This overall environmental crisis is now so severe as to be a global health emergency. The world is currently responding to the climate crisis and the nature crisis as if they were separate challenges. This is a dangerous mistake. The 28th Conference of the Parties (COP) on climate change is about to be held in Dubai while the 16th COP on biodiversity is due to be held in Turkey in 2024. The research communities that provide the evidence for the two COPs are unfortunately largely separate, but they were brought together for a workshop in 2020 when they concluded that: “Only by considering climate and biodiversity as parts of the same complex problem…can solutions be developed that avoid maladaptation and maximize the beneficial outcomes.” 1 As the health world has recognised with the development of the concept of planetary health, the natural world is made up of one overall interdependent system. Damage to one subsystem can create feedback that damages another—for example, drought, wildfires, floods and the other effects of rising global temperatures destroy plant life, and lead to soil erosion and so inhibit carbon storage, which means more global warming. Climate change is set to overtake deforestation and other land-use change as the primary driver of nature loss. 2 3 Nature has a remarkable power to restore. For example, deforested land can revert to forest through natural regeneration, and marine phytoplankton, which act as natural carbon stores, turn over one billion tonnes of photosynthesising biomass every eight days. Indigenous land and sea management has a particularly important role to play in regeneration and continuing care. 
4 5 Restoring one subsystem can help another—for example, replenishing soil could help remove greenhouse gases from the atmosphere on a vast scale. But actions that may benefit one subsystem can harm another—for example, planting forests with one type of tree can remove carbon dioxide from the air but can damage the biodiversity that is fundamental to healthy ecosystems. 6 7 The impacts on health Human health is damaged directly by both the climate crisis, as the journals have described in previous editorials, and by the nature crisis. 8,9 This indivisible planetary crisis will have major effects on health as a result of the disruption of social and economic systems—shortages of land, shelter, food, and water, exacerbating poverty, which in turn will lead to mass migration and conflict. Rising temperatures, extreme weather events, air pollution, and the spread of infectious diseases are some of the major health threats exacerbated by climate change. 10 “Without nature, we have nothing,” was UN Secretary-General António Guterres's blunt summary at the biodiversity COP in Montreal last year. 11 Even if we could keep global warming below an increase of 1.5 °C over pre-industrial levels, 12 we could still cause catastrophic harm to health by destroying nature. Access to clean water is fundamental to human health, and yet pollution has damaged water quality, causing a rise in water-borne diseases. Contamination of water on land can also have far-reaching effects on distant ecosystems when that water runs off into the ocean. 13 Good nutrition is underpinned by diversity in the variety of foods, but there has been a striking loss of genetic diversity in the food system. Globally, about a fifth of people rely on wild species for food and their livelihoods. 14 Declines in wildlife are a major challenge for these populations, particularly in low- and middle-income countries. 
Fish provide more than half of dietary protein in many African, South Asian and small island nations, but ocean acidification has reduced the quality and quantity of seafood. 15 16 Changes in land use have forced tens of thousands of species into closer contact, increasing the exchange of pathogens and the emergence of new diseases and pandemics. People losing contact with the natural environment and the decline in biodiversity have both been linked to increases in noncommunicable, autoimmune, and inflammatory diseases and metabolic, allergic and neuropsychiatric disorders. 17 For Indigenous people, caring for and connecting with nature is especially important for their health. 10,18 Nature has also been an important source of medicines, and thus reduced diversity also constrains the discovery of new medicines. 19 Communities are healthier if they have access to high-quality green spaces that help filter air pollution, reduce air and ground temperatures, and provide opportunities for physical activity. Connection with nature reduces stress, loneliness and depression while promoting social interaction. 20 These benefits are threatened by the continuing rise in urbanisation. 21 22 Finally, the health impacts of climate change and biodiversity loss will be experienced unequally between and within countries, with the most vulnerable communities often bearing the highest burden. Linked to this, inequality is also arguably fuelling these environmental crises. Environmental challenges and social/health inequities share drivers, and there are potential co-benefits of addressing them. 10 A global health emergency In December 2022 the biodiversity COP agreed on the effective conservation and management of at least 30% of the world's land, coastal areas, and oceans by 2030. Industrialised countries agreed to mobilise $30 billion per year to support developing nations to do so. 23 These agreements echo promises made at climate COPs. 
23 Yet many commitments made at COPs have not been met. This has allowed ecosystems to be pushed further to the brink, greatly increasing the risk of arriving at ‘tipping points’, abrupt breakdowns in the functioning of nature. If these events were to occur, the impacts on health would be globally catastrophic. 2,24 This risk, combined with the severe impacts on health already occurring, means that the World Health Organization should declare the indivisible climate and nature crisis as a global health emergency. The three pre-conditions for WHO to declare a situation to be a Public Health Emergency of International Concern are that it: 1) is serious, sudden, unusual or unexpected; 2) carries implications for public health beyond the affected State's national border; and 3) may require immediate international action. Climate change would appear to fulfil all of those conditions. While the accelerating climate change and loss of biodiversity are not sudden or unexpected, they are certainly serious and unusual. Hence we call for WHO to make this declaration before or at the Seventy-seventh World Health Assembly in May 2024. 25 Tackling this emergency requires the COP processes to be harmonised. As a first step, the respective conventions must push for better integration of national climate plans with biodiversity equivalents. As the 2020 workshop that brought climate and nature scientists together concluded, “Critical leverage points include exploring alternative visions of good quality of life, rethinking consumption and waste, shifting values related to the human-nature relationship, reducing inequalities, and promoting education and learning.” 3 All of these would benefit health. 1 Health professionals must be powerful advocates for both restoring biodiversity and tackling climate change for the good of health. 
Political leaders must recognise both the severe threats to health from the planetary crisis as well as the benefits that can flow to health from tackling the crisis. But first, we must recognise this crisis for what it is: a global health emergency. 26 This Comment is being published simultaneously in multiple journals. For the full list of journals see: https://www.bmj.com/content/full-list-authors-and-signatories-climate-nature-emergency-editorial-october-2023
|
[
"OTTOPORTNER",
"RIPPLE",
"FALKOWSKI",
"DAWSON",
"BOSSIO",
"LEVIA",
"ATWOLI",
"ATWOLI",
"MAGNANOSANLIO",
"JELSKOV",
"COMEROSRAYNAL",
"FALKENBERG",
"DUNNE",
"ALTVES",
"SCHULTZ",
"MACGUIRE",
"WONG",
"SIMKIN",
"ARMSTRONGMCKAY"
] |
83e258f8f982461c93f83adddd52d409_Comparison of four-year toxicities and local control of ultra-hypofractionated vs moderate-hypofract_10.1016_j.ctro.2023.100593.xml
|
Comparison of four-year toxicities and local control of ultra-hypofractionated vs moderate-hypofractionated image guided prostate radiation with HDR brachytherapy boost: A phase I-II single institution trial
|
[
"Beaudry, M.M.",
"Carignan, D.",
"Foster, W.",
"Lavallee, M.C.",
"Aubin, S.",
"Lacroix, F.",
"Poulin, E.",
"Lachance, B.",
"Després, P.",
"Beaulieu, L.",
"Vigneault, E.",
"Martin, A.G."
] |
Purpose/Objective(s)
To analyze the long term efficacy and safety of an ultra-hypofractionated (UHF) radiation therapy prostate treatment regimen with HDR brachytherapy boost (BB) and compare it to moderate-hypofractionated regimens (MHF).
Materials/Methods
In this single arm, prospective monocentric study, 28 patients with intermediate risk prostate cancer were recruited in an experimental treatment arm of 25 Gy in 5 fractions plus a 15 Gy HDR BB. They were then compared to two historical control groups, treated with either 36 Gy in 12 fractions or 37.5 Gy in 15 fractions with a similar HDR BB. The control groups included 151 and 311 patients respectively. Patient outcomes were reported using the International Prostate Symptom Score (IPSS) and Expanded Prostate Index Composite (EPIC-26) questionnaires at baseline and at each follow-up visit.
Results
Median follow-up was 48.5 months for the experimental arm, compared with 47 months for the 36/12 group and 60 months for the 37.5/15 group. The IPSS and EPIC scores did not demonstrate any significant differences in the gastrointestinal or genitourinary domains between the three groups over time. No biochemical recurrence occurred in the UHF arm as defined by the Phoenix criterion.
Conclusion
The UHF treatment scheme with HDR BB seems equivalent to standard treatment arms in terms of toxicities and local control. Randomized control trials with larger cohorts are ongoing and needed to further confirm our findings.
|
Introduction Every day in Canada, an average of 63 men receive a diagnosis of prostate cancer while about 11 men die of the disease [1] . External Beam Radiotherapy (EBRT) and radical prostatectomy (RP) are two accepted treatment modalities for newly diagnosed prostate cancer with no significant difference in prostate-specific mortality at long term follow-up in retrospective or observational studies [2] . For patients with intermediate or high risk prostate cancer choosing EBRT with or without androgen deprivation therapy, adding a brachytherapy boost achieves a better PFS than EBRT alone [3,4] and it should be offered to eligible patients according to ASCO-CCO guidelines [5] . Dose escalation in prostatic cancer showed a reduction in biochemical failure and an improvement in metastasis-free survival [6,7] . Interest in that field has been increasing constantly, and recent randomized controlled trials show promising results for hypofractionated radiotherapy (HFRT) compared to conventionally fractionated radiotherapy (CFRT) in reducing the number of fractions and still maintaining the same efficacy and safety [8,9,10] . Evidence now supports the use of ultrahypofractionated (UHF) EBRT regimens, also known as stereotactic body radiation therapy (SBRT), in intermediate and high risk prostate cancer [11–13] . This treatment scheme, which implies 4–6 treatments with a dose of 5–9 Gy per fraction, would therefore suit the alpha/beta ratio of the prostate and be even more convenient in terms of treatment duration. However, few studies to date have compared UHF with a brachytherapy boost to HFRT or CFRT with a brachytherapy boost. We hypothesized that UHF with HDR BB may reduce the socioeconomic burden [14] on patients while maintaining biochemical control and comparable toxicities to standard treatment. In the following article, we present and compare our 4-year follow-up results and outcomes to the accepted treatment standard. 
Materials and methods Study design and participants We conducted a prospective, single arm, monocentric phase I-II study at our center in Quebec City, Canada. Our project was approved by the CHU de Québec - Université Laval ethical committee. Patients with biopsy-proven prostate adenocarcinoma classified as NCCN’s intermediate risk were recruited if they were clinical stage (T1c-T2), had a prostate-specific antigen PSA score of < 20 ng/ml and Gleason score of 6 or 7. Patients were excluded if they had a history of previous pelvic radiotherapy, active collagenosis, inflammatory disease or bilateral hip replacement. Data was later compared to two control groups, both treated with a standard moderate hypofractionation regimen with the same HDR BB at our center in an overlapping time period between 2010 and 2017. They were included for the present analysis if they met the same inclusion criteria as cited above. All participants provided written informed consent. Procedures Men in the experimental arm of 25 Gy in 5 fractions received a 5 Gy daily fraction starting in mid-week and given on a 7-day period followed by a HDR BB of 15 Gy in a single fraction. The MHF groups were comprised of men who received either 36 Gy in 12 fractions with a 3 Gy daily fraction or 37.5 Gy in 15 fractions with a 2.5 Gy daily fraction, all followed by the same HDR BB of 15 Gy in a single fraction. Biological doses in the UHF regimen were calculated to be equivalent to the standard treatment schedule assuming an alpha/beta ratio of 1.5 (see Supplementary Table 5 ). Short term androgen deprivation therapy (STADT), from 4 to 6 months, was administered per physician’s preference if Gleason score was 7 (4 + 3) or if there was presence of more locally extensive disease (>50 % positive biopsies) corresponding to RTOG protocol [15] . IGRT technique using fiducial gold markers was required for all groups for daily match on prostate and first proximal cm of seminal vesicles. 
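The dose-equivalence claim above (UHF doses computed to match the standard schedules assuming an alpha/beta ratio of 1.5) follows the standard linear-quadratic biologically effective dose formula, BED = n·d·(1 + d/(α/β)). The helper below is a sketch of that textbook calculation, not code from the trial:

```python
def bed(n_fractions, dose_per_fraction, alpha_beta=1.5):
    """Biologically effective dose (Gy) under the linear-quadratic model."""
    total_dose = n_fractions * dose_per_fraction
    return total_dose * (1 + dose_per_fraction / alpha_beta)

# EBRT components of the three regimens (the 15 Gy x 1 HDR boost is common to all arms).
uhf = bed(5, 5.0)      # 25 Gy in 5 fractions
mhf_12 = bed(12, 3.0)  # 36 Gy in 12 fractions
mhf_15 = bed(15, 2.5)  # 37.5 Gy in 15 fractions
boost = bed(1, 15.0)   # single 15 Gy HDR brachytherapy fraction
```

With α/β = 1.5 Gy, the 25/5 and 36/12 EBRT components come out nearly identical (about 108.3 Gy vs 108.0 Gy BED), with 37.5/15 at 100 Gy, which is consistent with the equivalence stated in the protocol.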
Intensity-modulated radiation techniques (IMRT) with volumetric-modulated arc therapy (VMAT) and inverse planning were used for all treatment groups. Dose constraints in the experimental arm were followed for organs at risk such as the bladder and rectum (see Supplementary Table 4 ). Minor deviations in the prescribed doses were permitted to meet those constraints. Energy used for EBRT was 6 MV. The clinical target volume (CTV) consisted of the prostate plus the first proximal cm of the seminal vesicles as identified on the planning CT scan at the time of treatment planning. The planning target volume was obtained by a 3D expansion of 5 mm of the previously described CTV. Pelvic lymph nodes were not included. The HDR brachytherapy procedure has already been described before [16] . Under general anesthesia, 14 to 21 interstitial catheters were placed into the prostate gland through the perineum via ultrasound guidance. Dosimetric optimization was done using ultrasonographic-based planning (Oncentra Prostate v.4.2.2 brachytherapy software), allowing contouring of the prostate and organs at risk. The prescribed dose was 15 Gy. Details on dosimetric goals and constraints are provided in the supplementary appendix. Cystoscopy was performed to ensure the bladder and urethra integrity. All EBRT plans were reviewed in a weekly quality assurance meeting with other radiation oncologists at our center. Target volumes, isodoses, organ doses constraints and DVH were validated by colleagues for compliance with protocol guidelines. A kilovoltage (KV) imaging marker match was performed daily and cone beam CT (CBCT) scans were acquired at each fraction in the experimental arm (weekly for the reference arms). Follow-up and outcomes Follow-up visits and PSA testing were scheduled six weeks after the implant and every 4 months for the first year, then every 6 months for years 2 to 5 and yearly thereafter. 
Biochemical recurrence was defined by the Phoenix criterion [17] as nadir plus 2.0 ng/ml. Patient-reported outcomes included the International Prostate Symptom Score (IPSS) and the GU-GI-Sexual toxicity and QOL questionnaires, all validated in prostate cancer patients. The EPIC-26 questionnaires [18] were given at baseline and at 12-, 24-, 36- and 48-month follow-up in the UHF arm (vs at baseline, 24 and 36 months for MHF2.5). Main toxicities were reported by the treating physician according to the CTCAEv4 scale. Patients’ files were reviewed for specific survival and causes of death. Statistical analysis Thirty patients were planned to be recruited for this feasibility study but only 28 patients were eligible for data analysis. For the comparison of our QOL and toxicities endpoints, we used linear mixed model analysis of the mean IPSS and EPIC-26 domain scores over time. Toxicities were evaluated by the CTCAE v4 and compared in terms of events and grades between groups. Differences between numeric variables were tested by ANOVA or a non-parametric Kruskal-Wallis test. An independent samples median test was used to evaluate differences in follow-up across cohorts. BRFS was evaluated by means of the Kaplan-Meier estimate with a log-rank test to compare treatment groups. The definition of PSA ≤ 0.2 ng/ml at 4 years was used to compare biochemical control between groups, with logistic regression to control for predefined factors. Analyses were performed in a per-protocol manner by a specialized statistician using SPSS v27 and R v4.0 software. Results Demographic characteristics The experimental cohort (28 patients) for the UHF treatment regimen was enrolled between July 2015 and November 2016. The patient baseline characteristics are presented in Table 1 . Data are compared to two control groups treated with standard regimens at our center. 
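Two of the definitions above — the Phoenix biochemical-failure criterion (nadir plus 2.0 ng/ml) and the Kaplan-Meier estimate used for BRFS — can be illustrated with a minimal sketch. Function names, data layout, and the events-before-censoring tie convention are illustrative assumptions; the trial's actual analyses were run in SPSS and R.

```python
def phoenix_failure(psa_series):
    """Index of the first PSA value >= (nadir to date + 2.0 ng/ml), or None."""
    nadir = float("inf")
    for i, psa in enumerate(psa_series):
        if psa >= nadir + 2.0:
            return i
        nadir = min(nadir, psa)
    return None

def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times:  follow-up time for each patient
    events: 1 if biochemical recurrence occurred at that time, 0 if censored
    Returns a list of (event_time, survival_probability) steps.
    """
    at_risk, surv, steps = len(times), 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if d:
            surv *= 1 - d / at_risk
            steps.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)
    return steps
```

For example, phoenix_failure([5.0, 1.0, 0.8, 1.5, 3.1]) flags the last reading, since 3.1 exceeds the 0.8 nadir by more than 2.0 ng/ml.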
The first comparative group comprised 311 men, all treated between June 2010 and November 2017 with a MHF regimen of 37.5 Gy in 15 fractions associated with a 15 Gy HDR BB. The second control group gathered 151 patients treated with a MHF scheme of 36 Gy in 12 fractions associated with a similar HDR BB between April 2013 and April 2015. Demographic characteristics were similar between groups, with median age in the three cohorts at 67–69 years with a prostate Gleason score of 7 and a clinical stage T2a. More patients received androgen deprivation therapy in the 36/12 group (43 %) compared to the 25/5 group (36 %) and to the 37.5/15 group (31.5 %), p = 0.0056 (Fig. 1). Follow-up At the time of analysis, median follow-up was 60 months for the 37.5 Gy group, 47 months for the 36 Gy group and 48.5 months for the 25 Gy group. Follow-up was significantly longer in the 37.5 Gy group compared to the experimental group of 25 Gy, p = 0.001. Adverse events IPSS scores reported to 48 months showed no significant difference between groups. At baseline, the average IPSS scores were 6.7, 7.6 and 8.3 and dropped to 5.4, 6.6 and 7.5 at 48 months for the 25 Gy, 36 Gy and 37.5 Gy regimens respectively. A tendency towards a greater reduction was seen in the experimental arm of 25 Gy in 5 fractions, although non-significant. Fig. 2 shows the average EPIC scores over time for the experimental arm and the MHF arm of 36 Gy in 12 fractions. In both groups, scores at 48 months were similar to baseline for urinary incontinence, urinary irritative or obstructive symptoms, bowel and hormonal domains. Regarding sexual function, we observed a greater fall in sexual function at six months in the experimental arm followed by a partial recovery after one year, as is described in other brachytherapy series [19,20] . 
The difference was however statistically significant when compared to the 36 Gy in 12 fractions group (average score 33.3 (CI 25.8–40.8) vs 44.22 (CI 3.0–38.22), p = 0.03) at the 7–12 months interval, p = 0.005, and the 19–24 months interval, p = 0.049. Within the three months following brachytherapy, all patients were prescribed an alpha blocker for prevention of acute urinary symptoms. There was no significant difference between groups for early grade 3 toxicities according to the CTCAE v4, p = 0.084. Only one patient in the experimental arm reported an acute grade 3 toxicity and was hospitalized for the treatment of an acute pyelonephritis one month following the intervention and after doing self-catheterization. Two patients in the 37.5/15 group presented a grade 3 macroscopic hematuria requiring hospitalization or an invasive intervention following the implant. No patient presented a grade 3 acute toxicity in the 36/12 group. Regarding specific late toxicities according to the CTCAE v4, 1.9 % grade 3 genitourinary toxicities were observed in the 37.5/15 cohort compared to 1.3 % in the 36/12 cohort and 0 % in the 25/5 cohort, p = 0.696. Grade 2 toxicities were similar between groups, while no grade 4 or grade 5 toxicity was reported for any patient ( Table 2 ). Outcomes After 48 months of follow-up, 26 biochemical recurrences had occurred in the 37.5 Gy group, compared to 7 events and 0 events in the 36 Gy group and 25 Gy group respectively. Estimated biochemical recurrence-free survival at 4 years was 91 % (standard error 0.02) for the 37.5 Gy arm, 95 % (0.02) for the 36 Gy arm and 100 % (0.00) for the 25 Gy arm. The percentage of patients who reached a PSA nadir < 0.4 ng/ml was 87.6 % in the 37.5 Gy group compared to 92.3 % for the 36 Gy group and 92.1 % for the 25 Gy group. The number of patients who reached a PSA < 0.2 ng/ml at 4 years was 74 % in the experimental cohort compared to 78 % in the 36/12 group and 71 % in the 37.5/15 group, p = 0.40. 
There was no significant association identified for PSA < 0.2 ng/ml at 4 years according to ISUP score or ADT use, p = 0.64. Discussion Large scale randomized trials and a recent meta-analysis have demonstrated that ultrahypofractionation in prostate cancer is at least as safe and effective as conventional fractionation [10,21,12,22] . However, none of those trials combined ultrahypofractionation with a brachytherapy boost. HDR brachytherapy boost has several advantages compared to SBRT alone, as it prevents geographical miss, it is cost-effective, and it shows better local control for patients with intermediate risk prostate cancer [23–26] . So far, at least three small prospective clinical trials have combined both modalities for intermediate risk prostate cancer [27–29] . A few months ago, Den RB et al. [27] reported high biochemical control rates and low toxicities in a very similar EBRT + BB treatment scheme of 5 fractions SBRT + 15 Gy BB in a phase IB trial. The results of their UHF treatment scheme of 5 Gy daily fractions were however combined with other MHF treatment regimens in the analysis. While we have a longer follow-up in our cohort, our results are much alike. Two other trials, which included high risk patients as opposed to ours, have also published very good outcomes with the same combination as ours [28,29] . Gorovets et al. reported on 101 patients with intermediate to high-risk prostate cancer treated with a HDR brachytherapy of 15 Gy × 1 fraction followed by SBRT treatment of 5 Gy × 5 fractions. After a follow-up of 24.1 months, no early or late grade 3 toxicities were observed. The 2-year biochemical relapse free survival was 97 %. As for Musunuru et al., they presented results on efficacy, quality of life and toxicity for 31 patients who received HDR-BT of 15 Gy × 1 fraction to the prostate and up to 22.5 Gy to the MRI nodule, followed by 25 Gy in 5 weekly fractions to the pelvis. Median follow-up was 61 months. 
Acute and late grade 3 toxicities were respectively 7 % and 3 % and were all genitourinary. The 5-year biochemical-failure rate was 18.2 % and all failures occurred in the high risk group patients. In our study, IPSS scores and EPIC scores of the 28 patients in our experimental arm demonstrated acceptable sexual and genitourinary toxicity over time. We also observed low rates of grade 3 acute and late toxicities. Those results are reassuring compared to ASCENDE-RT [30] . We did not treat the pelvic nodes in our study, but nevertheless for this risk group the biochemical control is good. Our trial has several limitations. Our small cohort of patients was neither powered nor designed for calculations on biochemical recurrence-free survival or overall survival, but while our data shows promising results, a larger cohort of patients will be needed to further confirm our findings. This constitutes the goal of our next trial, and the recruitment of a larger cohort of 205 patients has started to demonstrate the non-inferiority of the UHF regimen. Another limitation of our study is the relatively short follow-up for the UHF arm (48 months). While a PSA ≤ 0.2 ng/ml at 4 years after brachytherapy can define cure according to J Crook’s biochemical definition [31] , a longer follow-up is often necessary to evaluate late toxicities. The strength of our study lies in the comparison of our three cohorts of patients, with all the patients treated in the same manner and conditions at our center. Groups were homogenous for intermediate risk prostate cancer patients. The use of the IPSS and EPIC-26 questionnaires, which are standardized, allows us to compare our results externally. In conclusion, the results of our trial demonstrate the feasibility of UHF radiotherapy with a combined brachytherapy boost for intermediate risk prostate cancer. Such a treatment scheme reduces treatment time significantly and is more convenient for patients. 
A further trial with a larger cohort is needed to confirm our findings. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.ctro.2023.100593 .
|
[
"HAMDY",
"SMITH",
"POLLACK",
"VIANI",
"DEARNALEY",
"CATTON",
"LEE",
"MORGAN",
"WIDMARK",
"BRAND",
"BACHAND",
"ROACH",
"VIGNEAULT",
"CONTRERAS",
"MORTON",
"DEVRIES",
"LEHRER",
"SATHYA",
"DAYES",
"MORRIS",
"HOSKIN",
"DEN",
"GOROVETS",
"MUSUNURU",
"CROOK"
] |
9787834ed114475eb3677c646cfa3616_Prophylactic antenatal corticosteroids for fetal lung maturity Known unknowns and unknown unknowns_10.1016_j.crwh.2020.e00242.xml
|
Prophylactic antenatal corticosteroids for fetal lung maturity: Known unknowns and unknown unknowns
|
[
"Parnell, Laura",
"Ayuk, Paul"
] | null |
The use of prophylactic antenatal corticosteroids (ACS) was arguably one of the most important advances in obstetric care to be made during the second half of the 20th century, with clear benefits for babies born before 34 + 6 weeks of gestation [1]. The 21st century has seen progressive expansion of the criteria for ACS use to include women at risk of late pre-term (35 to 36 + 6 weeks of gestation) birth [2] and women having early term (37 to 38 + 6 weeks of gestation) elective caesarean sections [3], although this is not universal. Women having a planned induction of labour at 35 to 38 + 6 weeks of gestation are, however, not generally considered for ACS. Questions remain about the balance between the risks and benefits of ACS after 34 + 6 weeks of gestation as there are no data on long-term outcomes. There are also increasing concerns about the long-term outcomes for babies exposed to ACS before 34 + 6 weeks of gestation but subsequently delivered at term [4,5]. This more recent evidence reveals new unknowns about ACS, only some of which are currently being actively investigated. The benefits of ACS in babies born before 34 + 6 weeks of gestation are profound and include a reduction in rates of perinatal and neonatal mortality, respiratory distress syndrome (RDS), intraventricular haemorrhage, necrotizing enterocolitis and systemic infections in the first 48 h of life [1]. However, the majority of women given ACS do not deliver within the optimal window of between 24 h and 7 days of administration [6], and a large proportion deliver at term, where no benefits are anticipated. Combined with the current low threshold for ACS administration, this means that a large number of babies exposed to ACS before 34 + 6 weeks of gestation may not actually benefit from the intervention. There is increasing concern about the potential for harm in such babies [4,5].
A Finnish observational register-based study found that ACS exposure was associated with a reduction in birth weight, birth length and head circumference in babies subsequently born at pre-term, early-term and term gestations [4]. These findings on in-utero growth are consistent with reports from animal studies. In a Canadian population-based study, Melamed et al. [5] found an association between exposure to ACS during pregnancy and healthcare utilisation during childhood related to suspected neurocognitive and neurosensory disorders in babies born at term. While these data are concerning, they should not deter clinicians from offering ACS to women at increased risk of giving birth before 34 + 6 weeks of gestation. However, there is an urgent need for risk-assessment strategies to enable better targeting of ACS. Avenues to explore include digital tools supported by machine learning or artificial intelligence. The use of ACS after 34 + 6 weeks of gestation is not universal. There is high-quality evidence that ACS between 34 + 0 and 36 + 6 weeks of gestation reduce the incidence of RDS, transient tachypnoea of the newborn (TTN) and surfactant use [2]. However, the risk of neonatal hypoglycaemia is also increased [2], and the long-term consequences of this are unknown. In women having a planned early term caesarean section (at 37 + 0 to 38 + 6 weeks of gestation), ACS reduce the risk of RDS, TTN and admissions to neonatal units, and also reduce the length of stay on neonatal units. However, there are no data on long-term outcomes following the administration of ACS after 34 + 6 weeks of gestation. Based on data from animal studies and recent observational data from population studies [4,5], there is some concern about neurodevelopmental, cardiovascular and metabolic outcomes.
Given the large number of babies born at late pre-term gestations (35 to 36 + 6 weeks) and by early (37 + 0 to 38 + 6 weeks) caesarean section, any long-term consequences of ACS are likely to impact a large number of individuals and families, with implications for health, educational and social services and the wider economy. Research is needed to identify any long-term benefits and risks. Currently, clinicians and parents have to balance the known short-term benefits and risks with unknown but potential long-term risks and benefits. It is therefore not surprising that there are marked variations in care within maternity units, across maternity units, and across nations with respect to ACS use after 34 + 6 weeks of gestation. There are no data on the short- and long-term benefits of ACS prior to induction of labour at 35 to 38 + 6 weeks of gestation, again resulting in variations in clinical care. This is of particular importance as the rate of induction of labour at these gestational ages has risen sharply over the last decade. ACS are a highly effective intervention in women at increased risk of giving birth before 34 + 6 weeks of gestation and their use should be encouraged. However, tools should be developed to facilitate better targeting. Maternity care providers should monitor and report ACS use, including the number of babies born before 34 + 6 weeks of gestation without ACS exposure, and the number of exposed babies born between 24 h and 7 days of administration, after 7 days, and after 37 + 0 weeks of gestation. Monitoring should continue until it is clear that there are no long-term adverse effects in babies born after 37 + 0 weeks of gestation. With respect to the use of ACS after 34 + 6 weeks of gestation, parents should be given individualized information on the neonatal risks associated with early birth and the known benefits and risks of ACS, informed that there are no data on long-term risks and benefits, and supported to make an informed choice.
Given the known short-term risks and benefits of ACS in women at increased risk of giving birth at 35 to 36 + 6 weeks of gestation and those having early caesarean section, including such women in clinical trials in order to obtain long-term outcomes may not be justifiable. However, there are no data on the use of ACS prior to induction of labour after 34 + 6 weeks of gestation. This is an ideal population for a clinical trial of ACS with a focus on both short- and long-term outcomes. Contributors The two authors contributed equally to the manuscript. Conflict of Interest The authors declare that they have no conflict of interest. Funding The authors received no funding from an external source in relation to this editorial. Provenance and Peer Review This editorial was commissioned and not externally peer reviewed.
|
[
"ROBERTS",
"SACCONE",
"SOTIRIADIS",
"RODRIGUEZ",
"MELAMED",
"ROTTENSTREICH"
] |
d709f0d61a4640bf88cdb86adeb41869_Is it Really Possible to See the Great Wall of China from Space with a Naked Eye_10.3921_joptom.2008.3.xml
|
Is it Really Possible to See the Great Wall of China from Space with a Naked Eye?
|
[
"López-Gil, Norberto"
] | null |
Dear Editor: In October 2003, after the first Chinese astronaut Yang Liwei returned from his first journey into Space, a popular belief was apparently called into question when he stated that he had not been able to see the Great Wall of China. Liwei's observation contradicted the information previously presented in several books, board games and various television contests, to name a few examples. After Liwei's declarations, the Chinese government asked for his statement to be removed from various reports. The problem arose a few months later when the American astronaut Eugene Cernan stated at a conference that, according to a news release from the European Space Agency (ESA) issued on 11 May, the Great Wall is visible to the naked eye from an orbit between 160 and 320 km. Various international newspapers rushed to explain that Cernan attributed his colleague Liwei's error to bad atmospheric and/or lighting conditions at the moment of his observation. In an attempt to further clarify things, the ESA published together with Cernan's declarations a picture of a part of the "Great Wall" photographed from Space. In this picture the wall looked like a route full of bends that resembled river meanders. One week later, when everything seemed perfectly clear and the myth had been finally reborn, another communication from the ESA dated 19 May 2004 (no longer available on the ESA's website) acknowledged that the "Great Wall" in the picture was actually a river! The ESA had been warned of its error by Mr. Albert Kisskoy, Prof. Gary Li of the University of the State of California and Dr. Zhimin Man from Fudan University of Shanghai. After this little uproar it is still unclear for some people whether the myth is true or not. In order to answer this question, it is not necessary to go into Space and look: it suffices to know a little about the human visual system and its limits.
Not even the best of human eyes could see the Great Wall of China from Space at a simple glance. The impossibility is due to the limitation of the human eye when it comes to seeing small diffusing objects. The relevant parameter is not the Wall's length (about 7300 km), but its width, which is usually less than 6 m. See Figure 1. To illustrate this with a simple example, looking at the Great Wall from a distance of 160 km would be the same as looking at a 2 cm diameter cable from more than half a kilometre away! No matter how good the atmospheric conditions, lighting and contrast are, unless the object was self-illuminated or it reflected the sun as a small mirror, it would be totally impossible to see this cable (or, for similar reasons, the Great Wall) at a simple glance, because the eye would need a visual acuity greater than approximately 20/3, which is 7.7 times the normal visual acuity, and more than three times the maximum acuity reached by a falcon 1 , an eagle 2 , or a human eye 3 . Even an optically perfect human eye 4 would not be able to see the monument, for two reasons. First, the sampling due to the finite cone spacing in the central fovea 5-7 imposes a limit on visual acuity of 2.3 (about 20/9). In this case, a perfect image of the Great Wall would be about one third the size of a single cone, excluding pupil diffraction effects. Second, pupil diffraction effects also limit the human visual acuity to 5 (20/4) 5-7 (for a 6 mm pupil and a 555 nm wavelength). In other words, the edges of the Wall have a spatial frequency that is about two and a half times higher than the cut-off frequency (189 c/deg) of a perfect human eye with a 6 mm pupil. Nevertheless, according to Westheimer's experiments 6-8 , the minimum angle subtended by a line for it to be seen from a distance is approximately only 2 seconds of arc. Such an angle is smaller than the one subtended by the Great Wall when observed from Space.
Westheimer's results are based on the detection of a black line against a bright background; in this scenario, the black line causes a local dip in the luminance of the image, which makes it possible for the eye to detect it. Such a great local change in luminance also makes possible the detection of stars at night (if bright enough), as does the reflection of the sun in a small distant mirror (as used in a boat to indicate its position). Therefore, in principle, if the Great Wall reflected the sunlight like a long mirror or were self-illuminated with high-power lamps, it could probably be seen from Space. However, in this hypothetical case, the astronaut would not be seeing the Wall but either the lamps or the sunlight reflection. Moreover, natural sun reflection would be very unlikely due to the type of material it was built with (limestone, clay, granite and brick). 9 Obviously, it would be even less likely to see the Great Wall from the moon, situated at a minimum distance of 350,000 km, because the visual acuity would have to be 17,000 times (!) better than that of the normal human eye (in this case it would amount to seeing the cable from a distance of more than 1000 km). In this sense, if the question were: "Could we see the Great Wall of China at a simple glance from Space?", the answer would also have to be "no", because an astronaut located at the limit of the atmosphere, about 80 km (50 miles) away, would need a visual acuity of approximately 3.9 (about 20/5) to be able to see it. As a simple exercise, Google Earth © can be used to see the Wall at lat. = 40.48234, lon. = 116.180592 if one is close enough to the ground. However, once you are more than 40 miles away, it cannot be seen. This simple experiment does not really answer the question, since the visualization of the Wall depends not only on our vision, but also on the satellite image resolution, our computer screen, etc.
Despite this, it can be observed that, at a height of 40 miles, the Wall is not visible but the landing runway of Yongning Airport, located about 4 miles WNW of the Wall, is. Moreover, if the Great Wall were visible from Space, then, contrary to common claims, it would not be the only visible manmade object, since astronauts would also enjoy the view of the Pyramids of Egypt, the Golden Gate Bridge, the Eiffel Tower, and probably their own house, provided it is more than 6 m wide and long. For some unknown reason (perhaps marketing-related) this belief is one of the "unscientific walls" that has become popular, imposing a false limit on our vision of the world.
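The angular arguments in the letter can be checked numerically. The sketch below uses the figures quoted in the text (wall width ~6 m, viewing distances of 160 km and 80 km, a 6 mm pupil, 555 nm light); taking a 20/20 eye to resolve about 1 arcmin is a standard textbook assumption:

```python
import math

ARCSEC_PER_RAD = math.degrees(1) * 3600  # ~206265 arcsec per radian

def required_acuity(width_m, distance_m):
    """Decimal visual acuity needed to resolve an object of the given
    width at the given distance, assuming a 20/20 eye resolves 1 arcmin."""
    angle_arcmin = (width_m / distance_m) * ARCSEC_PER_RAD / 60
    return 1.0 / angle_arcmin

# Wall width ~6 m, seen from low orbit (160 km) and from ~80 km altitude.
print(round(required_acuity(6, 160e3), 1))  # -> 7.8 (the letter quotes ~7.7x normal)
print(round(required_acuity(6, 80e3), 1))   # -> 3.9, i.e. about 20/5

# Diffraction cut-off of a perfect eye: pupil diameter / wavelength,
# converted from cycles per radian to cycles per degree.
cutoff_c_per_deg = (6e-3 / 555e-9) * math.pi / 180
print(round(cutoff_c_per_deg))  # -> 189, matching the 189 c/deg in the text
```

The computed values reproduce the letter's numbers to within rounding, which is all a back-of-the-envelope argument of this kind requires.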
|
[
"OYSTER",
"FOX",
"REYMOND",
"CAMPBELL",
"HIRSCH",
"APPLEGATE",
"CHARMAN",
"ROORDA",
"WESTHEIMER"
] |
67ca3a401684452b89a49894114a57e6_Bioactive glasses and glassceramics for hyperthermia treatment of cancer state-of-art challenges and_10.1016_j.mtbio.2021.100100.xml
|
Bioactive glasses and glass–ceramics for hyperthermia treatment of cancer: state-of-art, challenges, and future perspectives
|
[
"Danewalia, S.S.",
"Singh, K."
] |
Bioactive glasses and glass-ceramics are well-proven potential biomaterials for bone-tissue engineering applications because of their compositional flexibility. Many research groups have focused on exploring the utility of bioactive glass–ceramics beyond bone engineering, in the hyperthermia treatment of cancer. Hyperthermia refers to raising the temperature of a tumor to close to 44°C, at which malignant cells perish with negligible harm to normal cells. Hyperthermia can be employed by many means, such as ultrasonic waves, electromagnetic waves, infrared radiation, alternating magnetic fields, etc. Magnetic bioactive glass–ceramics are advantageous over other potential candidates for thermoseeds, such as nanofluids and superparamagnetic nanoparticles, because they can bond not only to natural bone but also, in a few cases, to soft tissues, which helps regenerate the affected part owing to their bioactive nature. Strict restrictions on clinical settings (H × f < 5 × 10⁹) force research activities to focus on material characteristics that raise the implant temperature into the required range. Many efforts have been made in past years to tackle these challenges and design glass–ceramics best suited for hyperthermia treatment. This review aims to provide essential information on the concept of hyperthermia treatment of cancer and recent developments in the field of bioactive glass–ceramics for cancer treatment. The advantages and disadvantages of magnetic glass–ceramics over other potential thermoseed materials are highlighted. In this field, the major challenges are to develop magnetic glasses that show fast and bulk crystallization with optimized magnetic phases and lower Curie and Néel temperatures.
|
1 Introduction Cancer is a generic term used to represent a large group of diseases affecting the human body [1]. It is one of the deadly and fearsome diseases that cause a large number of deaths worldwide, in developed and developing countries alike [2]. Over 1.7 million new cancer cases were estimated in the USA in 2018, causing an estimated more than 0.6 million deaths [3]. Worldwide research efforts have shown extraordinary progress in understanding the complex nature of cancer. A decline of 26% was observed in the death rate from 1991 to 2015 because of improvements in early detection techniques as well as a reduction in various types of smoking [4]. This decline saved more than 2.3 million lives all over the world. Despite all these efforts, current medical practices to counter cancer are incomplete. There is no single mechanism to cure cancer; instead, a combination of various modalities must be involved for better results [5–8]. The most commonly used techniques for cancer treatment include radiotherapy (treating cancer cells with radiation), chemotherapy (treating cancer cells with chemicals/drugs), and hyperthermia (treating cancer cells with heat), along with other recently developed techniques, as given in Fig. 1. All these techniques have their own advantages and disadvantages [7,9]. Hyperthermia is one of the promising techniques that has shown great potential to destroy cancer cells via heat generation [10]. Many materials have been developed and tested to check their efficiency in curing cancer via hyperthermia. Among these materials, bioactive glass–ceramics have proved to be quite useful [11–14]. Besides their heat generation ability, suitably selected glass compositions are also able to exhibit a bioactive response toward natural bone, and even soft tissues in some cases, as a result of exchange reactions with physiological fluids [15,16].
This way, along with the elimination of the cancer cells, bioactive glass–ceramics may also help in regeneration of the affected bone parts. Glasses and glass–ceramics having transition metal (TM) oxides in their compositions have been widely studied for their magnetic and bioactive nature. A great deal of research has been carried out to address the challenges in designing better and more suitable materials that can act as thermoseeds for hyperthermia treatment of cancer. After decades of research on glasses and glass–ceramics, it is worthwhile to look at the overall perspective and outline the major findings and crucial points for effective future development of glasses and glass–ceramics for cancer treatment. Because the field is growing very fast and technology is advancing, comprehensive, state-of-the-art review articles on magnetic glass–ceramics for hyperthermia treatment of cancer are needed frequently, even though some good review articles exist on similar topics [13,17]. In a recent review article, Miola et al. reviewed the magnetic and structural features of sol-gel as well as melt-quenched glasses [17]. The present article reviews not only the glasses and glass–ceramics for magnetic induction hyperthermia (MIH) treatment of cancer but also their bioactivity and the influence of heat treatment and composition on both properties. This review encompasses a wide range of published literature on glasses and glass–ceramics targeting heat generation in alternating magnetic fields. The present review article starts with a formal introduction to hyperthermia, its variants, advantages, and disadvantages. The structural, magnetic, and bioactive characteristics of the magnetic glasses are then reviewed. Finally, some aspects are discussed from a materials science point of view that can be explored in the near future to best utilize the full potential of magnetic bioactive glass–ceramics for cancer treatment.
2 Concept of hyperthermia treatment of cancer The word hyperthermia originates from the Greek words hyper, i.e. raising, and therme, i.e. heat. Technically, the term hyperthermia refers to the elevation of the temperature of a part of the body above the normal body temperature, maintained for a specific time duration [18]. It involves heating the malignant cells to high temperatures (close to 43°C) by external or internal means, with minimum harm to the normal cells of the human body in their neighborhood. Within the temperature range 42–46°C, cell apoptosis takes place, while at even higher temperatures, i.e. around 48°C, cell necrosis occurs. Both mechanisms lead to cell death [19]. Cancer is the uncontrolled growth of cells, which spreads to adjoining body parts [20] and ends up being fatal for the patient if not treated in time. The word tumor is frequently used as a synonym for cancer cells, but it must be stressed that a tumor may or may not be cancerous. If a tumor remains intact at a certain part of the body, it is not cancerous. However, if it spreads to other body parts with time, it is definitely cancerous. The present article is concerned with cancerous tumor cells. Cancer cells need lots of nutrients to grow, which they take in by developing a large network of blood vessels. Usually, the blood vessels associated with cancer cells are large, which may create the misconception that the cancer blood vessel system is superior to that of normal cells. However, the blood vessel system in cancer cells is more like a one-way traffic system than the two-way system of normal cells. That means blood circulation (blood flow) through cancer cells is significantly less than in normal cells. A detailed discussion of the blood vessel system of tumor cells is reported by Nagy et al. [21].
The blood vessel system of cancer cells is insufficient to take away any heat provided during hyperthermia treatment. Therefore, cancer cells cannot withstand temperatures exceeding 41–42°C. By contrast, healthy cells, owing to their better blood vessel system, can survive even a few degrees above this temperature. Thus, controlling the temperature near the cancer cells at around 43°C is key to their successful elimination without greatly affecting the neighboring healthy cells [6]. Hyperthermia is usually employed in combination with other treatment therapies such as radiotherapy and chemotherapy. It is reported that the temperature elevation due to the hyperthermia process increases the sensitivity of the cells toward radiotherapy and chemotherapy [6]. This happens through shrinkage of the cells caused by damage to proteins and structures upon heating above a certain temperature [6,22]. For simplicity, the biology of these events is not discussed in detail in the present article; interested readers are referred to the available review article for a deeper understanding [23]. Hyperthermia has been practiced clinically for many years [24,25]. It is advantageous over conventional radiotherapy and chemotherapy in particular cases where solid tumors are the most difficult to eliminate [26]. Some tumors are both drug-resistant and radiation-resistant, and cannot be eliminated by chemotherapy and/or radiation therapy. In such cases, hyperthermia is more useful. Beyond increasing cytotoxicity to the tumor cells, hyperthermia also triggers certain anti-tumor immune responses that help prevent the growth of tumor cells [27]. Depending on the size/spreading of the tumor cells and their location within the body, there are commonly three clinical methods of hyperthermia treatment: • Local hyperthermia; • Regional hyperthermia; • Whole-body hyperthermia.
Local hyperthermia is meant for small tumors (up to 5–6 cm) [7]. Mostly, radio waves, microwaves and ultrasound waves are used to produce the required heat for this type of tumor. The method of treatment can be either invasive or non-invasive. For an invasive treatment, a specially designed probe is inserted inside the tumor, and the tip of this probe heats up the tumor. On the other hand, for non-invasive treatment, waves carrying high energy are focused on the tumor using machines outside the body. Regional hyperthermia is employed for relatively large tumors where a whole limb or organ needs the treatment. One of the variants of regional hyperthermia is perfusion hyperthermia, where blood from the targeted part of the body is pumped out, heated, and pumped back into the targeted part. While pumping the blood back, anti-cancer drugs can be loaded along with the blood. This way, chemotherapy and hyperthermia are employed in combination with each other [7]. Other methods of regional hyperthermia include heating the organ or body part by placing devices on the body surface and focusing radio/microwaves onto the targeted area. Whole-body hyperthermia is used for metastatic cancer, where the tumor cells are spread throughout the body. Body temperature in this modality can be raised in many ways, such as using heating blankets, immersing the patient in warm water, or putting the patient into a large thermal chamber. The body is heated to temperatures similar to those of a high fever for a short time duration. General anesthesia or other drugs may be provided during the treatment to make the patient sleepy. Whole-body hyperthermia is also applied to assist chemotherapy. The heat treatment makes certain immune cells more active at killing cancer cells effectively for a few hours after treatment [28]. 2.1 Side-effects and limitations of hyperthermia Table 1 summarizes the limitations of different hyperthermia modalities [22,29].
One of the natural physiological consequences of hyperthermia is thermotolerance: the treated tissue may become resistant to the effects of heat after the heat source is removed. This thermotolerance can protect the treated tumor against further treatment. Another limitation of hyperthermia is related to its applicability: it cannot be used at all affected sites in the human body. At deep-seated cancer sites, such as the bladder or brain, hyperthermia is quite difficult to apply [30]. Most of the side-effects after hyperthermia treatment are temporary, except in a few cases. Side-effects of hyperthermia worsen with the stage of the cancer. Local hyperthermia is the least hazardous relative to the other modalities. Regional and whole-body hyperthermia have similar side-effects, although in certain cases whole-body hyperthermia can have serious ones. These effects are lessening with technological advancement and a deeper understanding of the treatment modalities [24]. 3 Magnetic induction hyperthermia (MIH) Heat can be produced in many ways, as mentioned in previous sections. Based on earlier experience with heat generation in industrial applications, magnetic hyperthermia was proposed for the first time in 1957 (to the best of our knowledge) on the basis of the heat generation ability of iron oxide due to hysteresis losses [31]. When alternating magnetic fields are used to produce heat, it is termed MIH. Other similar phrases are also used in the literature to indicate this type of hyperthermia, such as magnetically induced hyperthermia, magnetically mediated hyperthermia, or simply magnetic hyperthermia. In this technique, ferrimagnetic/ferromagnetic/superparamagnetic materials (called thermoseeds) are injected into the tumor, and the system is subjected to externally applied alternating magnetic fields. These thermoseeds produce heat under the alternating magnetic field via different mechanisms. Fig. 2 represents a schematic of MIH treatment.
Ferrimagnetic/ferromagnetic thermoseeds experience magnetic hysteresis under the alternating magnetic field. The magnetic moments of these materials try to orient in the direction of the magnetic field. However, on reversal of the direction of the magnetic field, complete reversal of the magnetic moments does not occur. Thus, the magnetization versus applied magnetic field graph is characterized by a hysteresis loop. The area of the hysteresis loop signifies the work done during reversal of the magnetic moments with the changing magnetic field. This work is manifested as thermal energy, which is dissipated to the surroundings, and the tumor cells are killed by this heat. By contrast, superparamagnetic materials induce the heating effect under alternating magnetic fields by Brownian relaxation or Néel's spin relaxation, which are ascribed to the rotation of the magnetic particles or the magnetic moments, respectively [32]. Superparamagnetic systems are also favorable, as they exhibit zero remanence after the alternating magnetic field is removed [33]. Generally, the heat generation capacity of a material is measured in terms of the specific absorption rate (SAR), which represents the amount of energy converted into heat per unit mass and time: SAR = C × (ΔT/Δt) × (1/m). Here, C denotes the specific heat of the material, ΔT/Δt the initial slope of the time-dependent temperature curve, and m the mass of the magnetic material. 3.1 Controlling the heat generation Among the hyperthermia modalities, MIH is known for its better control of temperature. Heat generation by a material during hyperthermia can be controlled by many factors, namely the material's characteristics, its dosage, clinical settings, etc. [34]. Fig. 3 depicts the dependence of heat generation on various factors. The heat dissipated in ferrimagnetic and ferromagnetic materials primarily depends on their magnetic parameters, i.e.
saturation magnetization (Ms), coercive field (Hc), and the shape of the hysteresis curve. A larger hysteresis area signifies larger heat generation under alternating fields. Moreover, the dosage of the material injected into the tumor directly affects the heat generation. To minimize the required dose of heat mediator, a material with high SAR is needed. The size and size distribution of the magnetic particles also affect the heat generation [32,35]. In general, homogeneously distributed fine particles generate more heat than coarser particles. Clinical settings, for instance the magnetic field strength and the frequency at which the magnetic field alternates, can also affect the heat generation of a material [36]. However, for biomedical reasons, there is a strict limit on clinical settings (H × f < 5 × 10⁹) [37,38]. With these conditions imposed, the treatment outcomes have to rely upon the thermal conversion efficiency of the thermoseeds [38]. For superparamagnetic nanoparticles, heat generation is mostly dependent on their size, as indicated by Fig. 4. Ma et al. [39] found that Fe3O4 superparamagnetic nanoparticles generate more heat with increasing size up to 46 nm; above 46 nm, heat generation is reduced with further growth of the Fe3O4 nanoparticles. This is attributed to the onset of hysteresis losses for larger particles, whereas smaller particles generate heat via Néel's relaxation and Brownian rotation. 3.2 Advantages of MIH Using magnetic fields to induce heat is advantageous over other hyperthermia modalities. Magnetic interactions are realized as action at a distance: no wires need to be connected to the thermoseeds. The properties of the thermoseeds can be optimized to enable self-control of the temperature. For example, if the thermoseeds have a Curie temperature close to 43°C, then at temperatures exceeding 43°C the material will turn into a paramagnetic material.
As paramagnetic materials do not produce heat under alternating magnetic fields, such a system will not increase the temperature any further. Body cells do not get excessively heated under alternating magnetic fields. As thermoseeds are non-radiative, it is easier for physicians to implant them without the special precautions required in, for example, brachytherapy [40]. Brachytherapy is a kind of internal radiation therapy that delivers higher radiation doses to cancer sites by placing radioactive sources inside the tumor itself. It has fewer side effects than the externally delivered radiation of conventional radiotherapy. Similar to brachytherapy, MIH can be used to impart local heating effects with minimum harm to neighboring healthy cells.

4 Materials as thermoseeds for MIH

As mentioned in Section 3, materials must be ferrimagnetic, ferromagnetic, or superparamagnetic in nature to induce any heating effect. Various materials proposed as thermoseeds include metallic compounds, magnetic fluids, nanomaterials, glasses, glass–ceramics, etc. [29, 41–43]. The various materials tested for hyperthermia applications have their advantages and limits, as summarized in Table 3. Metallic alloys, for instance Fe–Pt, Ni–Si, and Ni–Cu, are of great interest as their Curie temperature can be tuned into the optimal range [41]. However, such materials suffer from problems like corrosion, bio-inert nature, and instability within sites in the human body, which limits their use in hyperthermia treatment of cancer. Metallic alloys are otherwise extensively used as biomaterials for various applications [44]. Instead of using metals or alloys, the use of oxides of magnetic elements such as iron has proved to be of great significance. Iron oxide is the most widely studied and clinically used compound among magnetic oxides such as nickel oxide and cobalt oxide.
This is because of its notable magnetic properties along with its biocompatible nature. Nevertheless, not all phases of iron oxide are of magnetic significance. Magnetic fluids have shown great potential as thermoseeds in hyperthermia applications [29, 33]. Magnetic fluids are generally magnetic nanoparticles dispersed in aqueous media or some hydrocarbon. When the particle size approaches 20 nm or less, iron oxide nanoparticles become superparamagnetic; that is why these particles are abbreviated as SPIONs (superparamagnetic iron oxide nanoparticles). SPIONs have shown great potential as thermoseeds [41]. SPIONs can also act as contrast agents for MRI purposes [45]. Additionally, SPIONs can be guided to the targeted site via external magnetic fields. Being superparamagnetic, SPIONs exhibit zero coercivity and zero remanence; thus, no magnetic interactions remain after removal of the external magnetic field. However, certain issues limit the use of SPIONs as thermoseeds in anti-cancer therapies. Dissolution of the SPIONs is a major concern, as it leads to the possible release of iron species in the body, which may have adverse effects such as promoting tumor growth. Secondly, SPIONs are prone to agglomeration during the application of alternating fields. SPIONs have high surface energy due to their high surface-area-to-volume ratio; furthermore, attractive magnetic and van der Waals forces cause individual particles to agglomerate [46]. Such problems can be reduced to some extent by coating SPIONs with biocompatible materials such as silica, small organic molecules, hydroxyapatite (HAp), etc. [47–51]. These coatings may reduce the release of iron species and the dipole interactions of the magnetic particles. However, this approach has had only limited success: to completely eliminate dipole interactions, the coating must be thick enough.
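The roughly 20 nm superparamagnetic threshold mentioned above can be quantified with the standard Neel relaxation expression, tau = tau0 · exp(KV/kBT). The anisotropy constant and attempt time below are typical magnetite-like literature values and should be treated as assumptions, not values from the cited studies:

```python
import math

# Neel relaxation time for a spherical particle: tau = tau0 * exp(K * V / (kB * T)).
# K (effective anisotropy) and tau0 (attempt time) are assumed, magnetite-like values.

KB = 1.380649e-23    # J/K, Boltzmann constant
TAU0 = 1e-9          # s, attempt time (assumed)
K_ANISO = 1.1e4      # J/m^3, effective anisotropy (assumed)
T = 310.0            # K, body temperature

def neel_time(diameter_nm):
    radius_m = diameter_nm * 1e-9 / 2
    volume = 4.0 / 3.0 * math.pi * radius_m ** 3
    return TAU0 * math.exp(K_ANISO * volume / (KB * T))

for d in (10, 15, 20, 25):
    print(f"{d:2d} nm -> tau_N = {neel_time(d):.2e} s")
```

Because the exponent scales with particle volume, the relaxation time jumps by many orders of magnitude over a small size range: moments of particles of roughly 20 nm or below flip rapidly (superparamagnetic behavior), while slightly larger particles remain blocked and show hysteresis.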
Such a thick coating may lead to instability of the colloidal solution of the SPIONs. Also, SPIONs possess insufficient thermal conversion efficiency due to their degraded magnetic susceptibility. Further, iron oxide shows different magnetic parameters (M_s, H_c, etc.) in its crystalline, nanoparticle, and composite forms. In the bulk form, magnetite (Fe3O4) has M_s of ~92 emu g^-1, while maghemite (γ-Fe2O3) has M_s of ~76 emu g^-1. Relatively lower values of M_s are observed in their nanoparticles, as the degree of crystallinity differs in the core and on the surface. Bulk and nanoparticle forms of these materials also differ in coercivity: in contrast to the bulk forms, nanoparticles of these magnetic phases are superparamagnetic and show nearly zero H_c at the physiological temperature, i.e. ~37°C. Based on their structure and magnetic characteristics, different materials have different clinical applications. Some of the commercially available materials/systems useful for hyperthermia therapy are given in Table 2. Magnetic bioactive glass–ceramics are another potential candidate as thermoseeds for hyperthermia treatment of cancer. Some of the issues with SPIONs are avoided in the case of magnetic bioactive glass–ceramics due to their inherent characteristics. In bioactive glass–ceramics, the magnetic phase is encapsulated within the bioactive glass matrix, which prevents any leaching of metal ions into the body environment that might otherwise be harmful [53]. Agglomeration of the magnetic species (here embedded in a solid matrix) is not an issue with magnetic bioactive glass–ceramics. As discussed in the next sections, magnetic bioactive glass–ceramics can bond to natural bone, so a thermoseed, once implanted, stays at the application site. Thus, hyperthermia heat cycles can be repeated whenever needed at a later stage of the treatment (if required). In the case of bone cancer, the bone is damaged and becomes weaker after removal of the tumor.
SPIONs do not have any ability to regenerate damaged bone tissue, whereas bioactive glass–ceramics can also help in regenerating such affected bone parts. These properties make magnetic bioactive glasses advantageous over other materials as thermoseeds. A glass matrix can also be used to control the growth of the nanocrystallites of the magnetic phases. However, glass–ceramics face the problem of a high Curie temperature: to the best of our knowledge, glass–ceramics with a Curie temperature close to 44°C have not been reported yet. Other ceramics, such as certain manganates, have been reported to have Curie temperatures in ranges close to that required for hyperthermia [54]. The advantages and disadvantages of different materials/systems for hyperthermia are summarized in Table 3.

5 Bioactive glasses and glass–ceramics

The successful use of any material for biomedical applications is restricted by its biocompatibility. Hench et al. synthesized Bioglass®, which was able to bond with natural bone [55]. The chemical composition of Bioglass® (also called 45S5 glass) is 45 SiO2–24.5 CaO–24.5 Na2O–6 P2O5 (wt%). When this glass is immersed in body fluids, after some time (depending on various factors discussed later), a layer of HAp develops over its surface, which helps it bind with bone. Later, Ohura et al. suggested that artificial implants can bond with living bone if they can form HAp on their surfaces in the body environment [56]. Moreover, the development of HAp can be reproduced using simulated body fluid (SBF) having ionic concentrations close to those of human blood plasma [57–59]. However, SBF cannot replicate every aspect of the physiological environment and should not be considered a single criterion to rate the biological performance of a material. Rather, an in vitro test using SBF can be regarded as a preliminary tool or initial indicator of the in vivo bioactivity of the material [60].
The SBF tests are economical, fast, risk-free, and reproducible prior to in vivo studies. These days, bone-bonding glasses/glass–ceramics are usually termed bioactive glasses/glass–ceramics. In the following sections, the structural, magnetic, and bioactive characteristics of such bioactive glasses prepared for MIH are reviewed.

6 Structure–property relationship

Materials can be categorized on the basis of their structure–property relationships. The properties of materials are either structure-insensitive or structure-sensitive, for instance Young's modulus and ultimate tensile strength, respectively. For high-performance materials, knowledge of these properties and their variation with atomic structure is essential. Therefore, the properties relevant to the present review are discussed in the following section.

6.1 Structural and magnetic properties

Glass is an amorphous solid material that lacks long-range atomic periodicity; above 10 Å, the periodicity of the structural units is absent in these substances. By definition, a material is said to be a glass if it exhibits a glass transition (T_g) on heating or cooling [61]. During this transition, the glass loses its brittleness. On the other hand, a material is termed a glass–ceramic if it contains crystalline phase(s) grown in a glass matrix [62]. Glass–ceramics are usually obtained by further processing of the base glass: the base glass is subjected to heat treatment at appropriate temperatures for a sufficient duration. This controlled heat treatment leads to the formation of nuclei in the glass matrix, and crystallization is then induced by the growth of these nuclei within the glassy phase. Besides this, glass–ceramics are sometimes formed even during the quenching of the glass [63]. Such in situ crystallization is observed when certain components of the glasses, such as transition metal (TM) oxides, are immiscible with the other glass ingredients [64].
Phase separation is the consequence of the presence of such oxides, where different phases have different local chemical compositions and structures. Further, some of the TM oxides, for instance TiO2 and Fe2O3, are found to be good nucleating agents [65–68]. These nucleating agents speed up the nucleation process and result in easier crystallization in a glassy matrix. The properties of the final material depend on the type and volume fraction of the crystalline phases embedded in the glassy matrix. Sometimes crystallization makes the glass–ceramics more durable against acid and base attacks and reduces their dissolution [69, 70]. However, the formation of crystalline phases may also prove detrimental to chemical durability for certain compositions [71, 72]. In general, the mechanical properties of glass–ceramics are superior to those of their glass counterparts [73, 74]. Suitable heat treatment of glasses containing magnetic ions such as iron gives rise to glass–ceramics with better magnetic properties; for example, crystallization of Fe3O4 in the glass matrix gives rise to ferrimagnetic behavior [75, 76]. In order to formulate glasses and glass–ceramics for MIH treatment of cancer, it is very important to understand their magnetic and bioactive properties with respect to composition. The choice of ferromagnetic elements is very limited: among the TMs in elemental form, only three elements (iron, nickel, and cobalt) are ferromagnetic; chromium is anti-ferromagnetic, while the other elements are either diamagnetic or paramagnetic. Hence, most of the elements are not of much use for MIH treatment, particularly in their elemental form. Further, nickel and cobalt cannot be used owing to their toxic nature [77]. Thus, the most important choice for MIH is iron and its compounds [78]. However, the magnetic properties of the elements differ from those of their compounds.
Iron is generally incorporated into glasses and glass–ceramics in its oxide form. For hyperthermia, it should crystallize either as magnetite (Fe3O4) or maghemite (γ-Fe2O3); there is another possibility that iron oxide crystallizes as α-Fe2O3, which is non-magnetic. Fe3O4 and γ-Fe2O3 are approved for medicinal use [79, 80]. Ferrite particles coated with biocompatible phases, i.e. HAp, have been reported to be useful for hyperthermia treatment [81]. An unstable calcium hexaferrite phase was stabilized by doping lanthanum in place of some of the calcium ions; magnetic measurements showed that such materials could generate sufficient heat for the destruction of tumor cells via hysteresis losses. Gadolinium-based compounds (Gd5Si4) have also been developed and investigated for hyperthermia treatment [82]. Many compounds exhibit superparamagnetic behavior in the nanoscale regime, which allows their use for MIH applications. In fact, iron oxide also shows interesting magnetic properties at the nanoscale, which affects its use for the aforementioned applications [83–85]. The first experimental studies describing the feasibility of hyperthermia treatments using magnetic materials were carried out by Gilchrist et al. [31]. Ferrimagnetic materials received special attention when Stauffer et al. [86, 87] reported that they can be used as localized heat sources at targeted sites inside the human body under alternating magnetic fields. The idea of using magnetic glass–ceramics for hyperthermia treatment of cancer appeared after a report by Luderer et al. [88], who showed that non-bioactive glass–ceramics containing lithium ferrite were useful as thermoseeds for hysteresis hyperthermia. Afterward, many reports followed, proposing various designs and materials to improve MIH. Ikenaga et al.
[89] performed hyperthermia treatment on an animal with metastatic bone tumors, where ferromagnetic ceramic pins were used as the source of heat under a magnetic field. Almost all the tumor cells implanted in the bone marrow were killed by the treatment. Ohura et al. [90] reported the magnetic and bioactive properties of SiO2–B2O3–P2O5–CaO–Fe2O3 glasses that were subsequently heat treated at 1,050°C to obtain glass–ceramics. Magnetite and wollastonite were the major crystalline phases formed, which are considered desirable for good bioactivity [91, 92]. The addition of iron oxide enhanced the chemical durability of the glasses and retarded formation of the Ca–P-rich layer during in vitro tests; a higher iron oxide content (≥3 wt%) completely prevented HAp layer formation. Interestingly, the heat-treated glass–ceramics formed a Ca–P-rich layer after 8 days of implantation. Ebisawa et al. [93] studied ferrimagnetic glass–ceramics obtained by heat treating SiO2–CaO–FeO–Fe2O3 glasses. The glass–ceramic contained 36 wt% magnetite. Due to some amount of iron oxide remaining in the glass matrix, the glass–ceramics did not show any bioactivity. However, the addition of Na2O to the above composition accelerated apatite formation on the samples in SBF. The addition of B2O3 retarded, while P2O5 accelerated, apatite layer formation; simultaneous addition of P2O5 and B2O3 resulted in good magnetic properties and the most effective apatite layer formation. However, the mechanism of the apatite layer formation process was not clear. Jagadish et al. [94] explored the formation of bioactive glass–ceramics with a calcium ferrite crystalline phase. Upon heat treatment, α-Fe2O3 and CaFe4O7 phases grew within the glass matrix. The presence of iron in the glass compositions increased the chemical durability of the glass.
No direct evidence was found for the formation of an apatite layer on the surface of the glass–ceramics even after 30 days of immersion in SBF; only the formation of a silica-rich layer indicated the initial stage of apatite layer formation. These glass–ceramics exhibited absorption of microwave power, indicating their possible use for microwave hyperthermia. Singh et al. [42] studied the effect of glass composition on the crystallization, in vitro bioactivity, and magnetic properties of SiO2–Na2O–Fe2O3–CaO–P2O5–B2O3 glasses. Na3CaSi3O8 and Na3−xFexPO4 were identified as the major crystalline phases formed in the glass–ceramics. The magnetic moments did not saturate even up to 12 kOe. The glass–ceramics exhibited a low hysteresis area, with random variation in coercivity as the iron oxide content changed. A Ca–P-rich layer on the surface of the glass–ceramics was observed after 36 days of immersion in SBF. Lee et al. [53] used a higher amount of iron oxide to prepare ferrite-based glass–ceramics for hyperthermia treatments. They demonstrated by in vitro as well as in vivo tests that these glass–ceramics could kill cancer cells locally after 9 min in an alternating magnetic field. Carcinoma cells in the vicinity of the ferrimagnetic material were killed; on the other hand, cells 5 cm away from the ferrimagnetic material were not much affected (Fig. 5). This shows the advantage of MIH in generating local heat without harming cells lying farther away. However, the researchers also recommended long-term studies to further confirm the results. Similarly, high-iron-containing calcium-silica-phosphate glasses were studied for their magnetic and structural properties; in these glasses, silica was replaced by Fe2O3 up to 30 mol% [95]. Glass stability was higher for samples with higher iron oxide content. Glass–ceramics were obtained by heat treatment (1,000–1,200°C) of the as-quenched glasses.
Magnetite was the major phase, along with hematite (non-magnetic) and maghemite. Iron ions seem to form magnetic domains even in the glasses. The samples were proposed for hyperthermia treatment of cancer; however, their bioactivity was not reported. In ferrite-based glasses, the formation of the useful crystalline phases, i.e. magnetic Fe3O4 and γ-Fe2O3, is difficult to achieve, mainly for the following reasons: first, low control over the Fe2+/Fe3+ ratio, and secondly, the higher stability of the non-magnetic α-Fe2O3 phase compared with the γ-Fe2O3 and Fe3O4 phases [80]. The simultaneous presence of α-Fe2O3 along with Fe3O4 reduces the heat generated by the sample through hysteresis losses. Bretcanu et al. [75] proposed adjusting the heat generation through the chemical composition, primarily by changing the iron content. They investigated the effect of crystallized Fe3O4 on the magnetic properties of ferrimagnetic glass–ceramics. Nanometric magnetite crystals were found in the as-quenched form of the glasses. Saturation magnetization increased, while coercivity decreased, with increasing iron oxide content in the composition. Smaller crystals were found to generate more heat through hysteresis losses than bigger crystallites. They concluded that by controlling the composition (the ratio of iron oxides), the generated heat can be controlled. In the subsequent year, they studied the effect of preparation parameters on crystalline phase formation and its effect on magnetic properties [64]. An excess amount (45 wt%) of iron oxide in the composition resulted in the formation of glass–ceramics during quenching of the melt. With increasing melting temperature, the volume fraction of the magnetic phase increased and consequently the saturation magnetization also increased (Table 4). Similar observations were also made by other research groups [96].
The glasses changed from pseudo-single-domain to multi-domain glass–ceramics at 1,500°C. Possibly for the same reason, the coercive field of the glasses melted at 1,550°C was the lowest in the series, with a smaller hysteresis area than the other glasses. The samples prepared by the melting process were found to exhibit higher specific losses than those prepared by the co-precipitation method. It is still a big challenge to obtain a glass–ceramic with simultaneously good magnetic and bioactive properties: a crystalline phase containing magnetic species is required for good magnetic properties, while at the same time bioactivity decreases due to the lower dissolution rate of glass–ceramics, which leads to weaker physiological reactions between the sample and SBF. In an attempt to resolve such problems, Arcos et al. [97] introduced a new biphasic material prepared by mixing a sol-gel-derived glass for good bioactivity with a melt-quench-derived ferrimagnetic glass for magnetic properties. This biphasic material exhibited good in vitro bioactivity after 15 days of immersion in SBF. Due to dissolution of the sol-gel glass during immersion in SBF, the saturation magnetization of the residual composition increased. On the other hand, the coercivity decreased drastically (from 400 to 250 Oe) due to stress relaxation of the crystalline part, which can affect the performance of materials implanted for longer times. Similar biphasic materials were also studied by Ruiz-Hernandez et al. [98], who found that the apatite phase could not grow on the iron-containing glass–ceramic alone. Mixing in the sol-gel glass improved the hyperthermia performance of the parent glass by modulating its coercive field. Saturation magnetization increased with increasing glass–ceramic content; however, coercivity showed a random trend with the composition of the system. The SAR varied in accordance with the coercive field rather than with the iron content.
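The correlation between SAR and coercive field noted above can be rationalized with an idealized estimate: for a roughly rectangular hysteresis loop, the energy dissipated per cycle per unit volume is about 4·μ0·Ms·Hc, so the volumetric power is that loop area times the drive frequency. The Ms and frequency below are assumed, illustrative values; the Hc pair echoes the 400 to 250 Oe change mentioned in the text:

```python
import math

# Idealized rectangular-loop estimate of hysteresis heating:
# energy per cycle per unit volume ~ 4 * mu0 * Ms * Hc,
# so volumetric power P ~ 4 * mu0 * Ms * Hc * f (an upper bound).

MU0 = 4e-7 * math.pi            # T m/A, vacuum permeability
OE_TO_A_PER_M = 1e3 / (4 * math.pi)  # 1 Oe in A/m

def hysteresis_power(Ms_A_per_m, Hc_A_per_m, f_Hz):
    """Upper-bound volumetric heating power in W/m^3."""
    return 4 * MU0 * Ms_A_per_m * Hc_A_per_m * f_Hz

Ms = 4.8e5   # A/m, magnetite-like saturation magnetization (assumed)
f = 100e3    # Hz, assumed drive frequency
for Hc_Oe in (250, 400):
    P = hysteresis_power(Ms, Hc_Oe * OE_TO_A_PER_M, f)
    print(f"Hc = {Hc_Oe} Oe -> P ~ {P:.2e} W/m^3")
```

Since the estimate is linear in Hc, lowering the coercive field from 400 to 250 Oe scales the dissipated power by the same factor, consistent with the observation that the SAR tracked the coercive field rather than the iron content.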
The biocompatible nature of the biphasic materials was indicated by in vitro experiments. Shah et al. [99] presented another approach to optimize the magnetic characteristics of the glasses. They prepared a SiO2–CaO–P2O5–Na2O–Fe2O3–ZnO glass system, heat treated it at 600°C, and cooled it under an aligning magnetic field of 10 kOe. This helped the magnetic domains settle along their easy axis (the axis along which even a small magnetic field is sufficient to reach saturation magnetization), which caused the saturation magnetization and coercivity to increase. Thus, the heat generation capacity of these field-cooled glass–ceramics was enhanced compared with that of normally cooled glass–ceramics. These glass–ceramics exhibited growth of HAp after 3 weeks of immersion in SBF [100]. It has been seen that Fe3O4 and calcium-based silicates are the most commonly formed crystalline phases in such heat-treated glass–ceramics [101]; the former is mainly responsible for the magnetic properties, while the latter is reported to be bioactive in nature. Jiang et al. [102] also found such phases in a silicon oxide composite containing zinc and iron oxide prepared by the sol-gel method. Calorimetric measurements indicated a lower specific power loss and temperature increase for the composite compared with zinc ferrite glass–ceramics. Cell culture experiments revealed that these composites promoted osteoblast proliferation more visibly than zinc ferrite glass–ceramics and HAp. Singh et al. [103] examined glass–ceramics having finely dispersed nanocrystallites of zinc ferrite obtained after controlled heat treatment of x(ZnO, Fe2O3)–(65−x)SiO2–20(CaO, P2O5)–15Na2O (6 ≤ x ≤ 21 mol%) glasses. Zinc ferrite and calcium sodium phosphate crystallized as the main phases. The effect of the zinc iron oxide content on the magnetic properties of these glass–ceramics was observed.
The glass–ceramics changed from paramagnetic to fully ferrimagnetic at higher zinc iron oxide content. The samples showed good in vitro bioactivity within 30 days of immersion in SBF [104]. The magnetic properties of borate glass–ceramics containing Fe2O3 and ZnO were investigated by Pascuta et al. [105]. Glass–ceramics containing 15 mol% Fe2O3 exhibited ferromagnetic interactions along with a superparamagnetic contribution; characteristics of both spin-glass and superparamagnetic systems were present. Interestingly, the non-interacting superparamagnetic particles exhibited magnetic hysteresis even at higher temperatures. There are other similar reports on ferrimagnetic glass–ceramics containing zinc and iron oxides with comparable observations [106]. Our group reported the magneto-structural as well as bioactive properties of multicomponent glass–ceramics having different concentrations of titania [107]. After heat treatment, superparamagnetic glass–ceramics were obtained, and formation of HAp was observed on the surface of the samples after 42 days in SBF. Gopi et al. [108] used an ultrasonic irradiation technique to functionalize HAp with magnetite nanoparticles. Ultrasonic irradiation at two frequencies, 28 and 35 kHz, with powers of 150 and 320 W, respectively, was used for the synthesis. Irradiation at 35 kHz and 320 W showed efficient diffusion of the magnetic nanoparticles into the HAp host matrix, which was helpful for the formation of magnetic HAp. The samples were superparamagnetic, exhibiting very low coercivity (H_c). The saturation magnetization (M_s) of the magnetic HAp was lower than that of the magnetite nanoparticles. Sharma et al. [109] studied the biocompatibility and magnetic properties of carbon-encapsulated iron oxide/carbide nanocomposites. Iron carbides are not bioactive in nature.
However, the presence of iron oxide and the non-magnetic carbon shell improved the biocompatibility of the nanocomposite, which was confirmed using different cell lines. Jayalekshmi et al. [110] prepared magnetic and degradable polymer/bioactive glass composite nanoparticles. The prepared composites showed soft ferrimagnetic behavior; iron in the Fe2+ state acted as a network modifier, while Fe3+ acted as an intermediate in the glass. The structural and microstructural properties of glasses/glass–ceramics with the composition 34SiO2–(45−x)CaO–16P2O5–4.5MgO–0.5CaF2–xFe2O3 have been reported by Sharma et al. [111], where iron oxide showed network-modifying character. Apatite, hematite, wollastonite, and magnetite were the major crystalline phases formed. Further studies indicated that the glass–ceramics with 15 and 20 wt% iron oxide show good biocompatibility. CaF2 is added to glass compositions to control the dissolution rate: it does not affect the bone-bonding capability, but fluorine ions retard dissolution. Many researchers have included CaF2 in various amounts in their glass compositions for specific reasons [16, 112–114]. Singh et al. [112] observed the crystallization and bioactivity of Fe2O3-containing phosphosilicate glasses converted to glass–ceramics at 1,050°C. The formation of nanocrystalline magnetite was strongly dependent on the initial iron oxide content, and the samples exhibited better bioactivity at higher iron oxide content. It should be noted that although many reports claim that the presence of iron oxide decreases the in vitro bioactivity, some reports indicate that glasses containing Fe2O3 exhibit good bioactivity. Thus, there are conflicting reports in the literature indicating a variable influence of iron oxide on the bioactivity of the glass. Manganese and its compounds are also being considered in glasses and glass–ceramics because of their importance from a biological point of view.
Mn2+ ions enhance the osteogenesis process, while their absence may cause several problems such as bone deformation, growth inhibition, or bone resorption [115]. Moreover, Mn2+ ions enhance the ligand affinity of integrins, which in turn promotes cell adhesion by mediating interactions between the extracellular matrix and cell ligands [116, 117]. Bigi et al. [118] reported that Mn-doped HAp coatings on etched Ti substrates exhibit better osteoblast proliferation and activation of their metabolism. Manganese ions also have positive effects on proliferation in thin β-tricalcium phosphate film coatings on Ti substrates [119]. Thus, the addition of manganese to biomaterials may be useful for the integration of implants. Along with its good bioactive properties, manganese is of great interest to scientists because of its magnetic character. Manganese dioxide is anti-ferromagnetic in nature, but in ionic form manganese may give unique magnetic properties depending on the interaction between nearest-neighbor ions; its d-orbital-to-atomic-diameter ratio is modified in ways that tend to give positive exchange energy. In the presence of iron oxide, it forms manganese ferrite in glass compositions. Recently, many reports have been devoted to the application of Mn-ferrite for hyperthermia [35]. Li et al. [114] synthesized glass–ceramics with the composition MgO–CaO–SiO2–P2O5–CaF2–MnO–ZnO–Fe2O3 and studied their in vitro surface bioactivity. After heat treatment at 1,200°C, apatite, fluorapatite, wollastonite, and Zn0.75Mn0.75Fe1.5O4 were the major crystalline phases present in the glass–ceramics. The bioactivity of the glass–ceramics was reduced by the doping of Mn–Zn ferrite, but a hydroxycarbonate apatite layer was found on the sample surface after 14 days of immersion in SBF. In another report, they prepared a similar composition in which they grew MnFe2O4 and Fe3O4 phases in the glassy matrix [120].
Along with the in vitro testing, cell culture studies were performed to observe cell proliferation on the surface of the glass–ceramics. Co-culture experiments of the samples with ROS17/2.8 cells indicated successful attachment of the cells and good proliferation on the sample surfaces. The magnetic glasses exhibited better cell affinity than the parent glass matrix, and the presence of manganese played an important role in improving the cell affinity of the samples. Similar to iron oxides, manganese oxide may also act as an intermediate oxide because of its ability to exist in higher oxidation states. The magnetic parameters of the glasses and glass–ceramics discussed above, along with other similar reports [121–123], are given in Fig. 6. Recently, a new class of materials called mesoporous materials, with high surface area, has emerged as a promising platform for cancer therapeutic applications [124]. These materials differ from microporous and macroporous materials in their pore sizes (Fig. 7): materials having pores in the 2–50 nm range are referred to as mesoporous. Among the various mesoporous materials, those based on silica have been the center of research for drug delivery applications. This is because of biocompatibility similar to that of conventional nanocarriers, low toxicity, and better understanding of their synthesis methodologies [125]. Moreover, their surface area, pore size, and pore shape can be controlled by compositional changes, heat treatments, changes in synthesis methodology, etc. [126–128]. Yan et al. used a sol-gel and template synthesis method to prepare highly ordered mesoporous glasses, which exhibited high bioactivity because of their high surface area [129]. Anand et al. prepared ternary glasses (SiO2–CaO–P2O5) via the sol-gel method using three different surfactants [130]. In vivo studies indicated that all the prepared samples were biocompatible, biodegradable, and non-toxic.
Among these samples, the one prepared with the ionic surfactant hexadecyltrimethylammonium bromide (CTAB) exhibited a larger surface area than those prepared using non-ionic surfactants. All the samples exhibited bone regeneration tendency. Mesoporous glasses can carry anti-cancer drugs [131, 132]. The drug is loaded onto the mesoporous material basically via the solvent evaporation method or via adsorption: the mesoporous carrier is dipped in the drug solution for a sufficient time, during which the drug penetrates into the pores of the carrier material [133]. Kaya et al. found that silica-based mesoporous bioactive glasses exhibit greater potential to deliver antibiotics than conventionally used methods for preventing infections [134]. The path of mesoporous materials containing magnetic elements can also be controlled with an externally applied magnetic field, in addition to the production of heat via hyperthermia effects. Thus, besides being an effective drug delivery system, such materials can simultaneously be employed for chemotherapy as well as hyperthermia treatment of cancer. Such materials can also be triggered by pH changes, magnetic fields, or heat to release the carried anti-cancer drug at the desired site at the desired time [135–138]. Silica-based mesoporous nanospheres have shown great potential for drug loading/unloading and bioactive properties [139–143]. The incorporation of various metallic ions and their influence on the characteristics of mesoporous host glasses are well known [144–148]. Magnetic mesoporous glass scaffolds were prepared by Zhu et al. in the system Fe3O4–CaO–SiO2–P2O5 [149]. They reported that the replacement of CaO by Fe2O3 in the glasses reduced the dissolution rate in physiological environments while improving osteoblast cell proliferation and differentiation. The glasses were loaded with gentamicin to study drug loading and release.
It was found that the magnetic mesoporous glasses exhibited sustained drug release capabilities. The superparamagnetic nature of some of these magnetic glass scaffolds indicated their potential for hyperthermia treatment of cancer. Li et al. observed that magnetic mesoporous silica nanocarriers show favorable selectivity between healthy and cancerous cells [ 150 ]. They studied the viability of two kinds of cells, HT-1080 (representing cancer cells) and NIH/3T3 (representing normal cells), incubated for different durations with the anti-cancer drug DOX, Fe 3 O 4 encapsulated in mesoporous silica nanoparticles (Fe 3 O 4 @MSNs), and peptide-Fe 3 O 4 @MSNs. For HT-1080 cells treated with DOX and peptide-Fe 3 O 4 @MSNs/DOX, the cell viability after 24 h was just 46 and 50%, respectively, whereas the viability of NIH/3T3 (normal) cells treated with peptide-Fe 3 O 4 @MSNs/DOX remained at 80% ( Fig. 8 ). This indicates the selective response of these particles toward normal and cancer cells. Jafari et al. reviewed the structure, biocompatibility, and drug loading capacity of mesoporous silica nanoparticles in a recent article [ 151 ]. They foresaw a promising future for such materials, along with the caveat that they will take time to impact the clinical market. Similar conclusions were drawn by Albinali et al., who compared the targeted drug delivery efficiency of various materials [ 152 ]. They concluded that mesoporous silica is a remarkable drug carrier but found it challenging to bring nano-drug carriers into clinical practice. Mass-scale synthesis of these materials, their quantitative assessment, and detailed profiles in terms of toxicity, safety, immunogenicity, etc. are the major concerns to be addressed for their successful use. Kargozar et al. presented mesoporous bioactive glasses as remarkable platforms for anti-bacterial strategies [ 153 ]. 
They compared extensive investigations on the loading and release of various metal ions, such as copper, cerium, silver, and gallium, from mesoporous glasses. They identified the lengthy and expensive regulatory path for approval of biomolecules and the brittle nature of the pores as two major barriers to significant acceptance of mesoporous materials by the Food and Drug Administration (FDA). From the above discussion, it is inferred that glasses, glass–ceramics, and mesoporous materials have great potential for hyperthermia treatment of cancer. However, each class of materials has its own advantages and disadvantages, as summarized in Fig. 9 . So far, the structural and magnetic properties of various compositions have been described. The various factors affecting the bioactive properties of glasses and glass–ceramics are described in the next section. 6.2 Factors affecting bioactivity of glasses/glass–ceramics An effective thermoseed for hyperthermia treatment of (especially) bone cancer is one with appropriate magnetic properties along with good bioactivity. Thus, it becomes essential to understand the response of glasses and glass–ceramics to bio-mimicking fluids. The various steps in the formation of the HAp layer over the surface of the glasses are depicted in Fig. 10 . The bioactivity process depends on the interaction between particles on the glass surface and the ions of the SBF in contact with it. Thus, all the factors affecting the ease of ion release from the glass surface and the chemical environment at the glass–SBF interface will also influence the rate of HAp formation on the glass surface. The bioactivity of a material mainly depends on the material characteristics and the immersion conditions in SBF, as shown in Fig. 11 . In the next subsections, the dependence of bioactivity on these factors is elaborated. 6.2.1 Composition and structure of glass The composition of the glasses/glass–ceramics has a marked impact on the formation of HAp in SBF. 
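As a concrete illustration of this compositional sensitivity, the network connectivity (NC) of a silicate bioglass can be estimated directly from its batch composition. The sketch below is illustrative only: it assumes Hill's split-network model, in which P 2 O 5 is charge-balanced as orthophosphate and therefore withdraws modifier charge from the silicate network; the 45S5 figures used are the commonly quoted mol% values, not taken from this review.

```python
def network_connectivity(sio2, na2o, cao, p2o5):
    """Split network connectivity (Hill's model) for a silicate glass.

    Inputs in mol%. Assumes P2O5 is charge-balanced as orthophosphate,
    so each mole of P2O5 removes 3 moles' worth of modifier charge
    that would otherwise break Si-O-Si bridges.
    """
    bridging = 4 * sio2                      # 4 potential bridges per SiO4 tetrahedron
    modifier = 2 * (na2o + cao - 3 * p2o5)   # charge left over to create non-bridging oxygens
    return (bridging - modifier) / sio2

# 45S5 Bioglass(R), commonly quoted mol%: 46.1 SiO2, 24.4 Na2O, 26.9 CaO, 2.6 P2O5
nc = network_connectivity(46.1, 24.4, 26.9, 2.6)
print(f"NC = {nc:.2f}")  # ~2.1: a fragmented, easily dissolving, highly bioactive network
```

Values near 2 correspond to chain-like, readily dissolving networks (fast HAp formation), while NC approaching 4 (pure silica) indicates a fully polymerized, nearly inert network, which is consistent with the modifier/former trends discussed in the text.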
Application-specific bioglasses can be obtained because of the compositional sensitivity of their properties [ 154 ]. The nature and amount of network modifiers present in the system control the dissolution behavior of the glasses/glass–ceramics and, consequently, the rate of HAp formation. The addition of compounds that strengthen the glass network delays HAp formation. For example, the addition of intermediates such as MgO strengthens the network and delays apatite layer formation [ 155 , 156 ]. On the other hand, the addition of network modifiers such as Na 2 O, CaO, etc. breaks up the network and makes it more prone to ion leaching. However, excess leaching of alkali ions is not favorable, as it may be cytotoxic and lead to cell death [ 157 ]. In contrast to these reports, Kapoor et al. [ 158 ] observed no direct correlation between the dissolution of alkali-free glasses and their network connectivity. Rather, they found that the leaching behavior of the glasses was more sensitive to the specific chemistry of the glass constituents, i.e. their ionic radii, oxidation states, etc. Hoppe et al. [ 159 ] gave a brief review of the various therapeutic inorganic ions that, when released from bioactive glasses into physiological environments, can favorably affect bone regeneration. In glasses with compositions similar to Bioglass®, bioactivity is very sensitive to the Ca/P ratio. The hydroxyapatite in natural bone has a Ca/P ratio of 1.67 [ 103 ], and glasses with a Ca/P ratio of ~1.67 show better bioactivity. The Ca/P ratio affects the structural and mechanical properties of the glasses too [ 160 ]. The bioactivity of the glasses is also affected by the amount and type of network formers in the glass. Many researchers have studied the bioactivity of glasses with various amounts of B 2 O 3 and SiO 2 . 
It has been found that the degradation rate can be controlled by suitably choosing the B 2 O 3 /SiO 2 ratio in the glass [ 157 , 161 , 162 ]. In binary calcium borate glasses, boron forms a poorer three-dimensional network than that of silicate glasses; as a result, borate glasses exhibit higher dissolution rates [ 163 ]. However, with an increase in BO 4 units, formation of the HAp layer slows down because of better network connectivity. Phosphate glasses have also been studied for their bioactivity, but they suffer excessive dissolution in comparison to silicate glasses [ 164 ]. Excessively soluble glasses undergo passive dissolution in physiological environments and cannot be used for tissue regeneration. A balanced glass composition is thus required for active resorption to occur without a detrimental effect on cell activity. The fast dissolution of phosphate glasses can be controlled by adding some intermediate oxides. For instance, the addition of up to 3 mol% Al 2 O 3 has been reported to remarkably reduce the dissolution of phosphate glasses [ 165 ], although higher amounts (≥5 mol%) of Al 2 O 3 had negative effects on their bioactive nature [ 166 ]. The chemical durability of the glasses can also be improved by suitably incorporating other ions such as Ga 3+ , Zn 2+ , Fe 3+ , Ti 4+ , and Al 3+ [ 167–170 ]. Groh et al. [ 171 ] reported that the alkaline earth/alkali ion ratio is critical for designing glasses that can be processed easily at high temperatures; increasing the calcium content, partially replacing potassium with sodium, and incorporating a small amount of fluoride improve the sintering behavior of the glasses. El Batal et al. [ 172 ] studied the bioactivity rate of glass–ceramics synthesized by the controlled heat treatment of phosphosilicate glasses. 
It was observed that sodium silicate-based crystalline phases formed after heat treatment of the glasses, which slightly retarded their bioactivity rate. There are many similar reports in the literature where a higher degree of crystallinity retarded the dissolution rate and bioactivity of the glass–ceramics [ 173–175 ]. In recent times, fluoride-containing bioactive glasses have gained interest, as these glasses favor the formation of a fluorapatite (FAp) layer when dipped in SBF, which is more stable than the HAp or carbonated HAp layer [ 176 , 177 ]. Such glasses may be useful for dental applications. Oxygen and fluoride ions have similar ionic sizes and chemical properties, and the incorporation of fluoride ions has been reported to reduce the phase separation of the glasses and improve the network connectivity of the parent glass [ 178 ]. Similarly, TiO 2 is found to enhance the mechanical properties of glasses without harming their bioactive properties. It is lightweight and bioactive itself and has been extensively used in biomedical applications [ 179 , 180 ]. TiO 2 is a well-known nucleating agent and favors the devitrification of glass [ 65 , 66 ]. Also, glasses containing TiO 2 are observed to sinter effectively at lower temperatures compared to TiO 2 -free glasses [ 181 ]. It can be concluded that the composition of the glasses, their degree of crystallization, and the type of crystalline phases all affect bioactivity. Hence, the design, preparation, and processing parameters need to be chosen appropriately to obtain a suitable material for biomedical applications. 6.2.2 Role of sample's surface area A higher surface-area-to-SBF-volume (SA/V) or sample weight-to-volume (W/V) ratio provides a larger number of sample particles interacting with the SBF. The increased area for reaction leads to faster apatite layer formation. Therefore, the same glass in a different shape, i.e. 
plate, particulate, or powder form, exhibits different dissolution rates in SBF [ 182 ]. There are ample reports on the bioactivity of glasses and glass–ceramics using a range of SA/V or W/V ratios [ 183–186 ]. The high surface area and textural characteristics of mesoporous glasses have been reported to dominate their degradation properties and hence lead to a good in vitro response in SBF studies [ 187 ]. Glasses with the same composition but prepared via different techniques may also exhibit different responses to bio-mimicking fluids [ 188 ]. Melt-derived glasses generally have non-porous surfaces with low intrinsic roughness and surface area, whereas sol-gel-derived glasses have a highly porous texture with a large surface area [ 189 ]. The rate of formation and the thickness of the apatite layer vary with the morphological parameters, i.e. pore volume, pore size, surface area, etc.; consequently, sol-gel-derived glasses are more bioactive than melt-quench-derived glasses [ 190 ]. Apart from the above-mentioned factors, immersion conditions such as the SA/V ratio, flowing or static arrangement, time duration, etc. affect the realized bioactivity of a glass. The reaction of circulating blood plasma with an implant will certainly differ from that under static conditions because, in circulating conditions, fresh ions are continuously available for reaction with the body part or implant [ 191 , 192 ]. In static solutions, however, the exchange reaction products are most likely to stay in the vicinity of the implant–SBF interface, drastically changing the local pH of the solution and influencing the further reaction [ 185 , 193 ]. Some research groups have worked on the theoretical modeling of the bioactivity and dissolution of bioactive glasses [ 194 ]. 
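The shape dependence discussed above can be made concrete with a quick geometric estimate. The dimensions below are arbitrary, illustrative values (not taken from the cited studies): the same glass presented as a fine powder exposes orders of magnitude more surface per unit volume than a monolithic plate.

```python
import math

def sa_to_v_plate(length_mm, width_mm, thickness_mm):
    """Surface-area-to-volume ratio of a rectangular plate, in mm^-1."""
    sa = 2 * (length_mm * width_mm + length_mm * thickness_mm + width_mm * thickness_mm)
    v = length_mm * width_mm * thickness_mm
    return sa / v

def sa_to_v_spheres(radius_mm):
    """SA/V of monodisperse spherical particles (independent of particle count)."""
    # (4*pi*r^2) / ((4/3)*pi*r^3) simplifies to 3/r
    return (4 * math.pi * radius_mm ** 2) / ((4 / 3) * math.pi * radius_mm ** 3)

# Same glass, different presentation (illustrative dimensions):
plate = sa_to_v_plate(10, 10, 1)   # a 10 x 10 x 1 mm plate
powder = sa_to_v_spheres(0.02)     # 40 um diameter particles
print(f"plate SA/V  = {plate:.1f} mm^-1")
print(f"powder SA/V = {powder:.0f} mm^-1")
```

With a fixed SBF volume, the powder therefore reaches the ion-exchange and supersaturation stages far sooner, which is why the SA/V (or W/V) ratio must be reported alongside any in vitro bioactivity result.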
Computational tools such as in silico studies are helpful in predicting and analyzing the exchange interactions occurring at the interface between the material and biological fluids, and they provide useful structure–activity relationships [ 195–197 ]. Such computational techniques are time-saving, reproducible, and risk-free, and they can save much of the energy and cost that would otherwise be invested in laboratory experiments. However, computational results must be followed by in vitro , in vivo , and/or in situ experiments before clinical use. 7 Summary MIH is a promising technique for cancer treatment with fewer side effects than other existing techniques. Among the various magnetic materials, glasses and glass–ceramics are fascinating because of their bioactive nature as well as the great scope for tailoring their properties as required. Many of the properties of glasses and glass–ceramics can be optimized via compositional and processing routes. However, despite the large body of research on bioactive glasses, there is still much scope for better understanding the true nature of different compounds so that appropriate biomaterials can be precisely designed for specific applications. Based on the literature reviewed above, the following conclusions are drawn, categorized as challenges and future scope. 7.1 Challenges/gaps persisting in the field • Despite the enormous efforts dedicated to designing suitable materials for hyperthermia, many challenges remain in translating the concept to clinical settings. One of the major challenges is the control of temperature during clinical practice. Ideally, the generated heat must not lead to temperatures exceeding 44°C, because otherwise healthy cells would also perish from the excessive heat. However, the best known magnetic materials suitable for hyperthermia have much higher Curie temperatures (e.g. 
magnetite has T c ~577°C). A high Curie temperature allows the material to keep heating up in alternating magnetic fields until the Curie temperature is reached, making the temperature uncontrollable at the clinical level. Glass–ceramics with a Curie temperature of ~44°C need to be designed so that, beyond this temperature, the material becomes paramagnetic and no further heating through hysteresis losses is possible. • It is desirable to design glass–ceramics that can generate sufficient heat with a minimum dosage. This can be achieved if the glass–ceramic has a sufficient hysteresis area together with high saturation magnetization. For hyperthermia, iron must crystallize as magnetite (Fe 3 O 4 , with M s ~92 emu g −1 in bulk form) or maghemite (γ-Fe 2 O 3 , with M s ~76 emu g −1 in bulk form). A major challenge in optimizing the magnetic properties of bioactive glasses is phase transformation during heat treatment. The glass–ceramic must be heat treated to increase the volume fraction of the magnetic phase and consequently obtain high saturation magnetization. However, the magnetically important phases readily convert to other crystalline phases of no use; for example, magnetite changes to hematite during phase separation, which decreases the heat-producing efficiency of the glass–ceramic. • The available magnetic materials with non-toxic and suitably biocompatible properties are limited, and Fe 2 O 3 is the most widely used oxide among this limited range. As mentioned above, a higher amount of crystallized magnetic content is desired for better magnetic properties. However, in the melt-quench technique, usually only a small fraction (<5 mol%) of Fe 2 O 3 takes part in glass formation. More studies are required to maximize the solubility of Fe 2 O 3 using chemical routes such as sol-gel and sputtering techniques, and novel compositions need to be found to accommodate more Fe 2 O 3 in the glass matrix. 
• For biomedical reasons, there is a strict limitation on the values of H and f : their product should satisfy H × f < 5 × 10 9 A m −1 s −1 [ 38 ]. In such circumstances, the efficiency of the treatment has to rely largely on the thermal conversion efficiency of the thermoseeds. Thus, the magnetic material must generate sufficient heat to raise the temperature of its surroundings to the desired value (~44°C). For this to happen, a suitable magnetic phase must be crystallized to a high enough volume fraction. Here, glass–ceramics meet another challenge: the balance between magnetic and bioactive properties. A low magnetic content may fail to generate the desired heat, while increasing the magnetic content may hamper the bioactivity of the glass. The heat-treatment parameters (temperature, duration, heating rate, environment) and the composition of the glass–ceramics must be selected so as to achieve fast, bulk crystallization in the glass; this way, the retardation of bioactivity caused by surface crystallization can be avoided. • As far as biocompatibility evaluation is concerned, only a few reports have appeared on simultaneous in vivo and in vitro studies of magnetic glasses/glass–ceramics. Under such circumstances, it is quite difficult to assess and compare the performance of various glasses and glass–ceramics. For a better understanding of their bioactive response, simultaneous in vitro and in vivo studies must be performed on a large number of compositions under various circumstances, such as different chemical/biological environments. 7.2 Future scope To meet the material properties required for hyperthermia treatment of cancer, new compositions of glasses/glass–ceramics must be designed with a better blend of magnetic and bioactive properties. • Mesoporous bioactive glasses are potential candidates for combined hyperthermia and chemotherapy. More work has to be done to fully explore the potential of these fascinating materials. 
Synthesis methods need to be devised for the mass-scale production of these materials. Also, various aspects of these mesoporous materials, such as their toxicity, immunogenicity, biosafety, etc., need to be quantified. • The binding ability of the initial bioactive glass compositions was limited to natural bone only. With the advent of new bioactive glass compositions capable of binding with soft tissues too, the applications of hyperthermia treatment can be extended beyond bone cancer. There is much scope for research on such glass compositions to explore and optimize their properties to match application needs. • Another perspective concerning hyperthermia treatment is the development of bioresorbable magnetic bioglasses. Such materials may eliminate the need for surgery to remove the implanted material after successful treatment. • Different approaches to synthesizing glasses, such as biphasic materials, must be explored over a wide range of compositions. • Clinical settings are limited for biomedical reasons, so the efficiency of hyperthermia depends on the thermal conversion efficiency of the thermoseeds. Materials must be tested within the clinically permissible set of magnetic field parameters ( H × f < 5 × 10 9 A m −1 s −1 ). The magnetic material targeted for hyperthermia applications must be able to heat up to 44°C within these magnetic field settings. • Efforts must be dedicated to producing biocompatible glass–ceramics with a low Curie temperature close to 44°C. Some ceramic materials, such as manganates, have been reported to have Curie temperatures close to the required range. Glass–ceramics incorporating such phases in their compositions can be explored to combine the bioactive nature of glass–ceramics with the suitable magnetic properties of manganates. 
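The clinical field constraint above is easy to encode as a quick feasibility check. A minimal sketch, assuming the H × f < 5 × 10 9 A m −1 s −1 product limit quoted in the text; the coil settings in the example are illustrative values, not recommendations.

```python
H_F_LIMIT = 5e9  # A m^-1 s^-1: product limit on field amplitude x frequency quoted in the text

def clinically_tolerable(h_amplitude_a_per_m, frequency_hz):
    """True if the H x f product stays within the quoted clinical limit."""
    return h_amplitude_a_per_m * frequency_hz <= H_F_LIMIT

def max_field_amplitude(frequency_hz):
    """Largest tolerable field amplitude (A/m) at a given drive frequency."""
    return H_F_LIMIT / frequency_hz

# Illustrative settings: a 100 kHz coil
f = 100e3
print(f"max H at {f / 1e3:.0f} kHz: {max_field_amplitude(f):.0f} A/m")  # 50000 A/m
print(clinically_tolerable(15e3, 100e3))  # True  (1.5e9 <= 5e9)
print(clinically_tolerable(15e3, 500e3))  # False (7.5e9 >  5e9)
```

Any candidate thermoseed must therefore reach ~44°C within such (H, f) budgets, which is why the hysteresis area and crystallized magnetic fraction discussed above matter more than raw field strength.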
Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
[
"MAHMOODZADEH",
"FITZMAURICE",
"SIEGEL",
"SIEGEL",
"SHETAKE",
"WUST",
"BEHROUZKIA",
"YU",
"VANDERZEE",
"HERGT",
"VERNE",
"MIOLA",
"BAINO",
"BRUNO",
"GERHARDT",
"FERNANDES",
"MIOLA",
"HABASH",
"PALANISAMY",
"HANAHAN",
"NAGY",
"JHA",
"CHANG",
"CHENG",
"KOULOULIAS",
"SARDARI",
"PEER",
"SULYOK",
"KOZISSNIK",
"OLIVEIRA",
"GILCHRIST",
"DEATSCH",
"LAURENT",
"KAUR",
"GIRI",
"ROSENSWEIG",
"JORDAN",
"LIU",
"MA",
"XIA",
"BAEZA",
"SINGH",
"KUMAR",
"KAUR",
"GOBBO",
"LAURENT",
"IQBAL",
"GLARIA",
"DAS",
"DADFAR",
"STERGAR",
"KUMAR",
"LEE",
"RAO",
"HENCH",
"OHURA",
"KOKUBO",
"TAKADAMA",
"OYANE",
"KOKUBO",
"SCHMELZER",
"HLAVAC",
"SINGH",
"BRETCANU",
"KRZMANC",
"FERNANDES",
"POIRIER",
"SINGH",
"CHAKRABORTY",
"ERROUISSI",
"MARCIAL",
"MCCLOY",
"GAWRONSKI",
"BRETCANU",
"ABDELHAMEED",
"CORCHERO",
"MACIASMARTINEZ",
"MULLER",
"DEB",
"AHMAD",
"ABENOJAR",
"LAHIRI",
"AADINATH",
"STAUFFER",
"SATOH",
"LUDERER",
"IKENAGA",
"OHURA",
"ENIU",
"SIQUEIRA",
"EBISAWA",
"JAGADISH",
"ENIU",
"LEVENTOURI",
"ARCOS",
"RUIZHERNANDEZ",
"SHAH",
"SHAH",
"SINGH",
"JIANG",
"SINGH",
"SINGH",
"PASCUTA",
"KAWASHITA",
"DANEWALIA",
"GOPI",
"SHARMA",
"JAYALEKSHMI",
"SHARMA",
"SINGH",
"DALI",
"LI",
"PALUSZKIEWICZ",
"MAYER",
"ARMULIK",
"BIGI",
"SIMA",
"LI",
"DALI",
"SHAH",
"BRETCANU",
"KARGOZAR",
"SHADJOU",
"ZHENG",
"VAZQUEZ",
"HU",
"YAN",
"ANAND",
"ZHOU",
"HE",
"WANG",
"KAYA",
"ZHANG",
"TREWYN",
"WANG",
"SHANTASINGH",
"MA",
"QIU",
"YUN",
"LUO",
"SHANAVAS",
"SALINAS",
"SHRUTI",
"BARI",
"GUPTA",
"WU",
"ZHU",
"LI",
"JAFARI",
"ALBINALI",
"KARGOZAR",
"KAUR",
"JHA",
"DIBA",
"FU",
"KAPOOR",
"HOPPE",
"LIN",
"RICHARD",
"MANUPRIYA",
"MANUPRIYA",
"LIANG",
"MANUPRIYA",
"SINGH",
"VALAPPIL",
"NAVARRO",
"OUIS",
"KAUR",
"GROH",
"ELBATAL",
"KAUR",
"JHA",
"DANEWALIA",
"BRAUER",
"KIM",
"DANEWALIA",
"RATNER",
"PETROCHENKO",
"SINGHDANEWALIA",
"ZHANG",
"ZHANG",
"STANCIU",
"SIRIPHANNON",
"LIAO",
"KUMAR",
"MIGUEZPACHECO",
"SEPULVEDA",
"SEPULVEDA",
"IZQUIERDOBARBA",
"KANG",
"RAMILA",
"SANZHERRERA",
"CORNO",
"RAIES",
"RAUNIO"
] |
c5c0d62fa12c491daab4fe78cd6970f5_Bone marrow failure_10.1016_j.htct.2020.09.008.xml
|
Bone marrow failure
|
[
"Calado, Rodrigo T."
] | null |
Aplastic anemia may be the result of an immune attack against hematopoietic stem and progenitor cells or of impaired hematopoietic stem cell function due to inherited genetic defects. Although bone marrow transplantation is the preferred therapy for severe cases, the majority of patients lack a suitable sibling donor. The thrombopoietin receptor agonist eltrombopag has recently been added to immunosuppressive therapy, reaching high response rates and overall survival, rivaling matched-donor transplant results. Additionally, genetic defects in telomere-maintenance genes appear to be the most prevalent etiology of inherited aplastic anemia. Sex hormones may recover hematopoiesis in these cases. The occurrence of somatic genetic mutations in immune and inherited aplastic anemia may help to understand the complex dynamics of hematopoietic stem cells in vivo.
|
[] |
efb3a6c9544241889f1fb37ff8ac40ff_A passage-free simplified and scalable novel method for iPSC generation in three-dimensional culture_10.1016_j.reth.2024.02.005.xml
|
A passage-free, simplified, and scalable novel method for iPSC generation in three-dimensional culture
|
[
"Tsukamoto, Masaya",
"Kawasaki, Tomoyuki",
"Vemuri, Mohan C.",
"Umezawa, Akihiro",
"Akutsu, Hidenori"
] |
Induced pluripotent stem cells (iPSCs) have immense potential for use in disease modeling, etiological studies, and drug discovery. However, the current workflow for iPSC generation and maintenance poses challenges particularly during the establishment phase when specialized skills are required. Although three-dimensional culture systems offer scalability for maintaining established iPSCs, the enzymatic dissociation step is complex and time-consuming. In this study, a novel approach was developed to address these challenges by enabling iPSC generation, maintenance, and differentiation without the need for two-dimensional culture or enzymatic dissociation. This streamlined method offers a more convenient workflow, reduces variability and labor for technicians, and opens up avenues for advancements in iPSC research and broader applications.
|
eTOC blurb The current iPSC workflow is complex, time-consuming, and prone to variability. This study introduces a new approach that eliminates the need for two-dimensional culture or enzymatic dissociation and simplifies iPSC generation, maintenance, and differentiation. Our streamlined method is convenient and paves the way for advancements in iPSC research and broader applications. 1 Introduction Induced pluripotent stem cells (iPSCs) are generated from somatic cells and possess characteristics similar to embryonic stem cells (ESCs), making them useful for disease modeling, etiological studies, and drug discovery [ 18 ]. However, the workflow for iPSC generation presents challenges, particularly during the establishment phase, when specialized skills are required [ 8 , 13 ]. Manual expertise is necessary to select primary iPSC colonies with good morphology, and spontaneous differentiation is a common issue within the first few passages. These intricacies contribute to inherent variability, emphasizing the need for an improved iPSC workflow. Maintaining iPSC lines under three-dimensional (3D) conditions offers scalability, which has been achieved using agitation rotor machines and bioreactors [ 5 , 10 ]. However, the enzymatic dissociation step for single-cell cultures under 3D conditions is time-consuming and complex [ 7 , 10 ]. Establishing a stable human iPSC generation system within 3D culture while preserving pluripotency remains challenging. To address these issues and reduce technician variability and workload, a more convenient and streamlined iPSC workflow is necessary. This study aimed to develop a technique that enables iPSC generation, maintenance, and differentiation under 3D culture conditions, eliminating the need for two-dimensional (2D) culture and enzymatic cell dissociation. By simplifying this procedure, this method aims to enhance the utility of iPSCs in downstream applications. 
2 Results 2.1 Adipose-derived mesenchymal stem cells reprogrammed to pluripotent state in the absence of 2D culture conditions To examine the feasibility of reprogramming human somatic cells under 3D culture conditions, we performed experiments using human adipose-derived mesenchymal stem cells (AdSCs). Initially, we obtained an AdSC suspension from 2D-cultured cells through trypsinization. Subsequently, we introduced pluripotency-associated genes into the detached AdSCs using Sendai virus vectors (SRV™ iPSC Vector, TOKIWA-Bio Inc., Japan) in suspension conditions for 2 h at 37 °C. Vector-transfected cells were then rinsed with phosphate-buffered saline (PBS) and cultured in 30-mL spinner flasks using a stirred bioreactor system [ 4 ]. Suspension cultures were grown in StemScale media [ 11 ]. To expedite the reprogramming process, we supplemented the medium with two small molecules, a Notch signaling inhibitor (N-[N-(3,5-difluorophenacetyl)-L-alanyl]-S-phenylglycine t -butyl ester [DAPT]) and a histone methyltransferase inhibitor (histone H3 methyltransferase disruptor of telomeric silencing 1-like inhibitor [iDOT1L]), based on our previous work [ 12 ]. Following approximately 30 days in culture, spheroid formation was observed ( Fig. 1 A). The number of spheres multiplied without any cell dissociation procedures, and approximately 50 days after seeding, the cells reached confluence in the same reactor. Since the SRV™ iPSC Vector contains Green Fluorescent Protein (GFP), SRV vector-positive cells could be identified by GFP fluorescence without the need for immunostaining; a spheroid with a GFP signal indicates cells retaining the Sendai virus vector ( Fig. 1 A). The primary spheres grew in size and number without requiring enzymatic dissociation; therefore, we simply transferred a few spheroids to the next bioreactor as the passage procedure. The spheres maintained their growth throughout passaging ( Fig. 1 B). 
Despite the extended culture period, we observed a mixture of GFP-negative and GFP-positive cells ( Fig. 1 C). Silencing of exogenous genes is one of the criteria for complete cell reprogramming; therefore, we selectively passaged GFP-negative spheres. Upon subsequent cell growth, we confirmed the absence of Sendai virus (SeV) in the GFP-negative spheres ( Fig. 1 C). We thus successfully demonstrated the reprogramming of somatic cells under 3D conditions. The reprogrammed cells can be propagated without single-cell dissociation steps; instead, they can be transferred as spheres to the next bioreactor. Although GFP-negative selection was still required, our methodology effectively eliminated the complexity associated with iPSC generation and streamlined the cultivation process, making it easily reproducible. 2.2 iPSCs generated under 3D conditions exhibit a pluripotent state We conducted further analyses to assess the pluripotency of the newly established iPSCs generated under 3D conditions (3D-iPSCs). Some of the spheres were picked and seeded onto a Laminin-511 E8 fragment-coated dish in StemFit media [ 14 ]. The attached cells exhibited morphologies similar to those of human PSCs cultured in 2D conditions and stained positive for alkaline phosphatase (ALP; Fig. 2 A). Immunostaining revealed expression of the undifferentiated-state markers TRA-1-60, SSEA4, and OCT4 in these cells ( Fig. 2 B). Embryoid bodies (EBs) formed from 3D-iPSCs demonstrated spontaneous differentiation into the three germ layers, as revealed by immunocytochemistry ( Fig. 2 C). The 3D-iPSCs formed tumors following subcutaneous transplantation into immunodeficient mice, and the tumors contained tissues from all three germ layers ( Fig. 2 D). To compare the characteristics of iPSCs generated using the two different reprogramming methods, we performed a human pluripotent stem cell (hPSC) ScoreCard assay, which quantifies the ability of a human PSC line to differentiate into the three germ layers in vitro [ 16 ]. 
We used previously established iPSCs derived from the same parental AdSCs under 2D conditions (AdSC-derived 2D-iPSCs) as a control [ 9 ]. The ScoreCard assay indicated that the AdSC-derived 2D-iPSCs and 3D-iPSCs had similar characteristics; both were in an undifferentiated state and possessed the ability to differentiate into the three germ layers' cell types via EB formation ( Fig. 2 E). Even after passaging, the spheroids increased in size, cleaved into sheets, self-dissociated into smaller pieces, and grew again ( Fig. S1 ). The 3D-iPSCs maintained their growth for over 200 days and retained a stable 46,XX karyotype after prolonged culture at passage nine ( Fig. 2 F). These 3D-iPSCs could be cryopreserved as cell aggregates in stem cell banker solution at −80 °C and were maintained after a normal thawing process ( Fig. 2 G). The newly established 3D-iPSCs were thus in a pluripotent state similar to that of 2D-iPSCs generated using the conventional reprogramming method. Additionally, Q-banding analysis of the 3D-iPSCs demonstrated a stable karyotype, and the cells could be cryopreserved while preserving their spheroid structures. 2.3 3D-iPSCs differentiate into neural and cardiac cells following lineage specifications in 3D culture and under enzymatic dissociation-free conditions To facilitate the use of iPSCs in downstream applications, we employed specific protocols to induce neural and cardiac lineage differentiation. To establish a seamless workflow encompassing iPSC generation, cultivation, and differentiation, we implemented an orbital rotator system and induced differentiation under 3D conditions by transferring 3D-iPSC spheres directly into the differentiation medium ( Fig. 3 A). For neural lineage specification, we cultured 3D-iPSC spheres in a neural induction medium [ 6 ] ( Fig. 3 B). Immunostaining on day 6 following neural induction revealed expression of the neural stem cell markers SOX1 and NESTIN, indicating successful neural lineage commitment. 
Upon cultivation in a neural stem cell expansion medium, we observed the formation of neural rosette-like structures by histological analysis, and immunostaining demonstrated that the cells continued to express NESTIN ( Fig. 3 C). For cardiac specification, we used a PSC cardiomyocyte differentiation kit [ 17 ] ( Fig. 3 D). At approximately day 20, beating cardiomyocyte-like cells were observed (data not shown). Immunostaining revealed the presence of the cardiomyocyte markers α-actinin (ACTN2) and cardiac troponin T (TNNT2) within these beating spheroids ( Fig. 3 E). Three-dimensionally cultured, enzymatic passage-free iPSC spheres can therefore differentiate into specific lineages. These findings highlight the effectiveness of performing the entire series of processes, including somatic cell reprogramming, iPSC expansion, and differentiation into specific cell types, in a 3D-culture format without the need for enzymatic dissociation. 2.4 2D culture- and enzymatic passage-free reprogramming method for blood cells We next determined the applicability of the newly developed method to other somatic cell types, aiming to generate iPSCs from peripheral blood mononuclear cells (PBMCs). To achieve this, we isolated PBMCs from human blood and performed PBMC reprogramming with an SRV™ iPSC Vector in suspension conditions for 2 h at 37 °C. The transfected cells were cultured in 30-mL spinner flasks using a stirred bioreactor system in StemScale media ( Fig. 4 A). Around day 27, cell spheres began to form. As with 3D-iPSC derivation from AdSCs, these blood-derived spheres increased in size and number without the need for single-cell dissociation. Reverse transcription polymerase chain reaction (RT–PCR) confirmed that the GFP-negative spheres did not contain SeV ( Fig. 4 B). The cells were positive for ALP ( Fig. 4 C) and undifferentiated ( Fig. 4 D). These cells differentiated into all three germ layers through random differentiation via EB formation ( Fig. 4 E) and could form teratomas in vivo ( Fig. 4 F). 
Induced pluripotent stem cells were also established from the same donor cells and viral vectors under 2D conditions ( Fig. S2A–D ). Using the TaqMan hPSC Scorecard assay, we compared the PBMC-derived iPSCs in terms of their undifferentiated state and differentiation ability. The Scorecard assay revealed that both PBMC-derived iPSC lines (2D and 3D) had similar characteristics: they were undifferentiated and could differentiate into all three germ layers ( Fig. 4 G). In addition, the PBMC-derived 3D-iPSCs maintained a normal karyotype after prolonged cultivation ( Fig. 4 H). Using blood cells rather than fibroblasts as the cell source for iPSC research is more convenient and less distressing for patients. Thus, this newly established iPSC workflow can also be applied to human blood cells. 3 Discussion We established a novel platform for iPSC research that eliminates the need for 2D culture and enzymatic dissociation, thereby providing a streamlined and simplified workflow. The conventional iPSC research workflow has limitations in terms of robustness and scalability, with complex and cumbersome procedures. Reprogramming cells and establishing iPSCs require specialized skills [ 3 ], and maintaining and expanding iPSCs involve enzymatic or mechanical dissociation, even under 3D conditions [ 4,7 ]. Our enzymatic dissociation-free strategy overcomes these challenges by reducing technical and manual variability. We observed the growth kinetics of 3D-iPSCs: as the spheres increased in size, the central part became filled with dead cells, and the spheres cleaved and fragmented into smaller pieces, consistent with previous reports [ 15 ]. The small clusters continued to proliferate, leading to an increase in the number and size of 3D-iPSC spheres. Despite the potential cell stress induced by such growth kinetics, 3D-iPSCs exhibited a normal karyotype, which is advantageous because karyotypic abnormalities can occur under stress conditions and repeated single-cell dissociation [ 1,2 ]. 
Our 3D culture and enzymatic dissociation–free iPSC workflow eliminates the need for complicated procedures requiring specialized skills. This approach allows for the future automation of iPSC workflows, in which fully automated machines handle reprogramming, maintenance, cryopreservation, and differentiation under 3D conditions. This study outlines the initial steps toward the development of a fully automated system for iPSC research. We successfully reprogrammed human somatic cells under 3D conditions, demonstrating the feasibility of maintaining, expanding, cryopreserving, and differentiating 3D-iPSCs into neural and cardiac lineages. By eliminating the enzymatic dissociation step, our new iPSC generation and cultivation system may facilitate iPSC research in rare disease studies, regenerative medicine, and other applications. 4 Experimental procedure 4.1 Ethics statements Human PBMCs were collected after obtaining written informed consent. All experiments were approved by the Institutional Review Board of the National Center for Child Health and Development (NCCHD) of Japan (permit nos. 385 and 396). All experiments involving human cells were performed in accordance with the tenets of the Declaration of Helsinki (revised 2013). The animal protocol was approved by the Institutional Animal Care and Use Committee of the NCCHD (permit no. A2003-002). All animal experiments followed the three Rs (refinement, reduction, and replacement), with animal discomfort minimized and the number of animals used reduced. 4.2 Reprogramming somatic cells using Sendai virus vector We used commercially available Sendai virus vectors (SRV™ iPSC Vector; TOKIWA-Bio Inc., Ibaraki, Japan): an SRV iPS-2 vector (carrying OCT4, KLF4, SOX2, and C-MYC) for human AdSC reprogramming and an SRV iPS-4 vector (carrying OCT4, KLF4, SOX2, C-MYC, NANOG, and LIN28) for PBMC reprogramming, according to the manufacturer's instructions. 
Both SRV vectors encoded an enhanced GFP reporter gene along with the reprogramming factors. Adipose-derived stem cells were purchased from Lonza Bioscience (PT5006; Walkersville, MD, USA), cultured in ADSC basal medium (Lonza Bioscience) in a humidified atmosphere at 37 °C with 5% CO 2 in air, and then collected using TrypLE Select enzyme (Thermo Fisher Scientific, Waltham, MA, USA) for the reprogramming procedures. Peripheral blood mononuclear cells were separated using a Leucosep™ System (Greiner Bio-One, Kremsmünster, Austria). Each blood sample was poured into a Leucosep tube and centrifuged for 15 min at 1000 × g . After discarding the plasma layer fraction, we harvested the enriched cell fraction into another centrifugation tube and washed it twice with Dulbecco's phosphate-buffered saline (DPBS, Thermo Fisher Scientific). Adipose-derived stem cells or PBMCs were suspended in a medium containing SRV™ iPSC vector at a multiplicity of transfection of 1 or 3. After a 2-h incubation at 37 °C, the cells were washed to remove residual Sendai virus. Subsequently, infected cells were suspended in 30-mL spinner flasks (ABLE Biott, Tokyo, Japan) with StemScale™ PSC Suspension Medium (Thermo Fisher Scientific) and cultured in a single-use bioreactor (ABLE Biott). To facilitate the reprogramming process, we added two small molecules: 5 μM DAPT (NOTCH1 inhibitor; R&D Systems, Minneapolis, MN, USA) and 3 μM iDOT1L (Abcam, Cambridge, UK). 4.3 Suspension cultures of human iPSCs Human iPSCs were cultured in a 30-mL single-use bioreactor (ABLE Biott) at 37 °C with 5% CO 2 in air and agitated at 55 rpm. The cells were suspended in 30 mL of StemScale™ PSC Suspension Medium (Thermo Fisher Scientific). Half of the medium was replaced every other day. Upon growth, some of the primary spheres were transferred to the next bioreactor at each passage. 
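As a rough illustration of the transfection setup described above, the volume of vector stock required for a target multiplicity of transfection can be computed from the cell number and the vector titer. This is a hypothetical helper for planning purposes only; the actual titers and volumes used are not stated in the text.

```python
def vector_volume_ul(n_cells, multiplicity, titer_iu_per_ml):
    """Volume of Sendai virus vector stock (in microliters) needed to
    transfect `n_cells` at the given multiplicity of transfection,
    given the stock titer in infectious units (IU) per mL."""
    infectious_units_needed = n_cells * multiplicity
    volume_ml = infectious_units_needed / titer_iu_per_ml
    return volume_ml * 1000.0  # convert mL to uL

# Example (illustrative numbers): 5e5 PBMCs at a multiplicity of 3
# with a 1e8 IU/mL vector stock
print(vector_volume_ul(5e5, 3, 1e8))  # 15.0
```
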
5 Characterization of 3D-iPSCs 5.1 Alkaline phosphatase staining Three-dimensional iPSC (3D-iPSC) spheres were placed onto an adhesion culture dish and fixed in 4% paraformaldehyde for 20 min at 4 °C. Fixed cells were stained with BCIP/NBT (5-bromo-4-chloro-3-indolyl-phosphate/nitro blue tetrazolium) solution (Nacalai Tesque, Kyoto, Japan) according to the manufacturer's instructions. Images were acquired using a BZ-X700 microscope (Keyence, Osaka, Japan). 5.2 EB formation for in vitro differentiation assay Human iPSC colonies were washed with DPBS, and cells were collected using TrypLE Select. Dissociated cells were seeded in a Costar® Ultra Low Cluster 96 Well Round Bottom Plate (Corning, Inc., New York, NY, USA) at a density of 1.0 × 10 4 cells/well in DMEM/F12-based medium with 20% fetal bovine serum (FBS), 2 mM L-glutamine, 1 mM sodium pyruvate, 0.1 mM nonessential amino acids, 100 U/mL penicillin, and 100 μg/mL streptomycin (all reagents from Thermo Fisher Scientific). The resulting EB cultures were maintained in 96-well plates for 7 days and then replated onto glass-bottom dishes coated with 0.1% gelatin (Sigma-Aldrich, Darmstadt, Germany) for a further 14 days. 5.3 Quantitative RT–PCR analysis Total RNA was extracted from the cell pellet using an RNeasy Mini kit (Qiagen, Hilden, Germany), and DNA was removed using DNase (Thermo Fisher Scientific). First-strand complementary DNA (cDNA) was synthesized using SuperScript IV VILO (Thermo Fisher Scientific). Polymerase chain reaction (PCR) or quantitative PCR was performed using TaKaRa Ex Taq DNA Polymerase (Takara, Shiga, Japan) or TaqMan™ Gene Expression Master Mix (Thermo Fisher Scientific) in a ProFlex PCR System or QuantStudio 7 Real-Time PCR System thermal cycler (Thermo Fisher Scientific). 
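For the EB formation step described above, the number of dissociated cells and the medium volume required can be planned from the stated seeding density of 1.0 × 10^4 cells/well. The helper below is an illustrative sketch; the per-well medium volume is an assumption and is not given in the text.

```python
def eb_seeding_plan(n_wells, cells_per_well=1.0e4, well_volume_ul=100.0):
    """Total cell number and medium volume (mL) needed to seed EB
    cultures in round-bottom 96-well plates at a fixed density.
    `well_volume_ul` is an assumed working volume per well."""
    total_cells = n_wells * cells_per_well
    total_volume_ml = n_wells * well_volume_ul / 1000.0
    return total_cells, total_volume_ml

# One full 96-well plate at the density stated in the protocol
cells, volume = eb_seeding_plan(96)
```
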
To detect Sendai virus (SeV) by RT–PCR, the following primer sets were used: SeV (500 bp), forward 5′-ATATGGAGTACGAGAGGACC-3′ and reverse 5′-CCTCAGGTTGGAGAGAGTCA-3′; β-ACTIN (131 bp), forward 5′-TCCCTGGAGAAGAGCTACG-3′ and reverse 5′-GTAGTTTCGTGGATGCCACA-3′. 5.4 Human pluripotent stem cell Scorecard assay The cDNA was prepared using SuperScript IV VILO (Thermo Fisher Scientific). TaqMan® Human Pluripotent Stem Cell (hPSC) Scorecard™ assays were performed according to the manufacturer's instructions (Thermo Fisher Scientific). The hPSC Scorecard assay was used to investigate iPSC pluripotency by assessing the expression levels of genes that play key roles in self-renewal and in endoderm, mesoderm, and ectoderm development. Gene expression data were analyzed using hPSC Scorecard™ Analysis Software (Thermo Fisher Scientific). 5.5 Teratoma formation for in vivo differentiation assay Approximately 1–5 × 10 7 cells were subcutaneously transplanted into nude mice (BALB/cAJcl-nu/nu; CLEA Japan, Tokyo, Japan). Tumor masses were collected after 2–3 months, fixed with 4% paraformaldehyde, paraffin embedded, sectioned into 5-μm sections, and stained with hematoxylin and eosin. Tumor portions were subjected to histological analysis; the three germ layers were identified based on representative histological features. 5.6 Immunofluorescence staining The 3D-iPSC spheres were fixed with formalin and embedded in paraffin. Sectioned samples were incubated with primary antibodies at 4 °C overnight. After washing with DPBS, samples were incubated for 30 min at 25 °C with secondary antibodies conjugated to Alexa 488 or 546 (Thermo Fisher Scientific). After washing with DPBS, mounting medium containing DAPI was applied. The primary and secondary antibodies used are listed in Table S . Images were acquired with a confocal laser scanning microscope (LSM900; Carl Zeiss, Oberkochen, Germany). 
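The SeV detection primers listed above can be sanity-checked programmatically, for example for length and GC content (common design targets are roughly 18–22 nt and 40–60% GC; these thresholds are general guidelines, not values from the text). A small sketch using the primer sequences as given:

```python
# Primer sequences exactly as listed in the methods text
PRIMERS = {
    "SeV_fwd":  "ATATGGAGTACGAGAGGACC",
    "SeV_rev":  "CCTCAGGTTGGAGAGAGTCA",
    "ACTB_fwd": "TCCCTGGAGAAGAGCTACG",
    "ACTB_rev": "GTAGTTTCGTGGATGCCACA",
}

def gc_fraction(seq):
    """Fraction of G and C bases in a primer sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

for name, seq in PRIMERS.items():
    assert set(seq) <= set("ACGT"), name       # valid DNA alphabet
    assert 18 <= len(seq) <= 22, name          # typical primer length
    assert 0.40 <= gc_fraction(seq) <= 0.60, name  # typical GC range
```
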
5.7 Neural and cardiac lineage induction from 3D-iPSCs To induce differentiation into neural or cardiac lineages, cells were cultured in an orbital shaker (MaxQ 2000 CO 2 Plus, Thermo Fisher Scientific) at 70 rpm. For neural induction, 3D-iPSC spheres were transferred to 6-well culture dishes and cultured in PSC neural induction medium (A1647801; Thermo Fisher Scientific). On day 6, neural induction medium was replaced with neural expansion medium (A1647801; Thermo Fisher Scientific) and cells were cultured for an additional six days. For cardiac induction, 3D-iPSC spheres were transferred to 6-well culture dishes and cultured using a PSC cardiomyocyte differentiation kit (A2921201; Thermo Fisher Scientific) according to the manufacturer's instructions. The medium was changed every other day. On day 20, spheres were embedded in paraffin, sectioned, and immunolabeled for cardiac markers. 5.8 Karyotypic analysis Chromosomal Q-band analyses of 3D-iPSCs were performed by Chromosome Science Labo. Ltd. (Sapporo, Hokkaido, Japan). At least 20 metaphase spreads were examined for each cell line. Author contributions Conceptualization, H.A.; Methodology, M.T., T.K., and H.A.; Investigation, M.T.; Writing-Original Draft, M.T.; Writing-Review and Editing, M.C.V., A.U., and H.A.; Visualization, M.T. and T.K.; Supervision, A.U. and H.A. Declaration of competing interest The authors have no conflicts of interest to report. Acknowledgments We thank Minoru Ichinose for preparing histological samples. This work was supported by grants from the Japan Health Research Promotion Bureau Research Fund ( 2022-B-02 ) and the Japan Agency for Medical Research and Development (AMED) under grant number 20be0304501h0002 (HA). MT was supported by a research grant from a Grant-in-Aid for JSPS Fellows ( 22J01591 and 22KJ3169 ). Appendix A Supplementary data The following is the Supplementary data to this article. 
Multimedia component 1 Supplementary data to this article can be found online at https://doi.org/10.1016/j.reth.2024.02.005 .
|
[
"ASSOU",
"BAI",
"CASTROVINUELAS",
"FATTAHI",
"GALVANAUSKAS",
"HANSEN",
"HOOKWAY",
"HUANG",
"ISONO",
"LEI",
"LIN",
"MORITA",
"NAGASAKA",
"NAKAGAWA",
"NATH",
"TSANKOV",
"WEI",
"WIEGAND"
] |
0e090b4e861140aabf7f3d7ca9831b5b_Differential DNA Methylation Analysis without a Reference Genome_10.1016_j.celrep.2015.11.024.xml
|
Differential DNA Methylation Analysis without a Reference Genome
|
[
"Klughammer, Johanna",
"Datlinger, Paul",
"Printz, Dieter",
"Sheffield, Nathan C.",
"Farlik, Matthias",
"Hadler, Johanna",
"Fritsch, Gerhard",
"Bock, Christoph"
] |
Genome-wide DNA methylation mapping uncovers epigenetic changes associated with animal development, environmental adaptation, and species evolution. To address the lack of high-throughput methods for DNA methylation analysis in non-model organisms, we developed an integrated approach for studying DNA methylation differences independent of a reference genome. Experimentally, our method relies on an optimized 96-well protocol for reduced representation bisulfite sequencing (RRBS), which we have validated in nine species (human, mouse, rat, cow, dog, chicken, carp, sea bass, and zebrafish). Bioinformatically, we developed the RefFreeDMA software to deduce ad hoc genomes directly from RRBS reads and to pinpoint differentially methylated regions between samples or groups of individuals (http://RefFreeDMA.computational-epigenetics.org). The identified regions are interpreted using motif enrichment analysis and/or cross-mapping to annotated genomes. We validated our method by reference-free analysis of cell-type-specific DNA methylation in the blood of human, cow, and carp. In summary, we present a cost-effective method for epigenome analysis in ecology and evolution, which enables epigenome-wide association studies in natural populations and species without a reference genome.
|
Background DNA methylation is an epigenetic mechanism that is indispensable for animal development ( Reik, 2007 ) and also broadly relevant for plant biology ( Law and Jacobsen, 2010 ). Defects in the DNA methylation machinery are associated with widespread changes in cellular identity and interfere with the developmental potential of stem cells ( Jones, 2012 ). Altered DNA methylation patterns are ubiquitous in cancer ( Baylin and Jones, 2011; Feinberg and Tycko, 2004 ), and they have been observed in numerous other diseases ( Portela and Esteller, 2010; Robertson, 2005 ). Moreover, there is mounting evidence for associations between DNA methylation patterns and environmental factors such as stress, nutrition, toxic exposures, and substance abuse ( Foley et al., 2009; Mill and Heijmans, 2013 ). In humans, epigenome-wide association studies (EWASs) have emerged as a widely used paradigm for linking DNA methylation to environmental exposures and to diseases ( Michels et al., 2013; Rakyan et al., 2011 ). A small number of associations between the epigenome and the environment have also been validated in inbred mouse and rat models, for example, identifying connections between early life exposures and the propensity to subsequently develop certain diseases and behavioral phenotypes. A widely discussed hypothesis posits that epigenetic mechanisms provide a mechanistic link between exposures and diseases, thus contributing to the developmental origins of health and disease in humans ( Gillman, 2005; Waterland and Michels, 2007 ). Furthermore, DNA methylation can be transgenerationally inherited at certain genomic loci ( Feil and Fraga, 2011 ) and may contribute to species evolution ( Jablonka and Raz, 2009 ). There is tremendous potential in studying environmental influences and epigenetic inheritance not only in laboratory animals, but also in natural populations and non-model organisms. 
For example, animals in the wild are often exposed to complex evolutionary pressures and ecological interactions that cannot be modeled in the laboratory. Initial studies along these lines have suggested a role of epigenetics in the evolution of Darwin’s finches ( Skinner et al., 2014 ) and in speciation among marsupials ( O’Neill et al., 1998 ), and they identified DNA methylation as a potential source of random variation in natural populations of fish ( Massicotte et al., 2011 ) and songbirds ( Liebl et al., 2013; Schrey et al., 2012 ). However, systematic epigenetic studies in natural populations and non-model organisms have been hampered by the lack of methods for high-resolution and high-throughput DNA methylation analysis that work well across a broad range of species. To date, most studies of DNA methylation in ecology and evolution have relied on low-throughput, gel-based assays such as MS-AFLP ( Schrey et al., 2013 ). Much more powerful assays are being used for DNA methylation analysis in human, including the Infinium microarray, whole-genome bisulfite sequencing (WGBS), and reduced representation bisulfite sequencing (RRBS). However, none of these assays is directly applicable for studying DNA methylation in natural populations and non-model organisms: The Infinium assay requires a commercial microarray that is only available for the human genome ( Bibikova et al., 2011 ); WGBS is excessively expensive when studying more than a handful of samples ( Beck, 2010 ), and RRBS suffers from the technical complexity of the original protocol ( Gu et al., 2011 ) and from concerns that the restriction enzyme MspI may not provide good genome coverage in other species. Furthermore, there is a general lack of bioinformatic methods for analyzing sequencing-based DNA methylation data in the absence of a high-quality reference genome and in genetically diverse populations for which existing reference genomes would unduly bias the analysis. 
Here, we describe an integrated approach for analyzing DNA methylation at single-base-pair resolution in a broad range of species. We combine an optimized high-throughput RRBS protocol with a tailored computational method called RefFreeDMA in order to detect differential DNA methylation without a reference genome. RefFreeDMA constructs a deduced genome directly from RRBS sequencing reads, it maps the sequencing reads to the deduced genome, performs DNA methylation calling, and identifies differentially methylated cytosines and DNA fragments ( Figure 1 ). We validated our method by studying blood cell-type-specific DNA methylation in three species (human, cow, and carp), benchmarking the reference-free analysis against a reference-based analysis using the existing reference genomes. The experimental protocol was also validated in six additional vertebrate species (rat, mouse, dog, chicken, sea bass, and zebrafish). We expect that the described method will be broadly useful for DNA methylation analysis in non-model organisms, for example, to identify and interpret DNA methylation differences between samples (e.g., different cell types) or groups of individuals (e.g., animals that have been exposed to different environments). Results High-Throughput DNA Methylation Mapping in Diverse Animal Species Using RRBS RRBS enables genome-scale DNA methylation mapping at single-base-pair resolution for a fraction of the cost of WGBS ( Meissner et al., 2005 ). It exploits the highly characteristic distribution of DNA methylation in vertebrate genomes, which occurs mainly at CpG dinucleotides. DNA is digested with the restriction enzymes MspI (restriction site: C ∧ CGG) and/or TaqI (restriction site: T ∧ CGA), which are insensitive to DNA methylation at the central CpG, and short size-selected restriction fragments are subjected to bisulfite sequencing ( Figure 2 A). 
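The MspI digestion and size-selection scheme described above can be prototyped in silico to estimate which fragments an RRBS library would cover. The sketch below is a simplified illustration: it considers only the MspI site (C^CGG, cut after the first C), and the size-selection window is an assumed, typical RRBS range rather than the exact window used in the protocol.

```python
import re

def mspi_rrbs_fragments(genome, min_len=40, max_len=220):
    """In silico MspI digestion (cuts C^CGG, insensitive to methylation
    at the central CpG) followed by size selection, approximating the
    fragments captured by an RRBS library. Size window is illustrative."""
    # MspI cuts between the first and second base of each CCGG site
    cut_sites = [m.start() + 1 for m in re.finditer("CCGG", genome)]
    fragments = [genome[a:b] for a, b in zip(cut_sites, cut_sites[1:])]
    return [f for f in fragments if min_len <= len(f) <= max_len]

def count_cpgs(fragment):
    """Number of CpG dinucleotides covered by a fragment."""
    return len(re.findall("CG", fragment))
```

Each retained fragment begins with CGG and ends with C, reflecting the MspI cut on both sides; summing `count_cpgs` over all fragments gives a quick coverage estimate for a candidate genome.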
We adapted an existing RRBS protocol ( Boyle et al., 2012 ) and optimized it for genome coverage and sample throughput (see Experimental Procedures for details). The optimized protocol increases the number of covered CpG sites from ∼2.5M to ∼4M (human genome, using the MspI enzyme), and it allows a single person to process up to 192 samples per week. For most vertebrates, good sequencing coverage can be obtained when 6–12 barcoded samples are sequenced on a single lane of Illumina HiSeq, which makes the protocol approximately 10-fold cheaper than WGBS. To validate the assay, we generated RRBS libraries for nine species (human, rat, mouse, cow, dog, chicken, carp, sea bass, and zebrafish). These libraries showed characteristic fragment length distributions, which reflect the distribution of CpG-rich repetitive elements in these species and which provide a convenient metric for assessing the quality of RRBS libraries prior to sequencing ( Figure 2 B). Using our optimized RRBS protocol, we established a DNA methylation dataset for the major nucleated cell populations in peripheral blood of three species (human, cow, and carp), with four biological replicates per cell type and species. The human and cow datasets comprise granulocytes, monocytes, and lymphocytes, whereas the carp dataset also includes nucleated erythrocytes and one additional leukocyte population that morphologically resembles granulocytes and monocytes ( Figure 3 A). In total, the dataset comprises 44 blood cell samples from three species and 789 million sequencing reads ( Table S1 ). All cell types were fluorescence-activated cell sorting (FACS) purified based on forward and side scatter alone, demonstrating the feasibility of separating blood cell types in species that lack suitable FACS antibodies. The purity of the sorted cell populations was assessed visually through cytospins, and it exceeded 95% in all samples. 
Here, our analysis focuses on DNA methylation differences between these cell populations, but the same sorting strategy can also be used for minimizing the impact of differences in cell composition between individuals, which is a major confounder in human EWAS ( Houseman et al., 2012; Jaffe and Irizarry, 2014 ). RefFreeDMA: Analyzing Differential DNA Methylation without a Reference Genome We devised a workflow for reference-free DNA methylation analysis consisting of six main steps ( Figure 1 ): (1) preparation and sequencing of RRBS libraries, (2) inference of a deduced genome from the RRBS sequencing reads, (3) read alignment to the deduced genome, (4) DNA methylation calling, (5) identification and ranking of differentially methylated CpGs and deduced genome fragments, and (6) functional annotation of differential DNA methylation. RefFreeDMA is implemented as a Linux-based software pipeline, supporting small to moderately sized analyses on a desktop computer (e.g., 40-hr total runtime for 20 samples), whereas large analyses are efficiently parallelized on a computing cluster. A detailed overview of the RefFreeDMA pipeline is provided as a Unified Modeling Language (UML) diagram in Figure S1 . A key aspect of RefFreeDMA is the construction of a deduced genome directly from the RRBS reads. This deduced genome is not based on classical de novo assembly of bisulfite sequencing reads, which is computationally expensive and would require very deep sequencing. Rather, we exploit a specific characteristic of RRBS with its defined fragment start and end positions at MspI restriction sites to simplify the problem. RefFreeDMA constructs the deduced genome by clustering the RRBS reads from all samples in a given species according to their sequence similarity, followed by inference of the consensus sequence for each read cluster. 
In the consensus sequence, positions with both cytosines (Cs) and thymines (Ts) among the clustered reads are retained as Cs ( Figure 1 ), given that they are likely to reflect genomic cytosines that are methylated and protected from bisulfite sequencing in some but not all samples. We developed an efficient two-step approach in which all quality-filtered, non-duplicate sequencing reads are initially clustered in an approximate and computationally efficient manner, followed by a more precise and computationally demanding finalization step (see Experimental Procedures for details). Finally, all consensus sequences are concatenated with spacer sequences (i.e., stretches of Ns) to facilitate computational processing, resulting in a deduced genome that is specific for a given species and analysis but shared among all samples contributing to the analysis. The subsequent steps of read alignment, DNA methylation calling, and differential methylation analysis are performed in much the same way as for DNA methylation analysis with a reference genome ( Bock, 2012 ). Specifically, we use BSMAP/RRBSMAP ( Xi et al., 2012; Xi and Li, 2009 ) for read alignment and a custom DNA methylation calling script ( Bock et al., 2010 ) for calculating the fraction of methylated reads at each CpG position in the deduced genome. Differentially methylated CpGs and deduced genome fragments between sample groups are then identified using a modified t test statistic as described for the RnBeads software ( Assenov et al., 2014 ). The analysis gives rise to lists with individual CpGs as well as deduced genome fragments ranked by their degree of differential methylation. In a final step, the top-ranking differentially methylated fragments are exported as FASTA/FASTQ files, which provide the basis for biological interpretation by cross-mapping to well-annotated genomes and by reference-free motif enrichment analysis. 
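The consensus rule described above, retaining a position as C whenever both C and T are observed among the clustered reads, can be sketched as follows. This is a minimal illustration of the idea, not the RefFreeDMA implementation.

```python
from collections import Counter

def consensus_sequence(clustered_reads):
    """Consensus of a cluster of equal-length bisulfite reads.
    Positions showing both C and T are called C, since the T's most
    likely arise from bisulfite conversion of a genomic cytosine that
    is unmethylated in some samples."""
    consensus = []
    for column in zip(*clustered_reads):
        counts = Counter(column)
        if counts["C"] and counts["T"]:
            consensus.append("C")  # mixed C/T column -> genomic C
        else:
            consensus.append(counts.most_common(1)[0][0])
    return "".join(consensus)
```

A deduced genome is then the concatenation of such per-cluster consensus sequences, separated by spacer runs of Ns as described above.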
The principle behind cross-mapping is to link deduced genome fragments in the analyzed species to orthologous regions in well-annotated genomes of other vertebrate species and to use the genome annotations that are available in the latter species (e.g., genes, transcription factor binding sites, histone modifications, and DNase hypersensitivity sites) for cross-species enrichment analysis. This approach is of course limited to genomic regions that are conserved across species; hence, it is most powerful for species that are closely related to well-characterized model organisms. Motif enrichment analysis provides an alternative approach to biological interpretation that is independent of any reference genomes. It is based on the observations that transcription factor binding motifs are highly conserved across all vertebrates ( Nitta et al., 2015 ) and that DNA methylation levels at motif sequences have been shown to correlate with cell-type-specific transcription factor binding ( Bock et al., 2012; Feldmann et al., 2013; Stadler et al., 2011 ). By analyzing motif enrichment among differentially methylated DNA fragments using existing databases (such as JASPAR; Mathelier et al., 2014 ) and software tools (such as AME; McLeay and Bailey, 2010 ), it is possible to gain insight into the regulatory mechanisms that distinguish the studied cell types and sample groups. Validating Reference-Free DNA Methylation Analysis across Three Species and 44 Samples To validate our approach, we performed reference-free analysis of the RRBS blood cell dataset ( Figure 3 A) and compared the results to those obtained by reference-based analysis of the same data (see Experimental Procedures for details). The fraction of aligned reads was in the range of 90% to 98% for the deduced genomes and slightly lower (75% to 95%) for the published reference genome of each species ( Figure 3 B; Table S1 ). 
The number of covered CpGs was predominantly species-specific (3–4 million for human, ∼3 million for cow, and 1.5–2 million for carp) and broadly similar between the reference-based and reference-free analyses. Average DNA methylation levels at CpG sites were also similar for both approaches, whereas the observed C-to-T conversion rates at non-CpG sites were substantially lower in the reference-free analysis ( Table S1 ). This is because ubiquitously unmethylated Cs (which in vertebrates are mostly found in non-CpG context) are counted as Ts by the reference-free analysis (case 4 in Figure S2 ) and therefore do not contribute to high non-CpG conversion rates. To circumvent this potential problem, our RRBS protocol uses methylated and unmethylated spike-in controls to monitor bisulfite conversion rates ( Table S1 ), rather than relying on non-CpG conversion rates. The issue can also be avoided altogether by sequencing a single RRBS sample without bisulfite conversion and including it in the analysis. Finally, to assess the comparative performance of our reference-free method, we benchmarked it against simply cross-mapping the RRBS reads for carp to the well-annotated genomes of human, mouse, and zebrafish. The results showed one to two orders of magnitude higher genome-wide CpG coverage with RefFreeDMA than with the basic cross-mapping approach ( Table S2 ). We also compared the alignment of individual reads, the coverage of individual CpGs, and the DNA methylation levels of single CpGs and deduced genome fragments between the two approaches. To that end, the deduced genome fragments were aligned to the corresponding reference genome, allowing us to link most RRBS fragments (human: 1,254,324 out of 1,522,786; cow: 1,276,537 out of 1,521,946; and carp: 455,821 out of 780,757) to their putative position in the reference genome. 
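Estimating bisulfite conversion efficiency from the unmethylated spike-in controls mentioned above reduces to counting how many reference cytosines read out as thymines. A minimal sketch, assuming the reads are already aligned to the spike-in reference without indels:

```python
def bisulfite_conversion_rate(spikein_reference, aligned_reads):
    """Conversion rate from an unmethylated spike-in control: the
    fraction of reference C positions observed as T in the reads.
    Values close to 1.0 indicate near-complete bisulfite conversion."""
    converted = 0
    total = 0
    for read in aligned_reads:
        for ref_base, obs_base in zip(spikein_reference, read):
            if ref_base == "C":
                total += 1
                if obs_base == "T":
                    converted += 1
    return converted / total
```
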
More than 75% of reads and CpGs in non-repetitive regions were concordantly mapped by both approaches ( Figure 3 C), whereas the agreement was much lower for repetitive regions and reads that map to multiple positions in the genome ( Figure S3 A). We investigated these discrepancies and identified four scenarios in which the reference-free and reference-based methods may deviate ( Figure S2 ). Most frequently, a sequencing read maps to multiple positions throughout the reference genome, and the aligner randomly assigns it to one of these positions. We indeed observed similarly low concordance rates in repetitive regions when running the reference-based method twice with different random seed parameters ( Figure S3 A). Based on these results, it might even be argued that clustering and combining highly similar repetitive reads into a single consensus provides a more appropriate way of handling multimapping reads than their random assignment in the reference-based analysis, and similar approaches have successfully been used for studying epigenetic marks in repetitive regions of the genome ( Bock et al., 2010; Day et al., 2010 ). Finally, despite these special cases, we observed excellent agreement between the two approaches when plotting alignment positions across a representative chromosome ( Figure S3 B), and the DNA methylation values obtained with the two approaches were highly correlated in all samples and all species, with Pearson correlation coefficients above 0.9 across all CpGs and fragments and above 0.95 for those CpGs and fragments with good sequencing coverage ( Figures 3 D, 3E, and S3 C). Reference-Free Analysis of Differential DNA Methylation between Cell Types of the Blood Importantly, the reference-free method recapitulated the known biological similarities and differences among the blood cell types in almost perfect concordance with the reference-based method ( Figure 4 A). 
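The coverage-dependent correlation analysis above can be reproduced on any pair of methylation callsets by restricting the comparison to CpGs that are well covered in both analyses. A simplified sketch; the coverage threshold of 5 reads is an assumption for illustration, not a value from the text:

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

def methylation_concordance(ref_free, ref_based, min_coverage=5):
    """Correlate per-CpG methylation levels between the two analyses.
    Each input maps a CpG identifier to (methylation_level, coverage);
    only CpGs meeting the coverage threshold in both are compared."""
    shared = [cpg for cpg in ref_free
              if cpg in ref_based
              and ref_free[cpg][1] >= min_coverage
              and ref_based[cpg][1] >= min_coverage]
    return pearson_r([ref_free[c][0] for c in shared],
                     [ref_based[c][0] for c in shared])
```
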
Many genes with a known role in hematopoietic cells were identified by both methods, as illustrated by the myeloid-specific MPO gene and the lymphoid-specific LAX1 gene ( Figure 4 B). There was also a strong correlation (r ≥ 0.95) between the differential DNA methylation ranks obtained with the two methods in all three species ( Figure S4 A). Furthermore, the vast majority of the top-1,000 differentially methylated fragments identified by the reference-free method were also among the top-1,000 or top-5,000 differentially methylated regions identified by the reference-based method ( Figure S4 B). The magnitudes of the DNA methylation differences calculated by the two methods were also highly correlated ( Figure S4 C). Furthermore, both methods identified a consistent and biologically interesting trend toward increased DNA methylation levels in lymphoid as opposed to myeloid cells, which was very prominent in human, weaker in cow, and essentially absent in carp ( Figures 4 C and S4 D), suggesting species-specific differences in the genome-wide regulation of DNA methylation in the hematopoietic system. We pursued two complementary approaches for interpreting the identified DNA methylation differences without a reference genome for the target species. First, we cross-mapped the deduced genome fragments obtained in each species to the human and mouse genomes, for which extensive functional genomics data exist from projects such as ENCODE ( ENCODE Project Consortium, 2004 ), IHEC ( http://www.ihec-epigenomes.org/ ), and BLUEPRINT ( Adams et al., 2012 ). Cross-species mapping rates were expectedly low, amounting to ∼20% for human and cow and ∼10% for carp at a maximum mismatch rate of 20% ( Figure S5 A). Nevertheless, for those deduced reference fragments that did map, we were able to perform enrichment analysis relative to the extensive biological annotations of the human and mouse genomes. 
Fragments that were less methylated in lymphocytes as compared with granulocytes (hypermethylated in granulocytes) were often associated with lymphoid-specific regulatory elements and transcription factor binding mapped by ChIP-seq and similar technologies ( Figures 5 A and S5 B). The enrichment was not always consistent between species, but we found recurrent and biologically meaningful associations. Most notably, the binding sites of two key myeloid transcription factors, CEBPA and CEBPB ( Akagi et al., 2010; Rosenbauer and Tenen, 2007 ), were hypermethylated in both human and cow lymphocytes, and binding sites of MYB, a transcription factor implicated in lymphocyte and erythrocyte development ( Greig et al., 2008 ), were hypermethylated in human and cow granulocytes. In contrast, carp appears to be too evolutionarily distant to obtain interesting results by cross-mapping to mammalian genomes ( Figure S5 B). Second, we exploited the fact that transcription factor binding motifs are much more conserved than most regulatory elements ( Nitta et al., 2015 ) and performed alignment-free motif enrichment analysis for those deduced reference fragments that were most differentially methylated between lymphocytes and granulocytes. In all three species, there was a higher ratio of GC-rich and CpG-rich motifs among fragments that are hypermethylated in granulocytes ( Figures 5 B and S5 C), which we corrected for in the motif analysis by using random sequences with matched base composition as controls (see Experimental Procedures for details). Those fragments that were less methylated in lymphocytes (hypermethylated in granulocytes) were enriched for 29 sequence motifs, of which four were shared across two species (EGR2, KLF5, KLF1, and RREB1; shown in Figure S5 D). 
Those fragments that were less methylated in granulocytes (hypermethylated in lymphocytes) were enriched for 40 sequence motifs, and four motifs were shared between all three species (CEBPA, CEBPB, HLF, and JUN) ( Figures 5 C and S5 D). Three of these transcription factors are well-established regulators of myeloid cell differentiation ( Akagi et al., 2010; Orkin, 1995; Rosenbauer and Tenen, 2007 ), whereas HLF is associated with hematopoietic stem cells ( Gazit et al., 2013 ). Finally, we also searched for motifs that were enriched in lymphocyte-specific as well as in granulocyte-specific differentially methylated fragments ( Figures 5 C and S5 E), and a total of 27 sequence motifs were identified, of which six were shared across all three species (BRCA1, FOXL1, PAX4, RREB1, RUNX1, and RUNX2). Of these, RUNX1 and RUNX2 in particular are known to play a role in both lymphoid and myeloid cell differentiation and function ( Klunker et al., 2009; Liebermann and Hoffman, 2002; Tenen et al., 1997 ). Discussion We present an integrated experimental and computational method for DNA methylation analysis and interpretation in non-model organisms, unsequenced species, and natural populations. Our method addresses a major bottleneck for epigenome studies in the context of comparative genomics, ecology, and evolution, where whole genome bisulfite sequencing is rarely affordable for sufficiently large cohorts and other widely used methods such as MS-AFLP are strongly limited in the information they can provide. On the experimental side, our method uses an optimized 96-well RRBS protocol, which provides an excellent trade-off between single-base-pair resolution, affordable cost, and practical feasibility for studies with hundreds (or even thousands) of individuals. 
Building upon the track record of RRBS in mouse and human and the popularity of reduced representation genome sequencing assays such as RAD-seq ( Baird et al., 2008 ) and GBS ( Elshire et al., 2011 ) for research in natural populations and non-model organisms, we expect our method to be broadly useful for EWASs in the context of ecology and evolution. The described method should be applicable to any animal and plant species with appreciable levels of DNA methylation, and it is readily adapted to different genome compositions and sequencing depths by selecting an appropriate restriction enzyme (or enzyme combinations). Here we focused on vertebrates, where DNA methylation is largely restricted to CpG dinucleotides and the MspI restriction enzyme is an ideal choice. MspI enriches for CpG islands and gene promoters, while also providing a broad sampling of other genomic regions such as enhancers, gene bodies, CpG island shores, and repetitive elements. Furthermore, every read contains at least one CpG (at the MspI restriction site), which increases cost-effectiveness for vertebrate genomes. Importantly, our method can be used to map not only CpG methylation, as we demonstrate here, but also non-CpG methylation ( Ziller et al., 2011 ), which is widespread among non-vertebrate species and also present in certain vertebrate cell types. On the computational side, we developed the RefFreeDMA method and software to build a deduced genome directly from the bisulfite sequencing reads, to quantify DNA methylation at the level of single CpG sites and deduced fragments, and to detect and rank DNA methylation differences between samples and sample groups. 
RefFreeDMA overcomes relevant limitations of an existing method that uses de novo assembly of MeDIP-seq reads ( Kaspi et al., 2014 ), namely low resolution, susceptibility to biases, and lack of quantification, and it is more powerful and more widely applicable than read mapping to the genome of a related species ( Weyrich et al., 2014 ), which requires a closely matched genome and a second, unconverted library. Furthermore, we present two approaches (cross-mapping and motif enrichment analysis) for interpreting the identified differentially methylated regions in the absence of a reference genome. To validate our method, we established and analyzed a cross-species DNA methylation dataset comprising multiple blood cell types in two mammalian species (human and cow) and one fish (carp). All cell types were enriched to >95% purity by a sorting strategy that is particularly useful for working with non-model organisms because it does not require any species-specific antibodies. Bioinformatic analysis in the three species with and without the respective reference genomes gave rise to consistent and informative results. For example, we observed that the most differentially methylated fragments in the two mammalian species were predominantly hypermethylated in lymphocytes, whereas no such bias was present in carp ( Figures 4 C and S4 D). We also identified characteristic binding motifs of lineage-specific transcription factors that were consistently enriched among differentially methylated fragments of all three species ( Figure 5 C). Despite the good results that we obtained in our validation of RefFreeDMA, there are several inherent limitations of reference-free DNA methylation analysis that potential users of our method should keep in mind. First, repetitive elements with high sequence similarity can get merged into a single deduced genome fragment, which is why RefFreeDMA tends to report moderately fewer covered CpGs than we obtained using reference-based analysis. 
Second, cytosines that are unmethylated in all samples of one species will not be represented in the deduced genome (case 4 in Figure S2 ), unless one RRBS sample is sequenced without bisulfite conversion and added to the analysis. Third, our method does not perform de novo assembly of deduced genome fragments, which would require substantially deeper and broader sequencing coverage than is typically affordable. It can therefore happen that the same CpG is included twice in two partially overlapping fragments (case 2 in Figure S2 ). However, based on our analysis of the validation dataset, this type of bias appears to be negligible ( Figure S4 C). In summary, we expect that RefFreeDMA in combination with our optimized RRBS protocol will be useful for researchers who are interested in analyzing DNA methylation in non-model organisms without the need of a reference genome. Apart from assessing cell-type-specific DNA methylation as demonstrated here, other applications of RefFreeDMA may include EWASs for phenotypic differences in natural populations, agricultural research on the epigenetic effect of different feeds, drugs, and rearing conditions, and meta-epigenome studies of DNA methylation in entire ecosystems. Experimental Procedures Sample Acquisition For human, cow, and carp, 5–10 ml of peripheral blood was obtained from two male and two female individuals, anti-coagulated by 2 mg/ml K 2 EDTA and processed within 1 hr after collection. Human blood samples were obtained by venipuncture from healthy donors by a qualified physician. All donors provided informed consent. The study was conducted in accordance with the principles laid down in the Declaration of Helsinki, overseen by the ethics commission of the Medical University of Vienna. Cow blood samples were obtained post-mortem from a slaughterhouse. Carp blood samples were obtained post-mortem from a fish vendor. 
For the other species (mouse, rat, dog, chicken, sea bass, and zebrafish), purified DNA was provided by the collaborators listed in the Acknowledgments . Cell Purification Leukocytes were isolated from whole blood by removing the erythrocytes through hypotonic lysis. Specifically, 5 ml of whole blood was incubated with 9 ml ddH 2 O for 1 min. The lysis was stopped by adding 1 ml of 10× PBS to the sample. Leukocytes were pelleted by centrifuging for 5 min at 550 g . If the pellet was still red, a second round of lysis was initiated by resuspending the pellet in 1 ml 1× PBS. Subsequently, 4.5 ml of ddH 2 O was added and after 30 s the lysis reaction was stopped by adding 0.5 ml 10× PBS. Leukocytes were pelleted by centrifuging for 3 min at 550 g . Finally, the pellet was washed in 1 ml 1× PBS and then resuspended in 500–800 μl RPMI-1640 medium supplemented with 10% fetal calf serum (FCS). The cell suspension was then filtered into a FACS tube, and cell populations were sorted by FACS based on their forward and side scatter properties. Sorting was performed on a BD FACS Aria 1 with a 70-μm nozzle, which allowed for a maximum sorting speed of 30,000 events per second. For each population, between 500,000 and 3 million cells were obtained. Giemsa stained cytospins were produced for each sorted cell population, and the purity was assessed at 100× magnification. DNA Isolation The Allprep DNA/RNA Mini kit (QIAGEN) was used for DNA isolation. Cells were lysed in 600 μl Buffer RLT Plus supplemented with 1% β-Mercaptoethanol and vortexed thoroughly for at least 5 min. The procedure of isolating DNA and RNA was performed according to protocol. DNA was stored at −20°C. RRBS Library Preparation For RRBS, 100 ng of genomic DNA was digested for 12 hr at 37°C with 20 units of MspI (New England Biolabs, R0106L) in 30 μl of 1× NEB buffer 2. 
To retain even the smallest fragments and to minimize the loss of material, end preparation and adaptor ligation were performed in a single-tube setup. End fill-in and A-tailing were performed by addition of Klenow Fragment 3′ > 5′ exo- (New England Biolabs, M0212L) and dNTP mix (10 mM dATP, 1 mM dCTP, 1 mM dGTP). After ligation to methylated Illumina TruSeq LT v2 adaptors using Quick Ligase (New England Biolabs, M2200L), the libraries were size selected by performing a 0.75× cleanup with AMPure XP beads (Beckman Coulter, A63881). The libraries were pooled in combinations of six based on qPCR data and subjected to bisulfite conversion using the EZ DNA Methylation Direct Kit (Zymo Research, D5020) with the following changes to the manufacturer’s protocol: conversion reagent was used at 0.9× concentration, incubation performed for 20 cycles of 1 min at 95°C, 10 min at 60°C, and the desulphonation time was extended to 30 min. These changes increase the number of CpG dinucleotides covered by reducing double-strand break formation in larger library fragments. Bisulfite-converted libraries were enriched using PfuTurbo Cx Hotstart DNA Polymerase (Agilent, 600412). The minimum number of enrichment cycles was estimated by qPCR. After a 2× AMPure XP cleanup, quality control was performed using the Qubit dsDNA HS (Life Technologies, Q32854) and Experion DNA 1k assays (BioRad, 700-7107). RRBS libraries were sequenced on the Illumina HiSeq 2000 platform in 50-bp single-read mode. Bisulfite Conversion Controls In order to monitor the efficiency of the bisulfite conversion and to check for underconversion of unmethylated cytosines as well as overconversion of methylated cytosines, custom-designed and synthesized methylated and unmethylated oligonucleotides were spiked into each sample at a concentration of 0.1% of the genomic DNA. For each sample, sequencing reads were aligned to the control sequences using Bismark with default settings ( Krueger and Andrews, 2011 ). 
Conversion metrics are reported in Table S1 . RRBS Data Preprocessing Sequencing data were processed with illumina2bam-tools v.1.12, and the resulting BAM files were converted to fastq format using SamToFastq.jar (picard-tools v.1.100) with the INCLUDE_NON_PF_READS parameter set to FALSE. All reads were trimmed for adaptor sequences and low-quality sequences using trimgalore v.0.3.3 ( http://www.bioinformatics.babraham.ac.uk/projects/trim_galore/ ) with the following command: trim_galore -q 20 --phred33 -a "AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC" --stringency 1 -e 0.1 --length 16 --output_dir $output_dir $input_fastq. Derivation of a Deduced Genome Based on the trimmed RRBS reads for a given species and analysis, a deduced genome is constructed in six steps: (1) Pre-filtering. To reduce the number of reads that need to be processed, one representative read is kept for each read sequence and sample. Furthermore, reads that stand a high chance of arising from sequencing errors are discarded by requiring that each read occurs at least twice among four samples after converting all Cs to Ts. (2) Preliminary read grouping. To be computationally effective, we perform read grouping initially by exact string matching. Reads that share the same sequence in their fully converted form (all Cs replaced by Ts) are combined into one pre-consensus sequence by assigning a C to each position at which at least 5% of the reads contain a C in their unconverted form. (3) Consensus building. To combine highly similar but not identical fragments into one consensus, the pre-consensus fragments are grouped by sequence similarity using an all-against-all alignment of the C to T converted fragments with Bowtie2 v.2.2.3 ( Langmead and Salzberg, 2012 ) using the following command: bowtie2 -t -q --phred33 --end-to-end -N 1 -L 22 --norc --n-ceil "L,0,0.2" --mp 3 --np 0 --score-min "L,-0.6,-0.6" -k 300 -D 3 --rdg "20,20" --rfg "20,20" -p 4 -x $reference -U $fastq -S $out_sam . 
Fragments that match with less than 8% maximum mismatch ratio are merged by assigning them to the largest available group. For each group, a consensus sequence is deduced by assigning the majority base to each position, while assigning Cs to all positions at which at least 5% of the fragments contain a C. (4) Consensus refinement. For those groups in which some fragments exhibit more than 5% mismatches relative to the consensus, the diverging reads are assigned to separate groups, and a new consensus is built for the respective groups. This procedure is repeated until no fragment-to-consensus mismatch rate exceeds 5%. (5) Merging of reverse complements. After bisulfite conversion, reads originating from the two strands of the same DNA fragment are often not identified as reverse complements during the Bowtie2 alignment and are therefore not automatically merged into one consensus. To overcome this problem, all reads that start and end with the RRBS restriction site (MspI: 5′ [CT]GG – [CT][CT]G 3′) are tested for whether they become perfect reverse complements of each other when all Cs are replaced by Ts and all Gs are replaced by As. For each pair to be merged, a consensus is formed by assigning a C to all T positions in the sequence of the forward partner at which the reverse-complement partner shows a C. (6) Concatenation into one deduced genome. In the final step, the merged deduced genome fragments are concatenated into one deduced genome that can be used for alignment, DNA methylation calling, and differential methylation analysis in the same way as a regular reference genome. To avoid creating artificial sequences at the concatenation sites, spacer sequences consisting of 50 Ns (equaling the read length) are added between the deduced genome fragments. Of note, all key parameters in RefFreeDMA have been empirically optimized and can be changed by the user of the software. 
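To make the read-grouping logic of steps 1–3 concrete, here is a toy Python sketch of the core idea: reads that are identical after full in-silico bisulfite conversion (every C replaced by T) are grouped, and a C is called at every position where at least 5% of the unconverted reads carry a C. The input reads are invented for illustration; the actual RefFreeDMA implementation additionally handles sequencing-error pre-filtering, similarity-based merging via Bowtie2, and reverse-complement collapsing.

```python
from collections import defaultdict

def fully_convert(read):
    # In-silico "full bisulfite conversion": every C becomes a T.
    return read.replace("C", "T")

def build_pre_consensus(reads, min_c_fraction=0.05):
    """Group reads that are identical after C->T conversion and call a C
    at every position where at least min_c_fraction of the unconverted
    reads carry a C (the 5% rule of the pre-consensus step)."""
    groups = defaultdict(list)
    for read in reads:
        groups[fully_convert(read)].append(read)
    consensi = []
    for converted, members in groups.items():
        consensus = list(converted)
        for pos in range(len(converted)):
            n_c = sum(1 for r in members if r[pos] == "C")
            if n_c / len(members) >= min_c_fraction:
                consensus[pos] = "C"  # a C here implies converted[pos] == "T"
        consensi.append("".join(consensus))
    return consensi

reads = [
    "CGGATCGA",  # both CpGs methylated, Cs retained
    "TGGATTGA",  # same locus, fully converted (unmethylated)
    "TGGATCGA",  # same locus, partially methylated
    "ACCTTGCA",  # a different locus
]
print(sorted(build_pre_consensus(reads)))  # → ['ACCTTGCA', 'CGGATCGA']
```

Note how the first three reads collapse into a single deduced fragment in which every cytosine position seen in at least one read is restored, so that DNA methylation can later be quantified against it.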
Mapping and DNA Methylation Calling Bisulfite alignment of the RRBS reads to the deduced genomes and to the reference genomes, as well as the mapping of the deduced genome fragments to the reference genomes was performed using BSMAP v2.74 ( Xi and Li, 2009 ) with the following command line: bsmap -a $input_fastq -d $ref_genome_fasta -o $output_bam -D C-CGG -w 100 -v 0.08 -r 1 -p 4 -n 0 -S 1 -f 5 -u . For cross-mapping and alignment to the deduced genomes, the -D parameter was not set, disabling the RRBS mode to allow mapping of reads independently of restriction sites. Also, for cross-mapping, the maximum allowed error rate ( -v ) was set to 0.2. The human (hg19) and cow (bosTau6) reference genomes were downloaded from the UCSC Genome Browser, and the carp reference genome was downloaded from the European Nucleotide Archive (ENA) project PRJEB7241 assembly GCA_000951615.1. For better handling, the 9,377 scaffolds of the carp genome were concatenated into ten artificial chromosomes using stretches of Ns as separators. DNA methylation calling was performed using the biseqMethCalling.py software ( Bock et al., 2010 ). Differential Methylation Analysis CpG sites exhibiting differential DNA methylation between predefined groups of samples were identified using hierarchical linear models as implemented in the limma R package. Multiple testing correction was performed for CpG sites using the false discovery rate method implemented in R's p.adjust() function. To assess the significance of differential DNA methylation for entire fragments, multiple testing corrected p values for all CpG sites contained in a fragment were combined using an extension of Fisher's method ( Makambi, 2003 ) as implemented in RnBeads ( Assenov et al., 2014 ). 
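As an illustration of the fragment-level statistics, the following self-contained Python sketch combines per-CpG p values with the classic Fisher method (exploiting the closed-form chi-square survival function for even degrees of freedom) as a simplified stand-in for the Makambi extension implemented in RnBeads, and applies the worst-of-several rank aggregation used to prioritize fragments. The p values and ranks are invented for illustration.

```python
import math

def fisher_combined_p(p_values):
    """Classic Fisher combination: X = -2 * sum(ln p) follows a chi-square
    distribution with 2k degrees of freedom. For even df the survival
    function has a closed form, so no statistics library is needed."""
    x = -2.0 * sum(math.log(p) for p in p_values)
    k = len(p_values)  # df = 2k, always even
    half = x / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))

def worst_rank(per_metric_ranks):
    """Represent each fragment by the worst (largest) of its per-metric
    ranks, e.g. for p value, log fold change, and methylation difference."""
    return {frag: max(ranks.values()) for frag, ranks in per_metric_ranks.items()}

# Hypothetical fragments with per-CpG adjusted p values
fragments = {"frag_1": [0.01, 0.04, 0.20], "frag_2": [0.40, 0.70]}
combined = {f: fisher_combined_p(ps) for f, ps in fragments.items()}

# Hypothetical per-metric ranks (1 = best)
ranks = worst_rank({
    "frag_1": {"p": 1, "logFC": 2, "diff": 1},
    "frag_2": {"p": 2, "logFC": 1, "diff": 3},
})
```

With these inputs, frag_1 obtains a combined p value below 0.01 and representative rank 2, whereas frag_2 remains non-significant with representative rank 3; only fragments that score well on every metric end up at the top of the priority list.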
Differentially methylated fragments were priority ranked based on statistical significance as well as effect size, calculating ranks individually for p value, log fold change, and absolute difference in DNA methylation levels and then selecting the worst of the three ranks as representative for the fragment. This way, fragments that achieve top ranks in all of the measures are favored, whereas fragments that are assigned a bad rank in one or more of the measures are penalized. Software Properties RefFreeDMA is a Linux-based software pipeline that supports the various steps of reference genome independent analysis of differential DNA methylation based on RRBS data. External software requirements are limited to standard command line tools for next generation sequencing analysis, including picardtools, samtools, trimgalore, bowtie2, and bsmap. Runtime and memory usage depend on the number of samples, the number of reads per sample, the RRBS library complexity, and whether RefFreeDMA’s support for parallelization is used. For the presented datasets, which comprise 12 to 20 samples per species with ∼18 million 50-bp single-end reads per sample, one complete run using four cores (Intel Xeon E5-2650 processor) takes about 9 hr (wall-clock time) with parallelization and 40 hr (wall-clock time) without. The peak memory usage is 15 GB during consensus building. Although this study focuses on CpG methylation, our software also supports non-CpG methylation (when the nonCpG parameter is set to TRUE). RefFreeDMA is available as open source under the GPLv3 license: http://RefFreeDMA.computational-epigenetics.org . Comparison between Reference-Free and Reference-Based Analysis Correspondence between the published reference genomes and the deduced genomes is determined by mapping the deduced genome fragments to the corresponding reference genome. The resulting associations between CpG sites in the deduced genome and the reference genome serve as the basis for the validations. 
Figure S2 depicts the correct match between the two approaches (case 1) as well as four scenarios in which discrepancies between reference-free and reference-based analysis are expected (cases 2 to 5). Comparisons between the reference-free and reference-based approaches are performed at the level of individual CpGs and at the level of deduced genome fragments. Cross-Mapping Analysis In order to establish a connection between deduced genome fragments identified by RefFreeDMA in one species and well-annotated genomes of other species, deduced fragments were mapped to the human genome (hg19) and the mouse genome (mm10) using BSMAP/RRBSMAP with a maximum allowed mismatch rate of 20% as described in Mapping and DNA Methylation Calling . Overlaps between the genomic positions of mapped deduced genome fragments and annotations on the respective genome can then be used to perform enrichment analysis for the deduced fragments. We assessed differentially methylated fragments for enrichment of genomic annotations using LOLA ( Sheffield and Bock, 2015 ). LOLA tests for significant enrichment of overlap between user-defined genomic regions of interest (i.e., the fragment mapping positions) and experimentally annotated genomic regions, which are provided as a database. The matched genomic regions for the differentially methylated fragments (mean coverage > 2 and adjusted p < 0.05) of granulocytes or lymphocytes were used as primary input regions (user set), while the genomic regions of all mapped deduced genome fragments were used as background (universe). The regions database for human (hg19) consisted of region sets downloaded from Cistrome, CODEX, ENCODE, and the UCSC Genome Browser as well as custom sets for DNase hypersensitivity sites ( Sheffield et al., 2013 ). The region database for mouse (mm10) consisted of region sets downloaded from CODEX and ENCODE. 
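The core of such a region-set enrichment test can be sketched in a few lines of Python: count how often the user set and the rest of the universe overlap a database region set, then apply a one-sided Fisher's exact test to the resulting 2×2 table. This is only a toy illustration of the principle behind LOLA (which is an R package with many additional features), and the genomic intervals below are invented.

```python
import math

def fisher_exact_greater(a, b, c, d):
    """One-sided (enrichment) Fisher's exact test on the 2x2 table
    [[a, b], [c, d]], computed from the hypergeometric distribution."""
    n = a + b + c + d
    p = 0.0
    for x in range(a, min(a + b, a + c) + 1):
        p += (math.comb(a + b, x) * math.comb(c + d, a + c - x)
              / math.comb(n, a + c))
    return p

def overlaps(region, region_set):
    # Half-open intervals (start, end) on the same chromosome.
    s, e = region
    return any(s < e2 and s2 < e for s2, e2 in region_set)

def region_enrichment(user_set, universe, db_set):
    """Is the user set enriched for overlap with db_set, relative to the
    rest of the universe? (user_set is assumed to be part of universe)"""
    a = sum(overlaps(r, db_set) for r in user_set)      # user, overlap
    b = len(user_set) - a                               # user, no overlap
    c = sum(overlaps(r, db_set) for r in universe) - a  # rest, overlap
    d = len(universe) - len(user_set) - c               # rest, no overlap
    return fisher_exact_greater(a, b, c, d)

user = [(0, 10), (20, 30)]                              # "differentially methylated"
rest = [(100, 110), (120, 130), (140, 150)]             # other mapped fragments
tf_sites = [(5, 8), (25, 27)]                           # a database region set
print(region_enrichment(user, user + rest, tf_sites))   # → 0.1
```

Here both user fragments, but none of the background fragments, overlap the hypothetical binding-site set, giving the smallest p value attainable with such a small universe.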
Motif Enrichment Analysis Motif enrichment analysis was performed using the command-line version of the AME tool ( McLeay and Bailey, 2010 ) from the MEME package. We used the average odds score as sequence scoring method and the rank-sum test as motif enrichment test. All motifs were obtained from the JASPAR CORE (2014) Vertebrates database ( Mathelier et al., 2014 ). Only enrichments with an adjusted p value lower than 0.05 were reported. In order to find motifs that are differentially enriched among differentially methylated fragments, the top-500 differentially methylated fragments (mean coverage > 2 and adjusted p < 0.05) of one sample group were used as primary input sequences, while the top-500 differentially methylated fragments of the other group were used as background (control sequences). To correct for motif enrichment due to base composition bias ( Figures 5 B and S5 C), we performed the same analysis on random sequences that were constructed to reflect the base compositions of both groups at the single-nucleotide and dinucleotide levels in 50 iterations each. To this end, the base compositions of the original sequences were determined using the fasta-get-markov tool from the MEME package. The 0th- and 1st-order Markov models for each group were then used as input for the gendb tool, which constructed 500 random sequences (length ∼50 bases) according to the models. This process was repeated 50 times with different random seeds. Finally, for each iteration AME was run on the shuffled sequences of one group as input and the shuffled sequences of the other group as background. All motifs that were detected as significantly enriched in more than 60% of all iterations were identified as false positives due to base composition bias and removed from the list of differentially enriched motifs identified for the original sequences. 
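The construction of base-composition-matched control sequences can be approximated in plain Python: estimate a first-order Markov model from the input sequences (the role of fasta-get-markov) and sample random sequences from it (the role of gendb). The input sequences below are made up for illustration; the real analysis of course uses the top differentially methylated fragments.

```python
import random

BASES = "ACGT"

def markov1_model(sequences):
    """First-order Markov model: initial and transition probabilities,
    with +1 pseudocounts so unseen transitions keep nonzero probability."""
    start = {b: 1 for b in BASES}
    trans = {a: {b: 1 for b in BASES} for a in BASES}
    for seq in sequences:
        start[seq[0]] += 1
        for x, y in zip(seq, seq[1:]):
            trans[x][y] += 1
    def norm(d):
        total = sum(d.values())
        return {k: v / total for k, v in d.items()}
    return norm(start), {a: norm(trans[a]) for a in BASES}

def sample_sequence(start_p, trans_p, length, rng):
    """Draw one random sequence from the model."""
    def pick(probs):
        return rng.choices(list(probs), weights=list(probs.values()))[0]
    seq = [pick(start_p)]
    while len(seq) < length:
        seq.append(pick(trans_p[seq[-1]]))
    return "".join(seq)

rng = random.Random(0)  # fixed seed for reproducibility
start_p, trans_p = markov1_model(["ACGCGT", "CCGGAT", "ACGTAC"])
controls = [sample_sequence(start_p, trans_p, 50, rng) for _ in range(500)]
```

Repeating this with different seeds and running AME on the resulting control sets, as described above, flags motifs whose apparent enrichment is explained by base composition alone.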
Furthermore, to identify motifs that might be enriched in differentially methylated fragments of both groups, we ran AME using the original sequences as input and the respective shuffled sequences as background. Only motifs that were found to be enriched in at least 95% of the iterations were reported as truly enriched in the differentially methylated fragments compared with the randomly shuffled sequences. For each enriched motif, the least significant p value was reported. Author Contributions J.K. and C.B. designed the study. P.D., M.F., and C.B. optimized the RRBS protocol. J.K. acquired and prepared the samples. J.K., D.P., and G.F. performed FACS sorting. P.D. and J.H. made the RRBS sequencing libraries. J.K. developed RefFreeDMA and performed the computational analysis with input from N.C.S. and C.B. J.K. and C.B. wrote the manuscript with input from all co-authors. Acknowledgments We thank the Biomedical Sequencing Facility at CeMM for assistance with next generation sequencing, Fabian Müller for providing the biseqMethCalling.py software, and all members of the Bock lab for their help and advice. We also thank Sylvia Knapp, Denise Barlow, Thomas van Gurp, and Christian Remmele for comments and suggestions, Marc Mößmer (Biofisch GmbH) for providing carp blood, Fleischerei Leopold Hödl for providing cow blood, and the following researchers for providing DNA from additional species: Clarissa Gerhäuser (rat), Vardhman Rakyan (dog), Marcela Hermann (chicken), Kaja H. Skjærven (zebrafish), and Francesc Piferrer (sea bass). This work was performed in the context of the BLUEPRINT project (European Union’s Seventh Framework Programme grant agreement No. 282510) and the ERA-NET projects EpiMark (FWF grant agreement no. I 1575-B19) and CINOCA (FWF grant agreement no. I 1626-B22). It was co-funded by a Marie Curie Career Integration Grant (European Union’s Seventh Framework Programme grant agreement No. PCIG12-GA-2012-333595). J.K. 
was supported by a DOC Fellowship of the Austrian Academy of Sciences. N.C.S. was supported by a Human Frontier Science Program long-term fellowship (LT000211/2014). C.B. was supported by a New Frontiers Group award of the Austrian Academy of Sciences. Accession Numbers The DNA methylation data reported in this paper have been submitted to the NCBI GEO and are available under accession number GEO: GSE74026 . Supplemental Information Supplemental Information includes five figures and two tables and can be found with this article online at http://dx.doi.org/10.1016/j.celrep.2015.11.024 . Supplemental Information Document S1. Figures S1–S5 Table S1. Summary Statistics for the Reference-Free and Reference-Based Analysis of DNA Methylation in the Blood Dataset, Related to Figure 2 For each of the analyzed samples and biological replicates, this shows the number of total reads, mapped reads, and informative reads (i.e., those that give rise to at least one valid DNA methylation measurement), mean DNA methylation levels of methylated and unmethylated spike-in controls, mean DNA methylation levels across CpG sites, non-CpG conversion rates, as well as the number of CpG measurements, number of covered CpGs, and mean informative sequencing coverage per CpG site. Table S2. Summary Statistics for Direct Cross-Mapping of Carp RRBS Reads to the Human, Mouse, and Zebrafish Genome with Various Choices of Alignment Parameters, Related to Figure 5 For each of the carp samples, this lists the number of mapped reads, the percentage of mapped reads, and the number of CpGs covered using four different mapping approaches with different BSMAP parameters: maximum mismatch rate of 0.08 with multi-mapping reads, maximum mismatch rate of 0.08 without multi-mapping reads, maximum mismatch rate of 0.2 with multi-mapping reads, and maximum mismatch rate of 0.2 without multi-mapping reads. Document S2. Article plus Supplemental Information
|
[
"ADAMS",
"AKAGI",
"ASSENOV",
"BAIRD",
"BAYLIN",
"BECK",
"BIBIKOVA",
"BOCK",
"BOCK",
"BOCK",
"BOYLE",
"DAY",
"ELSHIRE",
"FEIL",
"FEINBERG",
"FELDMANN",
"FOLEY",
"GAZIT",
"GILLMAN",
"GREIG",
"GU",
"HOUSEMAN",
"JABLONKA",
"JAFFE",
"JONES",
"KASPI",
"KLUNKER",
"KRUEGER",
"LANGMEAD",
"LAW",
"LIEBERMANN",
"LIEBL",
"MAKAMBI",
"MASSICOTTE",
"MATHELIER",
"MCLEAY",
"MEISSNER",
"MICHELS",
"MILL",
"NITTA",
"ONEILL",
"ORKIN",
"PORTELA",
"RAKYAN",
"REIK",
"ROBERTSON",
"ROSENBAUER",
"SCHREY",
"SCHREY",
"SHEFFIELD",
"SHEFFIELD",
"SKINNER",
"STADLER",
"TENEN",
"WATERLAND",
"WEYRICH",
"XI",
"XI",
"ZILLER"
] |
0e170bae419540cda45bb15cd00b0140_Solitary fibrous tumor A centers experience and an overview of the symptomatology the diagnostic and_10.1016_j.rmcr.2017.04.007.xml
|
Solitary fibrous tumor: A center's experience and an overview of the symptomatology, the diagnostic and therapeutic procedures of this rare tumor
|
[
"Hohenforst-Schmidt, Wolfgang",
"Grapatsas, Konstantinos",
"Dahm, Manfred",
"Zarogoulidis, Paul",
"Leivaditis, Vasileios",
"Kotoulas, Christophoros",
"Tomos, Periclis",
"Koletsis, Efstratios",
"Tsilogianni, Zoi",
"Benhassen, Naim",
"Huang, Haidong",
"Kosmidis, Christoforos",
"Kosan, Bora"
] |
Solitary Fibrous Tumor of the Pleura (SFTP) is a rare tumor of the pleura. Worldwide, about 800 patients diagnosed with this oncological entity have been described in the existing literature. We report our center's 13-year experience. During this time, three patients suffering from this rare disease were treated in our department. All patients were asymptomatic, and their diagnosis was initially triggered by an incidental finding on a routine chest x-ray. The diagnosis was made preoperatively through a needle biopsy under computed tomography (CT) guidance. The tumors were resected surgically through video-assisted thoracoscopic surgery (VATS) or thoracotomy. Because of the lack of specific guidelines due to the rarity of the disease, a long-term, systematic follow-up was recommended and performed. In parallel, an overview of the diagnostic and therapeutic procedures for this rare tumor is provided.
|
1 Introduction The tumors of the pleura are an important nosological entity of the thoracic cavity. The best-known tumor of the pleura is the mesothelioma. However, other tumors of the pleura have also been described. A less known and less common tumor is the solitary fibrous tumor of the pleura (SFTP). SFTP is a rare localized mesenchymal tumor which was initially thought to be a mesothelial pleural lesion [1] . Solitary fibrous tumors can arise from visceral organs or mesothelial tissues [1,2] . Solitary fibrous tumors have also been described in other localizations such as the pelvis, abdomen, retroperitoneum, buccal space, maxillary sinus, liver, pancreas, suprarenal region, and kidneys. It is believed that these tumors originate from extrapleural sites of these anatomical cavities and organs [3] . As far as the pleural solitary fibrous tumors are concerned, about 800 cases of SFTP have been described in the literature. Historically, several terms have been used to describe this tumor, such as benign mesothelioma, localized mesothelioma, localized fibrous mesothelioma, localized fibrous tumor of the pleura, sub-pleural fibroma, pleural fibroma, localized benign fibroma, and sub-mesothelial fibroma [1,4,5] . The first description of the tumor is chronologically debatable. The first description of this entity is attributed to Lieutaud in 1767, while other reports suggest that it was first described by Wagner in 1870. The first official description of the tumor's pathology was, however, made by Klemperer and Rabin. The majority of tumors are benign, but 10–20% of the tumors are malignant [1,4,6] . The tumor often presents no symptomatology and is usually discovered incidentally during a routine chest x-ray. During the last six years (2010–2016), three patients with SFTP were treated in the Department of Thoracic and Cardiovascular Surgery in Kaiserslautern. The preoperative diagnosis was made through needle biopsy under computed tomography guidance. 
The patients underwent surgical excision and subsequent long-term follow-up. In this article we attempt to describe our experience in this field as well as to present a general overview of the existing literature regarding the diagnostic and therapeutic procedures for this rare tumor.
2 Cases presentation
2.1 1st case
A 56-year-old female presented to our outpatient clinic with a mass in the area of the lower lobe of the left lung that had been incidentally revealed by a chest x-ray performed because of influenza symptoms. No other symptoms or physical signs implying malignancy existed. The patient's past medical history revealed COPD, hypothyroidism, type 2 diabetes mellitus, and heavy smoking of thirty pack-years (py). Chest computed tomography (CT) revealed a pleural mass measuring 4 × 7 × 5 cm. A needle biopsy under CT guidance was performed, and histology showed an SFTP. Further staging with bronchoscopy and positron emission tomography–computed tomography (PET-CT) revealed no further pathological findings. A left posterolateral thoracotomy was performed. A tumor arising from the lower lobe with no infiltration of the thoracic wall was found. Complete tumor resection with an atypical (wedge) resection of lung parenchyma was performed. The histopathological examination of the mass revealed a large SFTP of 11 cm diameter with circumscribed subcapsular necrosis, areas of moderate nuclear pleomorphism, and 3 mitoses per 10 high-power fields (HPF). The findings partially fulfilled England's criteria for the characterization of a malignant SFTP (described below) [7] . For this reason, the tumor was characterized as semi-malignant. In addition, the histological examination showed tumor-free margins of the resected tissue. On immunohistochemical analysis, cells were positive for CD34 and negative for CD117. According to Demicco et al., the risk stratification score for the patient was 4.
According to the literature, ten-year metastasis-free survival of 64% and disease-specific survival of 93% are expected [8] . The patient was discharged on the 9th postoperative day; the hospital stay was prolonged by a postoperative pneumonia that was treated conservatively. During the postoperative follow-up in our outpatient clinic, no complications were observed. The patient did not undergo chemotherapy or radiation. After a systematic (6-monthly) two-year follow-up, the patient presented with peripheral, rounded nodules of variable size scattered throughout both lungs. The patient underwent a new full staging examination. All tests carried out, including abdominal ultrasound, abdominal CT, colonoscopy, gastroscopy, and tests for gynecological malignancy, revealed no pathological findings. A diagnostic video-assisted thoracoscopic surgery (VATS) of the right pleural cavity was performed and tissue biopsies were obtained. Histologically, the biopsies showed no signs of metastases; the findings were attributed to interstitial pneumonia. Further long-term follow-up was suggested.
2.2 2nd case
A 50-year-old female presented to our outpatient clinic with a mass detected in the area of the upper lobe of the left lung. This patient's tumor was also revealed incidentally during a chest x-ray examination. She presented no other symptoms. The patient's past medical history revealed hypertension, thrombocytosis, thyroidectomy because of multinodular goiter, and smoking. A CT scan revealed the mass (diameter: 3 cm), and the final diagnosis of SFTP was again established through a CT-guided needle biopsy. Because the patient declined surgical treatment, follow-up was suggested as an alternative. After three years of follow-up, and because of a significant increase of the mass diameter on the latest CT (5.3 cm), a VATS was finally performed. Intraoperatively, the tumor showed no infiltration of the chest wall.
The tumor was excised through a wedge resection of lung parenchyma. Histology showed a benign SFTP with a diameter of 6 cm. On immunohistochemical assay, cells were positive for CD34, positive for Ki-67 in less than 2% of cells, and negative for CD117, D2-40, and TTF1. The patient was discharged on the 4th postoperative day without any complications. Systematic follow-up was recommended in this case as well.
2.3 3rd case
A 77-year-old male patient was incidentally diagnosed with a mass in the left hemithorax and a lung nodule in the right hemithorax. The patient's past medical history revealed only arterial hypertension. CT showed a mass of 9 cm in diameter in the left hemithorax. Staging with bone scintigraphy and CT showed no sign of metastases. A diagnostic CT-guided needle biopsy showed an SFTP. A thoracoscopic wedge resection of the middle lobe showed the nodule to be an old tuberculoma. Because of the size of the tumor, excision through thoracotomy was finally conducted. Intraoperatively, the SFTP presented adhesions to the visceral pleura of the lower lobe. The patient's postoperative course was uncomplicated, and he was discharged on the 5th postoperative day after the second surgery. Histology revealed an SFTP sized 9 × 5.5 × 4 cm. On immunohistochemical analysis, cells were clearly positive for CD34 and negative for CD117. Long-term follow-up was recommended for this patient as well.
3 Discussion
SFTP is a very rare tumor [1,4] , which has gained appropriate recognition in the last two decades as a discrete pathologic entity [9] . It represents 5% of the tumors of the pleura. Only about 800 such cases were described in the literature between 1931 and 2002 [1,4,6,10] . However, Cardillo et al. reported that the true number of SFTPs may be about 960. SFTP is found in approximately 2.8 per 100,000 hospitalized patients.
The number of reported SFTPs seems to be increasing, but it remains significantly smaller than that of the more common mesothelioma [6] . The age at diagnosis varies from 5 to 87 years; however, the tumor is most commonly diagnosed in the sixth and seventh decades of life [1,5,6] . The tumor occurs with the same frequency in men and women [1] , although Sook et al. described a slight female predominance [11] . There is no evidence of heredity, although a case of the tumor in a mother and her daughter has been described in the literature [10] . No association with asbestos exposure, smoking, or exposure to other environmental factors has been reported [1,5,6,9,11] . Of our three treated cases, two patients were female, and no family history was reported. SFTPs often present no specific clinical signs and no specific symptomatology that could lead the clinician to a secure tumor diagnosis. The most common sign is clubbing, which is unspecific, can appear in many other lung and heart diseases, and is therefore not pathognomonic. Clubbing can be accompanied by hypertrophic pulmonary osteoarthropathy (HPO). The etiology of clubbing in SFTP is currently unknown; clubbing may subside after surgical removal of the tumor [1] . The clinical course of the disease is unpredictable [6] . Often there is no symptomatology, and for this reason the tumor is in most cases diagnosed incidentally. Nevertheless, the tumor can present with various symptoms, although its symptomatology consists of ordinary symptoms of the respiratory tract [1,5,6,12] : for example, cough [6,13] , thoracic pain, fever, dyspnea, and weight loss have been described [1,6,11,12,14] . Hemoptysis and pneumonitis may be observed in rare cases [1,6,15] . Thoracic pain can occur if, for example, the tumor arises from or infiltrates the parietal pleura. A large tumor may also compress a bronchus, in which case pneumonitis and atelectasis can occur.
A large proportion of the malignant tumors are symptomatic [1,5,6,15,16] : patients with benign tumors have symptoms in 54–67% of cases, whereas symptoms in malignant tumors are more frequent, occurring in approximately 75% of cases [1] . In our three cases, no symptomatology was reported preoperatively. Signs that may raise suspicion of malignancy include the existence of clinical symptoms, a mean tumor diameter greater than 10 cm, fibrous adherences, pleural effusion, and Ki-67 positivity of 10% or greater [5] . Interesting paraneoplastic syndromes have also been described in patients with SFTP [6] ; these syndromes are often described in large SFTPs [1] . Hypertrophic pulmonary osteoarthropathy (HPO) is the most common, occurring in 22% of SFTPs [1,5,6,14,17,18] ; however, it can also occur in 5% of cases of lung carcinoma. Clinically, HPO manifests as swelling of the legs and pain along the long bones. The etiology of the syndrome remains unknown [1] ; one possible cause is the excessive release of hyaluronic acid by the tumor [6] . HPO can be a good indicator of the tumor's progression: the symptoms may be drastically reduced after successful operative treatment, and this postoperative disappearance or reduction of the syndrome can occur within a few hours [1] to three months [6] after surgery. The syndrome can also reappear in cases of tumor recurrence. Based on these findings, the hypothesis that HPO occurs due to hormonal factors (probably somatotropin) produced by the tumor is plausible [1,6] . An additional interesting paraneoplastic syndrome is hypoglycemia (Doege–Potter syndrome). However, paraneoplastic hypoglycemia is infrequent in SFTPs, occurring in 3–4% of cases [1,5,6,19] . The syndrome is more likely to present in tumors with a diameter larger than 20 cm [20] .
In addition, paraneoplastic hypoglycemia may occur in other tumors, such as leiomyosarcoma, rhabdomyosarcoma, or liposarcoma [1] . Its appearance is possibly due to the secretion of insulin-like growth factor II. This paraneoplastic syndrome also appears preferentially in larger and malignant tumors [3,19] . Resolution of the paraneoplastic symptoms is likewise observed after surgical treatment of the tumor [1,6,19] . Reuvers et al. reported a case of Doege–Potter syndrome in which tumor-associated hypoglycemia was the first sign of the SFTP. The initial diagnostic approach, and subsequently the surgical planning, is made after appropriate radiological examination. The preoperative diagnosis of the tumor is a difficult challenge [6] ; the definitive diagnosis of SFTP is made histologically after surgical resection of the tumor [3,6] . The chest x-ray has, of course, a leading role in the first diagnostic approach to the disease, as in most thoracic diseases. The chest x-ray provides signs that lead to the initial suspicion of the tumor, although the actual size of the tumor may differ significantly from that seen on the x-ray. The tumor borders are typically well defined. Tumors that arise from the thoracic wall and the parietal pleura may form an angle with the lung parenchyma; however, the exact percentage of tumors presenting such a radiological morphology varies [1] . The information provided by ultrasound examination is limited, and its use is not usually reported in analyses of the cases in the literature [6] . Computed tomography (CT) of the thorax is considered the most important examination in the diagnostic pathway of the disease. A CT scan can give clinicians valuable information concerning the tumor size, the tumor morphology, and its relationship with the other organs of the thoracic cavity. In addition, it can be a great aid in the careful preoperative planning of surgical therapy [1,5,6] .
CT scans usually demonstrate a well-defined and occasionally lobulated mass with soft-tissue attenuation on the pleural surface, and displacement of the surrounding structures [6] . Sook et al. reported that in most cases the findings present as tumors arising from the pleura; however, they may also appear as intrapulmonary or mediastinal tumors [11] . The exact site of tumor origin can also often be identified on CT; accordingly, the SFTP can be found to arise from the parietal pleura, a lung fissure, or the visceral pleura [1,6] . However, in cases of intrapulmonary tumors, the differential diagnosis from lung cancer can be very difficult [6] . This should be taken into consideration when planning the operation in order to avoid unpleasant intraoperative surprises. The majority of the tumors arise from the visceral pleura and only seldom from the parietal pleura. Intratumoral necrosis, hemorrhage [1,6,20] , and neoangiogenesis with an increased tumor blood supply network [6,20] can also be detected on CT; these findings may be indicative of malignancy [1,6,20] . A tumor diameter greater than 10 cm may also imply a malignant character. However, Hélage et al. reported that the presence of intratumoral calcifications and the maximum post-contrast enhancement value are not significant for distinguishing a benign from a malignant SFTP [20] . In the case of a large SFTP, or a tumor arising from the mediastinal pleura, it can sometimes be difficult to distinguish the mass preoperatively from a mediastinal tumor. In some rare cases SFTP can be seen as multiple nodules on CT. Pleural effusion accompanying an SFTP can also be detected on CT in up to 12% of cases. In all three of our patients, a CT scan followed the detection of the mass on chest x-ray.
The diagnostic value of CT is also seen in guided needle biopsy of the tumor, which provides valuable preoperative information concerning the tumor's nature and may even establish the diagnosis. However, this method is controversial. Most of the reviewed literature holds that a preoperative CT-guided fine needle aspiration (FNA) should not be performed as a routine diagnostic examination [6] . Along these lines, Boddaert et al. suggested that CT-guided FNA does not influence the therapeutic approach to SFTP and should be considered only in patients who require extended procedures, have a high surgical risk, or have unresectable tumors [5] . Cardillo et al. recommended a Tru-cut biopsy if a preoperative diagnosis is necessary [6] . Chunlai Lu et al., on the other hand, suggested ultrasonography-guided core needle biopsy combined with immunohistochemical analysis, as it might be a safe and rapid method to provide a diagnosis before the planned tumor resection [21] . Preoperative diagnosis using needle biopsy has also been reported in 5 cases by Weynand et al. [22] . In our cases, needle biopsy under CT guidance was performed in all three patients and offered us the ability to assess the tumor's nature preoperatively. However, in the case of the semi-malignant SFTP, it was not able to detect the tumor's malignancy, which was later diagnosed histologically. Magnetic resonance imaging (MRI) has limited use in the diagnostic work-up of these tumors [1] . Possible uses of thoracic MRI are to detect whether the tumor infiltrates the thoracic wall or the diaphragm [1,5,6] and to demonstrate the fibrous character of the lesion [5] . A disadvantage of this imaging technique is its inability to distinguish between malignant and benign SFTP [6] .
Positron emission tomography–computed tomography (PET-CT) currently has no established role in the diagnosis of this tumor [1,6] , because the tumor exhibits little or no FDG uptake [5] ; as a result, the examination cannot help distinguish between the tumor's malignant and benign character [6] . In addition, the examination is not performed on a routine basis, especially in small, benign, resectable SFTPs [23] . However, Kohler et al. suggest that large SFTPs with increased preoperative FDG uptake have a high likelihood of malignancy [24] . New studies may provide more information on the utility of this examination in the future [23] . Bronchoscopy and bronchoalveolar lavage also have no significant diagnostic utility [1,6] ; one possible use could be the exclusion of other lung diseases [6] . Dammad et al. reported the diagnostic approach to a giant SFTP with medical thoracoscopy and endobronchial ultrasound (EBUS) [25] . In our series, PET-CT and bronchoscopy were performed only in the first patient, with the aim of completing staging after the finding of a semi-malignant SFTP. As far as the genetic background of the disease is concerned, the NAB2-STAT6 gene has been implicated in the pathogenesis of SFTP [26] . Immunohistochemistry plays a key role in distinguishing SFTP from mesotheliomas and sarcomas: SFTPs are positive for vimentin but lack cytokeratin expression. In addition, the positivity of most SFTPs for CD34 helps clinicians to distinguish them from mesothelioma. Both benign and malignant varieties of SFTP are CD34-, CD99-, and bcl-2-positive [1] . Concerning the characterization of such a tumor as malignant, England et al. suggested that an SFTP can be characterized as malignant when at least one of the following criteria is met: 1. high mitotic activity; 2. high cellularity; 3. necrosis; 4. hemorrhage; 5. pleomorphism.
Otherwise, if none of these criteria is met, the SFTP can be considered a benign tumor [7] . In addition, De Perrot proposed a five-stage classification for SFTP: stage 0, pedunculated tumor without signs of malignancy; stage I, sessile or "inverted" tumor without signs of malignancy; stage II, pedunculated tumor with histologic signs of malignancy; stage III, sessile or "inverted" tumor with histologic signs of malignancy; and stage IV, multiple synchronous metastatic tumors [2] . The therapy for both benign and malignant types of SFTP is complete en bloc surgical resection with free resection margins [1,5,6,11] . The surgical approach also has a diagnostic indication, given that this thoracic surgical approach has low morbidity and mortality [11] . Preoperatively, the surgical treatment can be suitably planned according to the radiological findings [1,6] ; however, the exact surgical approach will be based on the location and the size of the tumor, not on the suspicion that the tumor might be an SFTP [11] . The surgical resection of the tumor can be performed successfully by an experienced surgeon without complications [1] . These tumors are not primary lung tumors but pleural tumors, and as a result a careful tumor resection must be performed. The lung parenchyma resection should be kept as limited as possible, but wide enough to ensure free resection margins so that no tumor recurrence occurs [1,6] . For this reason, a tumor excision with margins of 1–2 cm of healthy lung parenchyma is recommended. If there are doubts concerning the R0 resection and the resection margins, an intraoperative frozen-section analysis of the margins is suggested [6,27] . The approach to oncological SFTP resection is the same for both the malignant and the benign subtypes of the disease [1] ; if the tumor is benign, of course, a limited lung parenchyma resection should be performed.
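England's criteria described above amount to a simple any-of decision rule: one positive histologic feature suffices for a malignant classification. A minimal sketch in Python, with illustrative field names (not a standard pathology schema) and the thresholds for "high" mitotic activity and cellularity left to the pathologist's assessment, as in the original criteria:

```python
# England's histologic criteria for malignant SFTP as an any-of rule.
# Criterion names are illustrative labels chosen for this sketch.
ENGLAND_CRITERIA = (
    "high_mitotic_activity",
    "high_cellularity",
    "necrosis",
    "hemorrhage",
    "pleomorphism",
)

def england_malignant(findings: dict) -> bool:
    """Classify an SFTP as malignant if at least one criterion is met."""
    return any(findings.get(criterion, False) for criterion in ENGLAND_CRITERIA)
```

Under this rule, the first case in our series, with necrosis and mitotic activity but tumor-free margins, falls into the borderline ("semi-malignant") territory that the criteria alone cannot fully resolve.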
Smaller tumors can be removed through video-assisted thoracoscopic surgery (VATS) [1,6,9,11,28] ; however, a large SFTP can also be resected thoracoscopically. Mazzella et al. reported the oncological resection of a large SFTP via single-port VATS [29] . Caution is needed so that no spread of tumor cells occurs through the surgical trocar during VATS. For larger or giant tumors, resection through thoracotomy is recommended. A lobectomy or a pneumonectomy may ultimately be carried out for larger or intraparenchymal tumors. If the tumor arises from the parietal pleura, a thoracic wall resection can be considered [1] . In the case of thoracotomy, attention should also be paid to avoiding intraoperative spread of the tumor [6] . In cases of malignancies that infiltrate adjacent structures, en bloc surgical resection involving the adjacent structures can be performed [1] . Lu et al., for instance, reported two cases of partial lung resection with thymus resection [9] . In some cases in which the mass gave the impression of a thymic tumor, a sternotomy was used as the surgical approach [9,11,30] . As expected, the hospitalization time is longer for thoracotomy than for VATS [6] . The most frequent perioperative complication is bleeding [1,6] ; in larger tumors the risk of bleeding is greater [6] . The fact that the tumor commonly arises from the visceral pleura should also be taken into consideration, as the tumor develops adhesions to the parietal pleura. However, these adhesions are not well perfused and can be carefully dissected without causing significant bleeding. In addition, in order to avoid the risk of intraoperative bleeding, the surgeon should always pay attention to the vascular pedicle, which can arise from the parietal pleura [27] . In cases of large tumors, preoperative embolization of the tumor can be performed to reduce the intraoperative bleeding risk [6] . The most common postoperative complication is tumor recurrence.
The tumor's perioperative mortality is rather low, ranging from 0 to 1.5% [1] (See Figs. 1–4 ). Due to the rarity of the tumor, adjuvant chemotherapy is not widely used [1,6,11] . However, chemotherapy could be useful in selected cases: for example, in incompletely resected tumors, malignant sessile SFTPs, or cases of chest-wall invasion with concurrent pleural effusion [6] . Unfortunately, there is still little experience in this field and only a few literature reports, because of the rarity of the tumor [1] . As far as SFTP recurrence after surgical treatment is concerned, recurrence is a clear indication for a new operation. In addition, there are no data to support the use of neoadjuvant chemotherapy; on the contrary, due to the difficulty and uncertainty of the preoperative diagnosis, neoadjuvant chemotherapy is not recommended [1,6] . Hyperthermic cisplatin chemotherapy and brachytherapy, which are usually used in the treatment of malignant pleural mesothelioma, could also be applied in cases of SFTP, but their efficacy is still uncertain [3] . Radiotherapy has also been reported after incomplete tumor resection; there are, however, no data to support its efficacy either [1,6] (see Table 1 ). After the surgical treatment of an SFTP, an extensive follow-up is always recommended. This long-term follow-up is suggested due to the lack of specific guidelines regarding the treatment of SFTP and the high risk of SFTP recurrence after surgical resection. A postoperative follow-up with computed tomography is recommended every 6 months for 2 years and then annually. The follow-up, however, may need to be longer, as an SFTP recurrence has been reported fifteen years after surgical treatment [6] . Malignant and larger SFTPs are more likely to develop metastases; for this reason, in cases at high risk for postoperative metastases, long-term follow-up is necessary [6,11,31] .
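The follow-up rhythm above (CT every 6 months for 2 years, then annually) can be sketched as a simple visit-date generator. This is a toy illustration only: the 5-year horizon and the approximate month length are our assumptions, and actual surveillance may need to extend well beyond it, given the late recurrences noted in the text.

```python
from datetime import date, timedelta

def ct_follow_up(surgery: date, horizon_years: int = 5) -> list[date]:
    """Approximate CT visit dates: 6-monthly for the first 2 years, then yearly.
    Uses a mean month length of 30.44 days; horizon is an assumed cutoff."""
    visits, months = [], 6
    while months <= horizon_years * 12:
        visits.append(surgery + timedelta(days=round(months * 30.44)))
        months += 6 if months < 24 else 12
    return visits
```

For a 5-year horizon this yields visits at months 6, 12, 18, 24, 36, 48, and 60 after surgery.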
Metastases from SFTP have been observed in bones, brain, lungs, and intra-abdominal lymph nodes [11] . Ricciuti et al. reported a thyroid gland metastasis from a malignant giant SFTP [31] . In addition, Inoue et al. reported a case of SFT in which the disease recurred locally with malignant transformation 2 years after wedge resection of the primary tumor [32] . This could be an additional argument for long-term follow-up [24,33] . In our series, the risk assessment model proposed by Demicco et al., with scores ranging from 0 to 6, was followed [8] . In addition, in 2012 Tapias et al. proposed a new scoring system for the recurrence of resected SFTP based on the pleural origin, the morphology, the size of the tumor, the presence of hypercellularity, necrosis, and mitotic activity ≥4/10 HPF [34] . A postoperative follow-up was recommended to all our patients, especially the patient who initially declined surgery.
4 Conclusion
SFTP is a rare tumor of the pleura. These tumors are often asymptomatic. As in most lung diseases, CT provides useful information. The preoperative diagnosis of the tumor is a difficult challenge; in our cases the diagnosis was made preoperatively through CT-guided needle biopsy. The tumor's treatment is surgical, and long-term follow-up is highly recommended in all cases.
|
[
"ROBINSON",
"DEPERROT",
"MA",
"PAPADOPOULOS",
"BODDAERT",
"CARDILLO",
"ENGLAND",
"DEMICCO",
"LU",
"JHA",
"SUNG",
"LIU",
"KIM",
"ALIMI",
"TAN",
"CRNJAC",
"LEE",
"JANG",
"MENG",
"HELAGEA",
"CHUNLAI",
"WEYNAND",
"YEOM",
"KOHLER",
"DAMMAD",
"HUANG",
"KAMATA",
"LEE",
"MAZZELLA",
"ALIMI",
"RICCIUTI",
"INOUE",
"RENA",
"TAPIAS"
] |
e7f4a6ce0f16453b9fa39d3d5a6c4387_ZNF143 deletion alters enhancerpromoter looping and CTCFcohesin geometry_10.1016_j.celrep.2023.113663.xml
|
ZNF143 deletion alters enhancer/promoter looping and CTCF/cohesin geometry
|
[
"Zhang, Mo",
"Huang, Haiyan",
"Li, Jingwei",
"Wu, Qiang"
] |
The transcription factor ZNF143 contains a central domain of seven zinc fingers in a tandem array and is involved in 3D genome construction. However, the mechanism by which ZNF143 functions in chromatin looping remains unclear. Here, we show that ZNF143 directionally recognizes a diverse range of genomic sites directly within enhancers and promoters and is required for chromatin looping between these sites. In addition, ZNF143 is located between CTCF and cohesin at numerous CTCF sites, and ZNF143 removal narrows the space between CTCF and cohesin. Moreover, genetic deletion of ZNF143, in conjunction with acute CTCF degradation, reveals that ZNF143 and CTCF collaborate to regulate higher-order topological chromatin organization. Finally, CTCF depletion enlarges direct ZNF143 chromatin looping. Thus, ZNF143 is recruited by CTCF to the CTCF sites to regulate CTCF/cohesin configuration and TAD (topologically associating domain) formation, whereas directional recognition of genomic DNA motifs directly by ZNF143 itself regulates promoter activity via chromatin looping.
|
Introduction
CTCF is a key mammalian architectural protein for interphase 3D genome folding.1,2 Specifically, CTCF dynamically recognizes a wide range of genomic sites within the linear 1D sequences known as CBS (CTCF binding site) elements. Their genomic distributions and relative orientations determine the looping specificity of long-distance chromatin interactions.1,3,4,5 In particular, there is a strong tendency for close spatial contacts between forward-reverse convergent CBS elements.6,7,8,9 Mechanistically, asymmetrical stalling of loop-extruding cohesin complexes at convergent CTCF sites results in their close spatial contacts,10,11 since oriented CTCF interacts with the cohesin complex via its N terminus but not its C terminus.12,13,14 These convergent tandem CBS elements were recently shown to function as chromatin topological insulators to balance promoter-enhancer selection and to block improper activation of non-cognate promoters by remote enhancers.15,16,17,18,19 As a paradigm for investigating mechanisms of genome folding, the human clustered protocadherin ( cPCDH ) genes are organized into three sequentially linked clusters of PCDH α , β , and γ , spanning a region of ∼1 Mbp (Figure 1A).20,21,22 This complex locus forms a superTAD (super-topologically associating domain) comprising PCDH α and βγ subTADs, with 15 and 38 (16 β and 22 γ ) variable exons, respectively, followed by single sets of three downstream constant exons within each subTAD (Figure 1A). Specifically, the PCDHα subTAD contains a repertoire of 13 alternate ( α1 – α13 ) and 2 c-type ( αc1 and αc2 ) variable exons, each preceded by a separate promoter.
Each of the 13 alternate variable promoters is flanked by two forward-oriented CBS elements, the conserved sequence element (CSE) and the exonic CBS (eCBS) (Figure S1A), and the αc1 , but not αc2 , ubiquitous variable promoter is preceded by a single CBS element, resulting in a tandem array of 27 forward-oriented CBS elements for the PCDHα cluster (Figure S1A).7,23 By contrast, the HS5-1 enhancer element, which is located at the boundary between the PCDH α and βγ subTADs, is flanked by two reverse-oriented CBS elements ( HS5-1a and HS5-1b ).7,16,24 Continuous active "loop extrusion" by cohesin complexes anchored by these convergent forward-reverse CBS elements results in long-distance chromatin interactions between the HS5-1 enhancer and its target variable promoters. The activation of variable promoters by the HS5-1 enhancer determines the stochastic and allelic cPCDH gene choice in single cells in the brain.10,16,25,26,27 The transcription factor zinc finger protein 143 (ZNF143) attracts increasing attention as a 3D genome modulator due to its ubiquitous expression in diverse tissues and its genome-wide colocalization with CTCF and cohesin.28,29,30 ZNF143 is known for its ability to activate transcription of protein-coding and small nuclear RNA (snRNA) genes.31,32,33,34 Specifically, ZNF143 contains two distinct N-terminal activation domains for snRNA or mRNA promoters and a central DNA-binding domain (DBD) with 7 tandem-arrayed C2H2-type zinc fingers (ZFs).35 It was initially identified in Xenopus as the selenocysteine tRNA gene transcription-activating factor (Staf),36 and hence its genomic binding site is known as the Staf-binding site (SBS).37 In addition, two overlapping SBS motifs, SBS1 and SBS2, have been identified in mammalian genomes by chromatin immunoprecipitation sequencing (ChIP-seq).30,38
Moreover, a ZNF143-SBS1 interaction model, predicted from chemical interference analysis, shows a broad spectrum of close contacts between ZF1-6 and SBS1.35 Finally, a role for ZNF143 in chromatin looping has been suggested by Hi-C or Hi-TrAC in conjunction with ChIP-seq.39,40 However, the mechanism by which ZNF143 functions in higher-order chromatin organization remains unclear. Here we use ChIP-nexus (chromatin immunoprecipitation experiments with nucleotide resolution through exonuclease, unique barcode, and single ligation)41 to precisely map genome-wide ZNF143 binding footprints and HiChIP to detect ZNF143-directed chromatin loops in genome architecture at high resolution. In conjunction with genetic and biochemical experiments, we report that direct directional recognition of genomic sites by ZNF143 is required for long-distance chromatin contacts of promoters and/or enhancers and that ZNF143 is crucial for higher-order topological chromatin organization such as genome compartmentalization.
Results
Directional ZNF143 recognition of SBS elements within the PCDH HS5-1 enhancer
We performed ChIP-nexus experiments using a specific antibody against ZNF143 and found that, similar to CTCF, ZNF143 is enriched at most PCDH variable promoters as well as super-enhancers (Figure 1A). Specifically, ZNF143 and CTCF are mostly colocalized at the CSE and eCBS elements in the variable promoter region of the PCDHα cluster (Figures 1A, S1A, and S1B). However, in the HS5-1 enhancer region, we found that, in addition to the colocalization of ZNF143 with CTCF at the two CBS elements ( HS5-1a and HS5-1b ), a single ZNF143 peak is localized immediately upstream of HS5-1b (Figure 1B). This single ZNF143 ChIP-nexus peak contains two tandem SBS elements, termed HS5-1b1 and HS5-1b2 , both in the reverse orientation and separated by a short distance of only 22 bp, suggesting that ZNF143 may bind to both SBS elements (Figures 1C, S1C, and S1D).
We performed electrophoretic mobility shift assay (EMSA) experiments using the HS5-1b1 or HS5-1b2 probe (Figure S1C) and confirmed the direct binding of ZNF143 to both SBS elements (Figure 1D). In addition, we generated two mutations, Mut1 and Mut2, of the HS5-1b1 and HS5-1b2 SBS elements, respectively (Figure S1D), and found that they abolish ZNF143 binding (Figure 1E). Finally, we observed two shifted bands using a single EMSA probe containing both HS5-1b1 and HS5-1b2 , suggesting that each is recognized by a separate ZNF143 protein (Figure 1F). However, the Shift2 band is much weaker than the Shift1 band, suggesting that simultaneous binding of ZNF143 at both HS5-1b1 and HS5-1b2 is infrequent (Figure 1F). We conclude that ZNF143 binds directly to the PCDHα HS5-1 enhancer, which contains two tandem SBS elements, each of which may be recognized dynamically by a single ZNF143 protein. ZNF143 contains a central DBD with 7 tandem-arrayed C2H2-type zinc fingers (ZF1-7), which are flanked by N- and C-terminal intrinsically disordered regions (IDRs) (Figures 1G and S1E). To investigate the mechanism of ZNF143 binding to its cognate sites, we generated a series of truncated ZNF143 proteins through sequential deletions of ZFs from either the C or the N terminus (Figures 1H–1J) and performed comprehensive EMSA experiments with both HS5-1b1 and HS5-1b2 SBS probes (Figures 1K–1N). We found that C-terminal deletions up to ZF6 (ZF1-5) abolish the binding of ZNF143 to both SBS probes (Figures 1K and 1L). In addition, N-terminal deletions up to ZF3 (ZF4-7) or ZF2 (ZF3-7) abolish the binding to HS5-1b1 or HS5-1b2 , respectively (Figures 1M and 1N). This suggests that ZF3-6 or ZF2-6 is essential for ZNF143 binding to HS5-1b1 or HS5-1b2 , respectively. Interestingly, both SBS elements within the HS5-1b enhancer region are in the reverse orientation (the direction of Module1–4 is defined as the forward orientation) (Figure 1O).
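The truncation-mapping logic in this paragraph has a compact formal reading: the minimal essential finger span is the intersection of all deletion constructs that still bind, with non-binding constructs confirming its boundaries. A sketch, where the construct outcomes encode the reported HS5-1b1 series as illustrative data (this is our restatement of the reasoning, not the authors' analysis code):

```python
# Infer the minimal essential zinc-finger span from truncation EMSA
# results: take the intersection of all constructs that still bind.
def essential_span(results: dict) -> tuple:
    """results maps (first_zf, last_zf) of a construct to a bool (bound?)."""
    binders = [span for span, bound in results.items() if bound]
    start = max(first for first, _ in binders)  # largest N-terminal start that still binds
    end = min(last for _, last in binders)      # smallest C-terminal end that still binds
    return (start, end)

# HS5-1b1 series as described in the text:
hs5_1b1 = {
    (1, 7): True,   # full-length binds
    (1, 6): True,   # ZF7 dispensable for this probe
    (1, 5): False,  # deleting ZF6 abolishes binding
    (3, 7): True,   # ZF1-2 dispensable
    (4, 7): False,  # deleting ZF3 abolishes binding
}
```

Here `essential_span(hs5_1b1)` gives (3, 6), matching the ZF3-6 requirement inferred above; encoding the HS5-1b2 series the same way (where ZF2-7 binds but ZF3-7 does not) would give (2, 6).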
To investigate the directionality of ZNF143 binding, we generated two additional mutant probes, Mut3 and Mut4, within Module4 of HS5-1b1 and Module1 of HS5-1b2 , respectively ( Figure 1 O). Remarkably, mutating a tri-nucleotide within Module4 from “GCA” to “TAC” did not alter the binding of ZF3-7 but did reduce the binding of ZF1-7 or ZF2-7, suggesting that ZF1-2 recognizes Module4 ( Figure 1 P). By contrast, mutating a di-nucleotide within Module1 from “CT” to “AG” did not alter the binding of ZF1-6 but did reduce the binding of ZF1-7, suggesting that ZF7 recognizes Module1 ( Figure 1 Q). These data suggest a directionality of ZNF143 binding to the tandem SBS elements, whereby ZF7 recognizes Module1 and ZF1-2 recognizes Module4.

Directional ZNF143 recognition of genome-wide SBS elements

To further investigate the genome-wide directionality of ZNF143 binding without the interference of CTCF, we performed CTCF ChIP-nexus experiments ( Figure 1 A) and analyzed genome-wide CTCF and ZNF143 colocalization ( Figure S1 F). We found that ∼3/4 of ZNF143 peaks (27,150 out of 37,139) overlap with CBS elements ( Figure 1 R). Further sequence analyses of the ZNF143 peaks that do not overlap with CBS elements identified three types of SBS motifs ( Figure S1 G). We investigated their recognition directionality and found that ZF7 deletion nearly abolishes ZNF143 binding ( Figures 1 S and 1T), suggesting that ZF7 plays a major role in the ZNF143 recognition of Module1. By contrast, N-terminal deletions of the first two ZFs have no effect on ZNF143 binding ( Figure 1 U). Finally, single-nucleotide mutations within Module1 weaken the binding only of ZNF143 proteins that retain ZF7 ( Figures 1 S and 1V). This demonstrates that ZF7 recognizes Module1. Taken together, these data suggest that ZNF143 directionally recognizes SBS elements in a flexible manner with an anti-parallel orientation ( Figure 1 W).
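The peak-overlap partition behind the ∼3/4 figure can be illustrated with a minimal interval-overlap sketch. This is a toy example with made-up coordinates, not the actual pipeline (which would run bedtools intersect on real ZNF143 and CTCF peak calls):

```python
# Sketch: classify TF peaks by overlap with a second peak set, as in the
# ZNF143-vs-CBS overlap analysis. All intervals below are hypothetical.

def overlaps(a, b):
    """True if two (chrom, start, end) intervals share at least 1 bp."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def split_by_overlap(peaks, reference):
    """Partition peaks into those overlapping any reference interval and the rest."""
    hit, miss = [], []
    for p in peaks:
        (hit if any(overlaps(p, r) for r in reference) else miss).append(p)
    return hit, miss

znf143_peaks = [("chr5", 100, 200), ("chr5", 500, 600), ("chr1", 50, 80)]
cbs_elements = [("chr5", 150, 250), ("chr1", 10, 60)]

with_cbs, without_cbs = split_by_overlap(znf143_peaks, cbs_elements)
print(len(with_cbs), len(without_cbs))  # 2 peaks overlap a CBS, 1 does not
```

The non-overlapping partition is the set that would then be scanned for SBS motifs.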
CTCF recruits ZNF143 to PCDH CBS elements in vivo

Recent co-immunoprecipitation (CoIP) experiments showed direct interactions between mouse ZNF143 and CTCF. 40 For the large cPCDH gene complex, in addition to ZNF143 enrichments at the two SBS elements of HS5-1b1 and HS5-1b2 within the PCDH HS5-1 enhancer, ZNF143 is also enriched at the PCDH variable regions as well as super-enhancers ( Figure 1 A). Interestingly, ZNF143 and CTCF are colocalized at the CSE and eCBS elements within the PCDH variable regions as well as at CBS elements within super-enhancers ( Figure S2 A). However, we could not find SBS motifs around these CBS elements. To see whether ZNF143 directly binds to these CBS elements, we performed comprehensive EMSA experiments using a repertoire of CSE and eCBS probes and found that ZNF143 does not bind these probes directly in vitro ( Figure S2 A). This suggests that CTCF may recruit ZNF143 to these CBS elements in vivo .

To this end, we employed an auxin-inducible degron (AID) system to degrade CTCF in vivo. 42 We first generated single-cell clones stably expressing the auxin receptor by targeting the rice OsTIR1 gene to the human AAVS1 (adeno-associated virus integration site 1) locus. We then tagged CTCF alleles with AID in these cells to produce CTCF-AID clones. We confirmed by western blot that CTCF was degraded after 24 h of auxin treatment in CTCF-AID cells ( Figure S2 B). Upon CTCF degradation, there was a significant decrease of ZNF143 enrichments at the CBS elements within the variable and super-enhancer regions of the PCDH clusters ( Figures 2 A –2C and S2 C). However, as internal controls, ZNF143 enrichments at the two SBS elements of HS5-1b1 and HS5-1b2 appear unchanged despite CTCF degradation ( Figure 2 C). This demonstrates that ZNF143 enrichments at the CBS elements within the PCDH clusters are CTCF dependent.
CTCF recruits ZNF143 to CTCF sites genome-wide

We then analyzed global ZNF143 and CTCF enrichments at their colocalized CBS elements and found that ZNF143 enrichment is strongly correlated with that of CTCF ( Figure 2 D). We also found that, upon CTCF degradation, there is a significant decrease of ZNF143 enrichments at the colocalized CBS elements genome-wide ( Figures 2 E, 2F, and S2 D). In contrast, there is no decrease of ZNF143 enrichments at the SBS elements ( Figures 2 E, 2F, and S2 D). In conjunction with the recent finding that CTCF interacts with ZNF143 directly, 40 these data suggest that CTCF recruits ZNF143 to its colocalized CBS elements.

ZNF143 is located between CTCF and cohesin at their colocalized loop anchors

CTCF/cohesin-mediated long-distance chromatin interactions between convergent CBS elements determine the PCDH promoter choice. Since ZNF143 and CTCF proteins are colocalized at these forward and reverse CBS elements, we performed ChIP-nexus experiments with an antibody specifically against RAD21, a subunit of cohesin, and analyzed the colocalization of RAD21 with CTCF and ZNF143 at the 7 PCDH CBS elements at single-base resolution. Interestingly, we found that ZNF143 is localized between CTCF and cohesin at both the forward and reverse PCDH CBS elements ( Figure 2 G). We then performed CTCF and RAD21 HiChIP and analyzed the global triple colocalizations of ZNF143, CTCF, and RAD21 at chromatin loop anchors. We found that ZNF143 is localized between CTCF and cohesin at loop anchors globally ( Figure 2 H). For example, ZNF143 appears to be located between CTCF and cohesin at both forward and reverse CBS anchors in chromosome 1 ( Figure S2 E). There are three types of CBS elements genome-wide: type 1 CBS lacks Module1, whereas types 2 and 3 contain Module1 but with variable distance between Module1 and Module2 ( Figure S2 F).
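The enrichment correlation described above reduces to a Pearson coefficient over per-site signal values. A minimal self-contained sketch with hypothetical enrichment values (the real analysis would use signal extracted from ChIP-nexus coverage at colocalized peaks):

```python
# Sketch: Pearson correlation between two ChIP enrichment vectors at shared
# sites, mirroring the ZNF143-vs-CTCF comparison. Values are made up.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

znf143 = [2.1, 4.0, 6.2, 7.9, 10.1]  # hypothetical per-site enrichment
ctcf = [1.0, 2.2, 2.9, 4.1, 5.0]

r = pearson(znf143, ctcf)
```

A strongly positive r at colocalized CBS elements is the signature of co-dependent occupancy.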
Further analyses of the different types of CBS elements revealed that ZNF143 is localized between CTCF and cohesin for all three types at CTCF/cohesin loop anchors genome-wide ( Figure S2 F). Previous studies revealed that RAD21 is located ∼40 bp downstream of CTCF at loop anchors. 43 To obtain the precise localization of ZNF143 in relation to CTCF and RAD21, we analyzed DNA footprints of ZNF143, CTCF, and RAD21 at single-base resolution on their co-occupied CBS elements. We found that the 5′ borders of the three proteins are the same; however, the 3′ border of ZNF143 is slightly more interior than that of CTCF, and the 3′ border of RAD21 is clearly the innermost ( Figure 2 I). 44 In addition, loop anchors with different types of CBS elements all display similar patterns of ZNF143, CTCF, and RAD21 footprints ( Figure S2 G). These data demonstrate that ZNF143 is localized between CTCF and cohesin at loop anchors.

ZNF143 deletion compromises CTCF/cohesin-mediated chromatin looping

We next generated ZNF143-knockout single-cell clones (ΔZNF143) using CRISPR DNA fragment editing with Cas9 programmed by dual sgRNAs. 10 , 45 Western blot and RNA-seq experiments confirmed the absence of ZNF143 protein and mRNA in ΔZNF143 cells ( Figures S3 A and S3B). ZNF143 deletion significantly slows cell proliferation ( Figure S3 C) but does not affect the CTCF enrichments in the PCDH clusters ( Figures 3 A–3C). However, the RAD21 enrichments appear to decrease slightly in the PCDH clusters upon ZNF143 deletion ( Figures 3 A–3C). We then quantified CTCF and cohesin enrichments in the PCDH clusters using DeepTools 46 and found a significant decrease of enrichment levels of RAD21, but not CTCF, at the CBS elements within the PCDH clusters ( Figures 3 D and 3E).
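The genotype comparison above (RAD21 down, CTCF unchanged) amounts to a per-peak fold-change calculation between conditions. A toy sketch with hypothetical enrichment values; the paper's actual quantification used DeepTools on real coverage tracks:

```python
# Sketch: log2 fold change of mean ChIP enrichment between wild-type and
# ZNF143-knockout samples. All numbers below are illustrative only.
from math import log2

def mean(xs):
    return sum(xs) / len(xs)

def log2_fold_change(wt, ko):
    """log2(KO / WT) of mean per-peak enrichment; negative = loss in KO."""
    return log2(mean(ko) / mean(wt))

rad21_wt = [10.0, 12.0, 9.0, 11.0]  # hypothetical per-CBS enrichment
rad21_ko = [7.0, 8.0, 6.5, 7.5]
ctcf_wt = [20.0, 18.0, 22.0, 19.0]
ctcf_ko = [19.5, 18.5, 21.0, 20.0]

rad21_lfc = log2_fold_change(rad21_wt, rad21_ko)  # clearly negative
ctcf_lfc = log2_fold_change(ctcf_wt, ctcf_ko)     # near zero
```

The pattern (negative RAD21 fold change, flat CTCF) is what distinguishes a cohesin-anchoring role from a CTCF-recruitment role.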
We then quantified global CTCF and RAD21 enrichments at their colocalized CBS elements and found a significant decrease of RAD21, but not CTCF, upon ZNF143 deletion ( Figures 3 F and 3G). We next performed HiChIP experiments with a specific antibody against CTCF or RAD21 and found a significant decrease of long-distance PCDH chromatin interactions upon ZNF143 deletion ( Figures 3 H and 3I). We then analyzed global chromatin interactions by aggregated peak analyses and found that the strengths of both CTCF and cohesin loops are weakened upon ZNF143 deletion ( Figures 3 J, 3K, S3 D, and S3E). This weakening may be attributed to the architectural role of ZNF143 rather than a secondary transcriptional effect, since CTCF/ZNF143-colocalized CBSs are primarily at insulators ( Figure S3 F). Finally, ZNF143 deletion results in a significant decrease of chromatin loop numbers ( Figure S3 G).

ZNF143 functions as a “buffer sponge” between CTCF and cohesin

We then asked whether ZNF143 deletion affects the relative locations of CTCF and cohesin at their triple colocalized sites ( Figure S3 H). Careful analyses of the CTCF and RAD21 ChIP-nexus data in ΔZNF143 clones revealed that the RAD21 ChIP-nexus peaks appear to be shifted toward the CTCF peaks upon ZNF143 removal ( Figures 3 L and S3 I). Further whole-genome analyses revealed remarkably closer proximity between CTCF and RAD21 at triple colocalized CBS elements upon ZNF143 deletion ( Figure 3 M), and this is true for all three types of CBS elements ( Figure S3 J). The distance between CTCF and RAD21 peaks at both the left and right anchors of CTCF/cohesin-mediated loops also decreased upon ZNF143 deletion ( Figure 3 N). Finally, CoIP experiments showed that ZNF143 interacts with both CTCF and RAD21 ( Figures S3 K and S3L).
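The peak-shift analysis above boils down to summit-to-summit distances at matched anchors. A toy sketch with hypothetical summit coordinates (the real analysis used genome-wide ChIP-nexus summits):

```python
# Sketch: median CTCF-to-RAD21 summit distance at shared anchors, before and
# after ZNF143 deletion. Summit coordinates below are invented for illustration.

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def summit_distances(ctcf_summits, rad21_summits):
    """Per-anchor |CTCF summit - RAD21 summit| in bp, for matched pairs."""
    return [abs(c - r) for c, r in zip(ctcf_summits, rad21_summits)]

# hypothetical matched summits (bp) at triple-colocalized CBS elements
wt_dist = summit_distances([1000, 2000, 3000], [1042, 2038, 3045])
ko_dist = summit_distances([1000, 2000, 3000], [1015, 2010, 3018])

print(median(wt_dist), median(ko_dist))  # distance shrinks without ZNF143
```

A shrinking median distance in the knockout is the quantitative readout of the "buffer sponge" collapse.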
Together, these data suggest that ZNF143 may act as a buffer sponge and function as an elastic hinge to maintain the proper configuration of the CTCF and cohesin complex during loop formation.

ZNF143 is required for SBS loop formation

To investigate the role of ZNF143 in loop formation, we first performed ZNF143 HiChIP experiments and predicted significant chromatin interactions with hichipper 47 ( Figure 4 A). In conjunction with ZNF143 and CTCF ChIP-nexus data, we identified 2,193 SBS-SBS loops (SBS loops: both anchors at SBS elements; Table S1 ), 4,600 SBS-CBS loops, and 2,199 CBS-CBS loops (CBS loops: both anchors at CBS elements) by HiChIP experiments with a specific antibody against ZNF143 ( Figures 4 A and S4 A–S4C). In addition, SBS elements are preferentially located at promoter regions ( Figure S4 D), with over 80% of SBS promoters being active and associated with CpG islands 30 ( Figures S4 E and S4F). Moreover, ∼90% of SBS loops have at least one anchor at gene promoters ( Figures 4 B and S4 G). Furthermore, 88.7% of SBS loop-anchored promoters are active and 11.3% are inactive ( Figure 4 C). Consistently, the SBS loop-anchored active promoters show higher levels of the active marks H3K4me3 and H3K27ac than the inactive ones ( Figure S4 H). Interestingly, the SBS loop-anchored inactive promoters appear to be bivalent, being enriched with both the repressive mark H3K27me3 and the active mark H3K4me3 ( Figure S4 H), suggesting that they may be in a poised state. Finally, for SBS loops with both anchors at promoter regions, both convergent and divergent promoters could be in close spatial contact via SBS loops and are globally downregulated by ZNF143 knockout ( Figures 4 D, 4E, S4 I, and S4J).

To further investigate the role of ZNF143 in SBS loops, we performed in situ Hi-C experiments using our ΔZNF143 cell clones, with wild-type (WT) cells as controls.
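The three-way loop partition described above (SBS-SBS, SBS-CBS, CBS-CBS) can be sketched as a simple anchor-type classifier. This is a toy illustration with made-up anchor annotations, not the hichipper output format:

```python
# Sketch: classify loops by the binding-site type at each anchor, as in the
# partition of ZNF143 HiChIP loops. Anchor annotations below are hypothetical.

def classify_loop(anchor1_type, anchor2_type):
    """Return the loop class from the two anchor types (order-insensitive)."""
    pair = tuple(sorted((anchor1_type, anchor2_type)))
    if pair == ("SBS", "SBS"):
        return "SBS-SBS"
    if pair == ("CBS", "SBS"):
        return "SBS-CBS"
    if pair == ("CBS", "CBS"):
        return "CBS-CBS"
    return "other"

loops = [("SBS", "SBS"), ("SBS", "CBS"), ("CBS", "CBS"), ("CBS", "SBS")]
counts = {}
for a1, a2 in loops:
    kind = classify_loop(a1, a2)
    counts[kind] = counts.get(kind, 0) + 1

print(counts)  # {'SBS-SBS': 1, 'SBS-CBS': 2, 'CBS-CBS': 1}
```

Sorting the anchor pair makes the classification symmetric, so SBS-CBS and CBS-SBS loops fall into one bin.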
We analyzed valid pairs of Hi-C reads at SBS loop anchors identified by ZNF143 HiChIP experiments and found a significant decrease of loop strength upon ZNF143 deletion ( Figures 4 F, 4G, and S4 K). Finally, to confirm the role of ZNF143 in SBS looping, we performed high-resolution 4C experiments using an SBS element as the anchor and observed a significant decrease of SBS looping upon ZNF143 deletion ( Figure 4 H).

To investigate the functional consequences of ZNF143 deletion, we performed RNA-seq experiments using ZNF143-deletion clones. Although gene expression both increases and decreases upon ZNF143 deletion ( Figure S4 L), there is a significant decrease of expression levels of genome-wide SBS loop-anchored promoters ( Figure 4 I). We then performed H3K4me3 ChIP-seq experiments and found a consistent decrease of H3K4me3 marks at SBS-anchored promoters ( Figures 4 D, 4E, 4J, and S4 M). Taken together, these data suggest that ZNF143 binds to promoter regions to regulate gene expression via chromatin looping.

Aberrant large-sized SBS loops upon acute CTCF degradation

We recently found that CTCF topological insulators can block enhancer-promoter (E-P) contacts regardless of whether enhancers and promoters are associated with CBS elements. 16 To investigate whether CTCF plays a role in the specificity of ZNF143-mediated E-P or P-P chromatin looping, we performed ZNF143 HiChIP experiments in auxin-treated and untreated CTCF-AID cells. As expected, CBS loop number is substantially decreased upon addition of auxin ( Figure S5 A). 48 , 49 The CBS loops identified by ZNF143 HiChIP still obey the forward-reverse convergent rule 50 ( Figure S5 B). However, the size of SBS loops appears larger in auxin-treated than in untreated cells ( Figures 5 A and S5 C). In addition, acute CTCF degradation leads to a global decrease of ZNF143 HiChIP loop numbers ( Figure S5 D).
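The loop-size comparison above is a distance calculation between anchor midpoints, summarized per condition. A toy sketch with invented anchor coordinates (real loop calls would come from hichipper output):

```python
# Sketch: compare loop-size distributions between conditions, as in the
# small-to-large SBS loop shift upon CTCF degradation. Coordinates are made up.

def loop_sizes(loops):
    """Loop size = distance between the two anchor midpoints, in bp."""
    return [abs((a1 + a2) / 2 - (b1 + b2) / 2) for (a1, a2), (b1, b2) in loops]

untreated = [((0, 10_000), (50_000, 60_000)),
             ((0, 10_000), (80_000, 90_000))]
auxin = [((0, 10_000), (400_000, 410_000)),
         ((0, 10_000), (900_000, 910_000))]

mean_untreated = sum(loop_sizes(untreated)) / len(untreated)
mean_auxin = sum(loop_sizes(auxin)) / len(auxin)
```

With insulators removed, the anchor's contacts reach much farther, so the mean loop size grows, which is the pattern the text describes.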
Interestingly, both the number and the average size of SBS loops are increased ( Figures 5 B and 5C), while the average size of CBS loops identified with ZNF143 HiChIP does not change ( Figure S5 E). Remarkably, there is a strong shift from small-sized to large-sized SBS loops upon CTCF degradation ( Figures 5 D–5F and S5 F), accompanied by a global increase in expression levels of genes associated with gained SBS loops ( Figure S5 G). This suggests that CTCF topological insulators have a role in maintaining or restraining proper SBS loop size. To confirm this observation, we performed 4C at the PCDHα and SWT1/TRMT1L loci and found that CTCF depletion results in a significant decrease of small loops but a significant increase of large loops ( Figures 5 G and 5H). Finally, we analyzed valid pairs of Hi-C reads for both CTCF-depleted and un-depleted cells across all three types of ZNF143-HiChIP loops and found a significant increase of both SBS and SBS-CBS loop strengths, but a significant decrease of CBS loop strength, upon CTCF degradation ( Figure 5 I). Overall, these data suggest that CTCF may function as a topological insulator to prevent the aberrant formation of large-sized SBS loops.

ZNF143 deletion alters TADs and compartments

To investigate the role of ZNF143 in higher-order chromatin organization, we performed in situ Hi-C experiments in our ΔZNF143 single-cell clones. We observed a significant decrease of chromatin contacts on chromosome 5 and in the PCDH clusters upon ZNF143 deletion ( Figures 6 A –6C and S6 A). Virtual 4C with HS5-1 as an anchor showed reduced chromatin interactions with PCDHα alternate variable exons ( Figure 6 D). We then deleted the two tandem SBS elements within the HS5-1 enhancer ( Figures 1 and 6 E) and performed 4C experiments with HS5-1 as the anchor. We observed a similar decrease of chromatin interactions with PCDHα alternate variable exons ( Figure 6 F).
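Virtual 4C, used above with HS5-1 as the anchor, is simply the row of a binned Hi-C contact matrix corresponding to the anchor bin. A toy sketch on a small symmetric matrix (real pipelines would extract the row from a .cool or .hic file):

```python
# Sketch: "virtual 4C" from a binned Hi-C contact matrix -- take the anchor
# bin's row of contacts with every other bin. The matrix below is a toy
# 5x5 example, not real Hi-C data.

def virtual_4c(matrix, anchor_bin):
    """Return the anchor bin's contact profile with all other bins."""
    return [matrix[anchor_bin][j] for j in range(len(matrix)) if j != anchor_bin]

contacts = [[0, 8, 6, 4, 2],
            [8, 0, 7, 5, 3],
            [6, 7, 0, 6, 4],
            [4, 5, 6, 0, 5],
            [2, 3, 4, 5, 0]]

profile = virtual_4c(contacts, anchor_bin=0)
print(profile)  # [8, 6, 4, 2]
```

Comparing such profiles between genotypes (WT vs. ΔZNF143 or ΔSBS) gives the per-target interaction changes the text reports.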
Global analyses of Hi-C data from ΔZNF143 cell clones revealed decreased TAD number ( Figure 6 G), intra-TAD contacts ( Figure 6 H), and loop strength ( Figure S6 B). In addition, ZNF143 deletion significantly weakens the strength of TAD boundaries ( Figures 6 I, S6 C, and S6D), probably because CTCF/ZNF143/cohesin co-occupied CBS elements tend to be located at TAD boundaries ( Figure S6 E). These data suggest that ZNF143 plays an important role in TAD boundary formation.

We next investigated the effect of ZNF143 deletion on chromatin segregation between compartments A and B. Compartmental signals ( Figure 6 J) and Pearson’s correlation heatmaps ( Figure 6 K) suggest that ZNF143 deletion leads to transitions from A to B compartments or vice versa. Further correlation analyses revealed that contact maps ( Figure S6 F) and compartmental signals are altered upon ZNF143 deletion ( Figure 6 L), compared to almost no alteration upon CTCF degradation ( Figure 6 M). In summary, our genetic experiments demonstrate that ZNF143 plays an important role in higher-order chromatin organization.

ZNF143 and CTCF orchestrate 3D genome

To systematically investigate the integrated role of ZNF143 and CTCF in 3D genome organization, we screened for ZNF143-deleted single-cell clones in CTCF-AID cells using CRISPR DNA fragment editing ( Figure 7 A). We then performed in situ Hi-C using ZNF143-deleted and CTCF-depleted (abbreviated as CTCFd) cells generated by treatment with auxin ( Figures 7 A and S7 A). We found that deletion of ZNF143 in CTCF-depleted cells results in a further decrease in TAD number ( Figure S7 B), intra-TAD contacts in the PCDH clusters and across the entire genome ( Figures 7 B and 7C), loop strength ( Figure 7 D), and boundary strength ( Figures 7 E and S7 C) compared to CTCF depletion alone. This suggests that ZNF143 and CTCF collaborate in forming chromatin loops.
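Compartment calls like those discussed above are conventionally the sign of the leading eigenvector of the Pearson correlation map of the observed/expected Hi-C matrix. A toy sketch using power iteration on a hand-made 4-bin correlation matrix (real analyses would use a tool such as cooltools on genome-wide data):

```python
# Sketch: A/B compartment assignment from the leading eigenvector of a
# Pearson correlation map, approximated by power iteration. The matrix is a
# toy checkerboard: bins 0-1 co-vary (one compartment), bins 2-3 the other.

def power_iteration(m, steps=200):
    """Approximate the dominant eigenvector of a symmetric matrix."""
    v = [float(i + 1) for i in range(len(m))]  # asymmetric start vector
    for _ in range(steps):
        w = [sum(m[i][j] * v[j] for j in range(len(m))) for i in range(len(m))]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    return v

corr = [[1.0, 0.9, -0.8, -0.7],
        [0.9, 1.0, -0.6, -0.9],
        [-0.8, -0.6, 1.0, 0.8],
        [-0.7, -0.9, 0.8, 1.0]]

pc1 = power_iteration(corr)
compartments = ["A" if x > 0 else "B" for x in pc1]
```

The eigenvector's overall sign is arbitrary (it is usually oriented by gene density or GC content), but bins in the same compartment share a sign, which is what compartment transitions are scored against.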
We observed a remarkable alteration in chromatin compartmentalization in ZNF143 and CTCF co-depleted cells ( Figure 7 F), in striking contrast to the absence of compartmentalization changes upon acute CTCF degradation alone ( Figure 7 G). 48 , 49 Pearson’s correlation maps revealed a similar alteration of compartmentalization upon ZNF143 deletion but not acute CTCF degradation ( Figure 7 H). Consistently, contact maps and compartment signals are altered upon ZNF143 deletion ( Figures 7 I, S7 D, and S7E) but not upon acute CTCF degradation ( Figures 7 J, S7 D, and S7E). These data reveal prominent distinctions between ZNF143 and CTCF in genome compartmentalization.

Discussion

CTCF/cohesin-mediated directional chromatin contacts between remote super-enhancers and target promoters determine promoter choice of the clustered Pcdh genes via continuous active “loop extrusion.” 7 , 16 Developmental regulation of their higher-order chromatin organization in distinct cell types in the brain enables neurons to achieve proper serotonergic axonal tiling and olfactory sensory neuronal axon convergence, as well as neocortical fine spatial arrangement and connectivity. 26 , 51 , 52 Using the PCDH clusters as model genes, we uncovered that ZNF143 mediates chromatin interactions in both CTCF-dependent and -independent manners. We first mapped the precise locations of SBS elements within the PCDHα HS5-1 enhancer. By comprehensive gel shift experiments, we found that the central ZF domain of ZNF143 directionally recognizes the HS5-1 tandem SBS elements in an anti-parallel manner. In addition, although CTCF directly interacts with the CES (conserved essential surface) of cohesin, 13 , 14 our data suggest that, at CBS elements triply colocalized with ZNF143, CTCF, and cohesin, the ∼30 CTCF residues between its cohesin-CES-binding and DNA-binding domains are flexible.
Furthermore, ZNF143 is located at this strategic position between CTCF and cohesin and functions as a “buffer sponge,” because genetic deletion of ZNF143 narrows the space between them. This is consistent with the flexible linker model for cohesin engaging the N terminus of CTCF and with their relative mapping positions. 13 , 15 Finally, acute CTCF degradation leads to a decrease of ZNF143 enrichment, while ZNF143 deletion does not alter CTCF enrichment but slightly reduces cohesin. Thus, ZNF143 may function to keep CTCF and cohesin appropriately spaced in 3D geometry and to stabilize cohesin anchoring at CBS elements during loop formation. 44

Interphase genomes are organized into complex higher-order structures including chromatin loops, TADs and subTADs, and chromatin compartments. Among the numerous proteins involved in higher-order genome organization, ZNF143 is interesting in that it is largely colocalized with CTCF and is also independently located in gene promoters. 30 , 53 Hi-C experiments with ZNF143-deleted CRISPR single-cell clones demonstrated that ZNF143 is required for the formation of SBS loops. In addition, ZNF143 HiChIP experiments, in conjunction with acute CTCF degradation, revealed that CTCF depletion results in the formation of aberrantly large-sized SBS loops, suggesting that CTCF topological insulators play a role in maintaining proper sizes of SBS loops. Moreover, genetic deletion of ZNF143, in conjunction with in situ Hi-C experiments, 54 showed that ZNF143 is crucial for chromatin compartmentalization. Previous studies suggested that ZNF143 is mainly associated with E2F-bound promoters and CTCF-bound enhancers. 29 , 30 , 34 , 35 , 38 , 40 , 41 , 53 , 54 ZNF143 might regulate genome domains of A/B compartments by altering the activity of SBS-associated promoters. Finally, we found that ZNF143 and CTCF collaborate to regulate genome topology by combining genetic ZNF143 deletion with acute CTCF degradation.
Recent studies suggest that ZNF143 mediates the formation of short-range chromatin loops; 40 , 55 however, its DNA recognition mechanism and role in loop formation remain unclear. In particular, Zhou et al. 41 suggested that CTCF and ZNF143 binding sites are 37 bp apart in the convergent orientation. However, almost all of these convergent motifs lie within repetitive sequences, such as SINEs, and reside within neither CTCF nor ZNF143 ChIP-seq peaks. Although ZNF143 and CTCF colocalize at CBS elements, our EMSA experiments suggest that ZNF143 does not directly bind CTCF sites. In addition, our ChIP-nexus data showed that CTCF degradation decreases ZNF143 enrichments at CBS elements, suggesting that CTCF recruits ZNF143 to CBS elements. We propose a looping model for the dichotomous functions of ZNF143 in genome architecture ( Figure 7 K). At CTCF sites, ZNF143 stabilizes CTCF-cohesin interactions to regulate chromatin contacts via cohesin “loop extrusion.” ZNF143 also directly binds SBS elements in an anti-parallel manner to regulate promoter-promoter/enhancer contacts. Through activating SBS-associated promoters, ZNF143 can influence higher-order A/B compartmentalization.

Limitations of the study

While our CTCF and cohesin HiChIP experiments demonstrate that ZNF143 deletion weakens the strength of CBS loops, and our Hi-C and 4C experiments suggest that ZNF143 deletion abolishes SBS loops between promoters and/or enhancers, we cannot rule out that other factors also play an essential role in SBS looping between promoters and/or enhancers. In addition, although our data suggest that ZNF143 participates in CTCF/cohesin loop extrusion and is thus an important 3D genome architectural protein, we cannot rule out that the alteration of 3D genome organization upon ZNF143 deletion is due to transcriptional changes, since ZNF143 is a known transcription factor.
Finally, while CTCF was degraded to an undetectable level in western blot, removal of CTCF from chromatin by an acute inducible degron system is known to be incomplete because CTCF is essential for the survival of cultured cells. The residual CTCF protein may still be present at TAD boundaries and function as a topological insulator.

STAR★Methods

Key resources table

REAGENT or RESOURCE | SOURCE | IDENTIFIER

Antibodies
anti-ZNF143 | Proteintech | Cat#16618-1-AP; RRID: AB_2218324
anti-CTCF | Abcam | Cat#Ab70303; RRID: AB_1209546
anti-RAD21 | Abcam | Cat#Ab992; RRID: AB_2176601
anti-H3K4me3 | Millipore | Cat#17–678; RRID: AB_1977250
anti-Actin | Abmart | Cat#M20011S; RRID: AB_2936240
Goat IgG anti-Mouse-680 | Invitrogen | Cat#A-21057; RRID: AB_2535723
Goat IgG anti-Rabbit-800 | LI-COR | Cat#925–32211; RRID: AB_2651127
anti-Myc-Tag | Millipore | Cat#05–724; RRID: AB_309938
anti-H3K27me3 | Cell Signaling Technology | Cat#9733; RRID: AB_2616029

Chemicals, peptides, and recombinant proteins
16% Formaldehyde | Thermo | Cat#28908
Klenow Fragment (3’→5’ exo-) | NEB | Cat#M0212L
dATP Solution | NEB | Cat#N0440S
dNTP Solution | NEB | Cat#N0446S
T4 DNA Polymerase | NEB | Cat#M0203S
Lambda Exonuclease | NEB | Cat#M0262S
RecJf Exonuclease | NEB | Cat#M0264S
Cocktail | Thermo | Cat#78443
RNase A | Thermo | Cat#EN0531
Proteinase K | NEB | Cat#P8107S
CircLigase | Epicentre | Cat#CL4111K
FastDigest BamHI | Thermo | Cat#FD0054
MboI | NEB | Cat#R0147M
DpnII | NEB | Cat#R0543S
Biotin-14-dATP | Thermo | Cat#19524016
Glycogen | Thermo | Cat#R0561
Indole-3-acetic acid sodium salt | Sigma | Cat#I5148-2G
DNA Polymerase I, Large (Klenow) Fragment | NEB | Cat#M0210L
USER Enzyme | NEB | Cat#M5505S
T4 DNA Ligase | NEB | Cat#M0202S
T4 DNA Ligase Reaction Buffer | NEB | Cat#B0202S
AMPure XP Beads | Beckman | Cat#A63881
Dynabeads M-280 Streptavidin beads | Thermo | Cat#11206D
Magna ChIP Protein G Magnetic Beads | Millipore | Cat#16-662
ChIP agarose A beads | Millipore | Cat#16-157
NEBNext Ultra II Q5 Master Mix | NEB | Cat#M0544S
NEBNext End Repair Module | NEB | Cat#E6050S

Critical commercial assays
NEBNext Ultra II RNA Library Prep Kit | NEB | Cat#E7770S
NEBNext Ultra II DNA Library Prep Kit | NEB | Cat#E7645S
TnT Quick Coupled Transcription/Translation System | Promega | Cat#L1170
QIAGEN MinElute Gel Extraction Kit | Qiagen | Cat#28604
LightShift Chemiluminescent EMSA Kit | Thermo | Cat#20148
ClonExpress MultiS One Step Cloning Kit | Vazyme | Cat#C113
TruePrep DNA Library Prep Kit V2 for Illumina | Vazyme | Cat#TD502-01
Q5 Site-Directed Mutagenesis Kit | NEB | Cat#E0554S

Deposited data
Raw and analyzed data | This paper | GSE236637
Raw and analyzed data | This paper | GSE248737
Raw data | This paper | Mendeley Data: https://doi.org/10.17632/5ynn9zjxw6.1

Experimental models: Cell lines
Human: HEC-1-B | Guo et al. 10 | N/A
Human: HEC-1-B CTCF-AID | This paper | N/A
Human: HEC-1-B ZNF143KO_clone1 | This paper | N/A
Human: HEC-1-B ZNF143KO_clone2 | This paper | N/A
Human: HEC-1-B ΔSBS | This paper | N/A
Human: Hec1B CTCF-AID ZNF143KO | This paper | N/A

Oligonucleotides
Oligonucleotides are listed in Table S2 | Table S2 | N/A

Recombinant DNA
Plasmid: pGL3-U6-sgRNA-PGK-Puro | Li et al. 45 | https://academic.oup.com/jmcb/article/7/4/284/901042
Plasmid: pGL3-sgZNF143-Exon6-PGK-puro | This paper | N/A
Plasmid: pGL3-sgZNF143-Exon10-PGK-puro | This paper | N/A
Plasmid: pGEM T-Easy AAVS-OsTIR1 | This paper | N/A
Plasmid: pGL3-sgAAVS-insertion_OsTir1-PGK-puro | This paper | N/A
Plasmid: pGEM T-Easy CTCF-AID | This paper | N/A
Plasmid: pGL3-sgCTCF-insertion_AID-PGK-puro | This paper | N/A
Plasmid: pTNT-ZNF143 | This paper | N/A
Plasmid: pTNT-ZNF143 ZF1-7-myc | This paper | N/A
Plasmid: pTNT-ZNF143 ZF2-7-myc | This paper | N/A
Plasmid: pTNT-ZNF143 ZF3-7-myc | This paper | N/A
Plasmid: pTNT-ZNF143 ZF4-7-myc | This paper | N/A
Plasmid: pTNT-ZNF143 myc-ZF1-7 | This paper | N/A
Plasmid: pTNT-ZNF143 myc-ZF2-7 | This paper | N/A
Plasmid: pTNT-ZNF143 myc-ZF3-7 | This paper | N/A
Plasmid: pTNT-ZNF143 myc-ZF4-7 | This paper | N/A
Plasmid: pTNT-ZNF143-3xflag | This paper | N/A
Plasmid: pTNT-CTCF-3xmyc | This paper | N/A
Plasmid: pTNT-RAD21-3xmyc | This paper | N/A

Software and algorithms
Bowtie2 v2.3.5.1 | Langmead et al. 56 | https://github.com/BenLangmead/bowtie2
HiC-Pro v3.0.0 | Servant et al. 57 | https://github.com/nservant/HiC-Pro
Hichipper v0.7.7 | Lareau et al. 47 | https://hichipper.readthedocs.io/en/latest/index.html
Deeptools v3.5.3 | Ramirez et al. 46 | https://deeptools.readthedocs.io/en/latest/
Bedtools v2.30.0 | Quinlan 58 | https://bedtools.readthedocs.io/en/latest/index.html
Samtools v1.12 | Li et al. 59 | http://www.htslib.org/doc/
MACS v2.2.7.1 | Feng et al. 60 | https://docs.csc.fi/apps/macs2/
Cufflinks v2.2.1 | Trapnell et al. 61 | http://cole-trapnell-lab.github.io/cufflinks/
r3Cseq v1.38.0 | Thongjuea et al. 62 | https://bioconductor.org/packages/release/bioc/html/r3Cseq.html
DESeq2 v1.32.0 | Love et al. 63 | https://bioconductor.org/packages/release/bioc/html/DESeq2.html
Cutadapt v2.10 | Martin et al. 64 | https://cutadapt.readthedocs.io/en/stable/
MEME v4.12.0 | Bailey et al. 65 | https://meme-suite.org/meme/
STAR v2.7.3a | Dobin et al. 66 | https://github.com/alexdobin/STAR
Vennerable v3.1.0.9000 | Open source | https://rdocumentation.org/packages/Vennerable/versions/3.1.0.9000
ggplot2 v3.4.2 | Open source | https://ggplot2.tidyverse.org/
Juicer tools v1.19.02 | Durand et al. 67 | https://github.com/aidenlab/juicertools
FAN-C v0.9.10 | Kruse et al. 68 | https://fan-c.readthedocs.io/en/latest/index.html
Cooltools v0.5.2 | Crane et al. 69 | https://cooltools.readthedocs.io/en/latest/
coolpup.py v1.0.0 | Flyamer et al. 70 | https://coolpuppy.readthedocs.io/en/latest/
Chromosight v1.6.3 | Matthey-Doret et al. 71 | https://chromosight.readthedocs.io/en/latest/TUTORIAL.html
hic2cool v0.8.3 | Open source | https://github.com/4dn-dcic/hic2cool
UCSC Genome Browser | https://genome.ucsc.edu/ | https://genome.ucsc.edu/
WashU browser | https://epigenomegateway.wustl.edu/ | https://epigenomegateway.wustl.edu/

Resource availability

Lead contact

Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Qiang Wu ( qiangwu@sjtu.edu.cn ).
Materials availability

All unique/stable reagents generated in this study are available from the lead contact with a completed materials transfer agreement.

Data and code availability

(1) High-throughput sequencing data and processed files have been deposited in the Gene Expression Omnibus (GEO) database under accession numbers GSE236637 and GSE248737 . Raw data from Figures 1 , 3 , 4 , 5 , S2–S5 , and S7 were deposited on Mendeley at https://doi.org/10.17632/5ynn9zjxw6.1 .
(2) This paper does not report original code.
(3) Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.

Experimental model and study participant details

Cells and culture conditions

Human HEC-1-B cells were cultured in MEM with L-glutamine and Earle’s balanced salts (Hyclone) supplemented with 1 mM sodium pyruvate (Sigma), 1% penicillin-streptomycin (Gibco), and 10% fetal bovine serum (Gibco). HCT116 cells containing the OsTIR1 gene integrated at the AAVS1 locus and the mAID-mClover tag integrated after the last codon of the RAD21 gene (RAD21-mAC HCT116) 42 were cultured in RPMI 1640 medium with L-glutamine (Gibco) supplemented with 1% penicillin-streptomycin and 10% fetal bovine serum. All cells were maintained at 37°C in a humidified 5% CO 2 incubator. For degradation of AID-tagged proteins, OsTIR1-expressing cells were treated with 500 μM indole-3-acetic acid (IAA, auxin) for 24 h. For the cell colony forming assay, 1,000 WT or ΔZNF143 HEC-1-B cells were seeded in each well of six-well plates. After one week, the cells were fixed with 4% paraformaldehyde at RT for 30 min, stained with crystal violet solution for 30 min, and photographed under a microscope.
Method details

Plasmid construction

To construct the donor plasmid for OsTIR1 expression from the AAVS1 locus, a codon-optimized OsTIR1 gene was amplified from the genomic DNA of RAD21-mAC HCT116 cells by PCR and cloned into the SpeI and BamHI sites under the control of the CMV immediate-early promoter ( P CMV-IE ) in the pLVX-IRES-Puro plasmid (Clontech). The upstream (AAVS1U) and downstream (AAVS1D) homology arms were amplified from HEC-1-B genomic DNA by PCR. AAVS1U, P CMV-IE -OsTIR1-IRES-Puro, and AAVS1D were then assembled and cloned into pGEM-T-Easy (Promega) to generate the pGEM-AAVS1-OsTIR1-Puro donor plasmid.

To construct the CTCF-AID donor plasmid, a 68-aa mAID tag was amplified from RAD21-mAC HCT116 genomic DNA by PCR. Homology arms were designed to allow in-frame C-terminal fusion of mAID to the CTCF gene. The upstream homology arm (CTCF-U) corresponds to the sequence upstream of the CTCF stop codon, and the downstream homology arm (CTCF-D) corresponds to the sequence downstream of the last CTCF codon. CTCF-U, mAID, and CTCF-D were assembled and cloned into pGEM-T-Easy to generate the pGEM-CTCF-AID donor plasmid.

The sgRNA expression plasmids targeting AAVS1 , CTCF , or ZNF143 were generated as previously described. 45 Briefly, target-specific sgRNA oligos were ordered and cloned into the BsaI site of pGL3-U6-sgRNA-PGK-Puro for sgRNA transcription. All primers used are listed in Table S2 . All plasmid constructs were confirmed by Sanger sequencing.

To construct the 3x Myc-tagged CTCF or RAD21 expression plasmid, the coding sequences for the 3x Myc tag were amplified by PCR and inserted into pcDNA3.1 to generate pcDNA3.1-3x Myc. CTCF or RAD21 coding sequences were amplified by PCR and cloned into the NotI/KpnI sites of pcDNA3.1-3x Myc to generate the pcDNA3.1-CTCF-3x Myc or pcDNA3.1-RAD21-3x Myc plasmid.
To construct the Flag-tagged ZNF143 expression plasmid, coding sequences for ZNF143 and the 3× Flag fused peptide were PCR amplified and cloned into the Eco R I/ Bam H I sites of the pcDNA3.1 plasmid to generate the pcDNA3.1-ZNF143-3× Flag plasmid.

Preparation of truncated zinc finger domains of ZNF143

Full-length coding sequences (CDS) of human ZNF143 were amplified with PCR from a cDNA library of human cells. Each truncated ZNF143 containing different combinations of zinc fingers (ZF) was fused with a Myc-tag sequence at the 5′ end (ZF1-7, ZF1-6, ZF1-5, ZF1-4, ZF1-3) or at the 3′ end (ZF1-7, ZF2-7, ZF3-7, ZF4-7, ZF5-7) through amplification with PCR from the full-length ZNF143 CDS with specific primers ( Table S2 ) and cloned into the Eco R I and Not I/ Xba I sites of the pTNT vector (Promega L5610) under the control of the T7 promoter. Each construct was confirmed by Sanger sequencing and used as a DNA template to generate the corresponding truncated ZNF143 polypeptide in vitro using the TNT T7 Quick Coupled Transcription/Translation System (Promega L1170) according to the manufacturer's instructions. For each reaction, 200 ng of pTNT plasmid was mixed with 8 μL of TNT Quick Master Mix and 0.2 μL of 1 mM methionine and diluted with H2O to a final reaction volume of 10 μL. The reaction mixture was incubated at 30°C for 90 min. The synthesized protein was confirmed by Western blot using an anti-Myc-tag antibody (Millipore 05–724), aliquoted, and stored at −80°C.

Electrophoretic mobility shift assay (EMSA)

EMSA experiments were performed as previously described, with slight modification. Briefly, each DNA fragment containing probe sequences was amplified from human genomic DNA with PCR and cloned into the pGEM-T Easy vector to generate the template plasmid. The resulting template plasmid was used to generate the mutated template plasmid using the Q5 Site-Directed Mutagenesis Kit (NEB E0554S). All of the generated template plasmids were confirmed by Sanger sequencing.
Each probe was generated by PCR using a 5′ biotin-labeled forward primer and a reverse primer ( Table S2 ) and gel-purified. Probe concentration was measured with a NanoDrop (Thermo) and adjusted to 100 fmol/μL. The LightShift Chemiluminescent EMSA Kit (Thermo 20148) was then used for the EMSA experiments. The probe was incubated with 1x binding buffer, 0.05 μg/μL poly (dI-dC), 2.5% (v/v) glycerol, 0.1% NP-40, 5 mM MgCl2, 0.1 mM ZnSO4, and the synthesized protein at room temperature for 20 min. For supershift, an anti-Myc tag antibody was added and the mixture was incubated for an additional 20 min at room temperature. After incubation, the reaction mixture was separated on a 5% nondenaturing polyacrylamide gel, which had been pre-electrophoresed for 1 h in ice-cold 0.5x TBE buffer (45 mM Tris-borate, 1 mM EDTA, pH 8.0). After transfer to a nylon membrane and crosslinking, the membrane was blocked with 1x blocking buffer for 10 min at room temperature, washed three times with 1x washing buffer, and sequentially incubated with substrate equilibration buffer for 10 min at room temperature and with Stabilized Streptavidin-Horseradish Peroxidase Conjugate. The ChemiDoc XRS+ system (Bio-Rad) was used to detect the biotin-labeled probe.

CRISPR screening of single-cell ZNF143-deletion clones

The ZNF143-deletion cell clones were generated using CRISPR/Cas9-mediated DNA-fragment editing, with dual sgRNAs targeting exons 6 and 11 of the endogenous ZNF143 gene. Briefly, HEC-1-B cells were cultured in 6-well plates until ∼80% confluent. Cells were then transfected with 1 μg pcDNA3.1-Cas9 and 1 μg dual sgRNA expression plasmids per well using Lipofectamine 3000 (Invitrogen). Transfected cells were cultured in medium containing 2 μg/mL puromycin for 3 days. After puromycin selection, the cells were dissociated with trypsin, resuspended, diluted, and seeded into 96-well plates at roughly 1 cell per well.
After ∼2 weeks of culturing, single-cell clones were picked under a microscope and genotyped by PCR using specific primers ( Table S2 ). Positive clones were confirmed by Sanger sequencing ( Figure S8 ). The loss of ZNF143 protein in the knockout cells was also confirmed by Western blot using an anti-ZNF143 antibody (Abcam). We obtained two homozygous ZNF143 deletion clones, which grow very slowly because ZNF143 is known to regulate cell cycle gene expression.

Western blot

Cells within the 6-well plate were washed once with PBS and lysed in RIPA buffer (50 mM Tris-HCl pH 7.4, 150 mM NaCl, 1% Triton X-100, 1% sodium deoxycholate, 0.1% SDS) containing 1x protease inhibitors. The protein lysates were then denatured at 100°C for 10 min. Protein concentration was determined by the BCA protein assay. Equal amounts of protein were separated on SDS-PAGE and transferred to a nitrocellulose membrane. After blocking with 5% fat-free dry milk in PBST (PBS containing 0.1% Tween 20) for 1 h at room temperature, the membrane was washed three times with PBST and incubated with the primary antibody at 4°C overnight with slow rotation. After washing three times with PBST, the membrane was finally incubated with a fluorescent-dye-conjugated secondary antibody for 2 h at room temperature, washed four times with PBS, and scanned on the Odyssey System (LI-COR Biosciences).

ChIP-nexus

ChIP-nexus experiments were performed as previously described with slight modification. For each ChIP-nexus experiment, 1 × 10^7 cells were crosslinked with 1% formaldehyde and quenched with glycine. Crosslinked cells were lysed twice with 1 mL of ChIP buffer (10 mM Tris-HCl pH 7.5, 1 mM EDTA, 1% Triton X-100, 0.1% sodium deoxycholate, 150 mM NaCl, 1x protease inhibitors) and sonicated for DNA fragmentation.
After centrifugation at 12,000 rpm for 10 min, the supernatant was transferred into a new tube and incubated with a specific antibody overnight at 4°C with slow rotation. Magnetic protein A/G beads were added to enrich the targeted chromatin. The beads were washed to remove nonspecific binding. The enriched DNA was eluted and purified for library preparation. The DNA ends were blunted using the NEBNext End Repair Module (NEB E6050S), dATP was added using Klenow exo− (NEB M0212S), and the ends were ligated to specific adaptors (Nex_adapter_U Bam H I and Nex_adapter_BN5 Bam H I, Table S2 ). The 5′ overhang of the adaptor-ligated DNA was filled in using Klenow exo− (NEB M0212S) and T4 DNA polymerase (NEB M0203S). The generated blunt-end DNA was digested with Lambda Exonuclease (NEB M0262S) and RecJf exonuclease (NEB M0264S). After reverse-crosslinking at 65°C overnight, DNA was extracted using phenol/chloroform and precipitated with ethanol containing sodium acetate and glycogen. The purified DNA was denatured to generate ssDNA, which was self-circularized with ssDNA Ligase (Epicenter CL4111K). The circular DNA was annealed with an oligonucleotide (Nex_cut_ Bam H I, Table S2 ) containing the Bam H I site. After digestion with Bam H I and re-precipitation with ethanol, the DNA was used for construction of the ChIP-nexus library. The DNA library was gel-purified and sequenced on an Illumina platform.

Co-immunoprecipitation (Co-IP)

293T cells were grown to ∼80% confluence in a 6-well plate and transfected with pcDNA3.1-proteinA-3× Flag and pcDNA3.1-proteinB-3x Myc, or pcDNA3.1 (control) plasmids using Lipofectamine 3000 (Invitrogen) according to the manufacturer's instructions. For each transfection, 1 μg of pcDNA3.1-ZNF143-3× Flag and 1 μg of pcDNA3.1-CTCF-3x Myc or pcDNA3.1-RAD21-3x Myc plasmid were mixed with 5 μL of P3000 reagent and 7.5 μL of Lipofectamine 3000 reagent in 200 μL of DMEM.
Two days after transfection, the cells were washed once with cold PBS, lysed with 500 μL of cold lysis buffer (20 mM Tris-HCl pH 7.5, 150 mM NaCl, 1 mM EDTA, 1% Triton X-100, and protease inhibitor) on ice for 10 min, and spun at 14,000 g for 15 min. The supernatant was transferred, mixed with 10 μL of anti-Flag antibody and 30 μL of magnetic protein G beads (Millipore 16–662), and incubated at 4°C overnight with slow rotation. The beads were washed 6 times with ice-cold washing buffer (10 mM Tris-HCl pH 7.5, 650 mM NaCl, 0.15 mM EDTA, and 0.5% Triton X-100). The immunoprecipitated protein was denatured with 20 μL of 2x protein loading buffer at 100°C for 10 min and used for detection of the Flag or Myc signal by Western blot. IgG was used as a negative control for immunoprecipitation.

CRISPR screening of single-cell CTCF-AID clones

The CTCF-AID HEC-1-B cells were generated in two steps using CRISPR/Cas9-mediated editing. In the first step, an OsTIR1 expression cassette ( P CMV-IE -OsTIR1-IRES-Puro) was integrated into the AAVS1 safe harbor locus of the HEC-1-B cells to generate the parental cell line. In the second step, an AID cassette was fused to CTCF upstream of the stop codon in the CTCF gene of the OsTIR1-expressing parental cells. For each editing step, cells were cultured in 6-well plates until ∼80% confluent. Cells were then transfected with 1 μg of pcDNA3.1-Cas9, 0.5 μg of the donor plasmid, and 0.5 μg of sgRNA expression plasmid per well using Lipofectamine 3000 (Invitrogen). Transfected cells were cultured in medium containing 2 μg/mL puromycin for 3 days. After puromycin selection, the cells were dissociated with trypsin, resuspended, diluted, and seeded into 96-well plates. After ∼2 weeks of continuous culturing, single-cell clones were picked under a microscope and genotyped by PCR using specific primers ( Table S2 ). Positive clones were confirmed by Sanger sequencing ( Figure S8 ).
RNA-seq

For each RNA-seq experiment, about one million cells were washed twice with PBS, lysed with 1 mL Trizol (Invitrogen 15596026) for 15 min at room temperature, supplemented with 0.2 mL chloroform, vortexed, incubated for 2–3 min at room temperature, and centrifuged at 12,000 rpm for 15 min at 4°C. The aqueous phase was transferred to a new tube, mixed with 0.5 mL isopropanol, incubated at room temperature for 10 min, and centrifuged at 12,000 rpm for 10 min at 4°C to obtain the total RNA pellet. To remove salts, the pellet was washed with 75% ethanol. Finally, the pellet was dissolved in 100 μL of nuclease-free water. The total RNA was purified using the RNeasy kit (QIAGEN 75142) to remove residual DNA. One μg of purified RNA was used to extract mRNA with the NEBNext Poly(A) mRNA Magnetic Isolation Module (NEB E7490S). The RNA-seq library was constructed using the NEBNext Ultra II RNA Library Prep Kit and sequenced on an Illumina platform.

ChIP-seq

For each ChIP-seq experiment, 10–20 million cells were collected, washed twice with PBS, digested with trypsin, and resuspended in 10 mL medium. Formaldehyde (Thermo 28908) was added to a final concentration of 1% for cross-linking at room temperature for 10 min. Glycine was added to a final concentration of 125 mM and incubated at room temperature for 5 min to quench the cross-linking reaction. Cross-linked cells were centrifuged at 2,500 g for 10 min at 4°C. The cell pellets were washed with ice-cold PBS. Cells were lysed twice using 1 mL ice-cold ChIP buffer 1 (10 mM Tris-HCl pH 7.5, 1 mM EDTA, 1% Triton X-100, 0.1% sodium deoxycholate, 150 mM NaCl, 1x protease inhibitors) at 4°C for 10 min with slow rotation and spun at 2,500 g for 5 min at 4°C to obtain cell nuclei.
The isolated nuclei were resuspended in 0.7 mL ChIP buffer 1, incubated on ice for 10 min, and sonicated in a non-contact manner with a Bioruptor Plus Sonicator (Diagenode) at high intensity for 30 rounds of 30 s on/30 s off to generate 100–10,000 bp DNA fragments. The sonicated samples were spun at 14,000 g for 10 min at 4°C. The supernatants were transferred to a new tube and precleared with 50 μL agarose protein A beads (Millipore 16–157). The primary antibody was added and incubated overnight at 4°C with slow rotation for immunoprecipitation. Agarose protein A beads (50 μL) were added and incubated at 4°C with slow rotation for 3 h. The samples were spun at 2,000 g for 1 min and washed sequentially with ChIP buffer 1, ChIP buffer 2 (10 mM Tris-HCl pH 7.5, 1 mM EDTA, 1% Triton X-100, 0.1% sodium deoxycholate, 400 mM NaCl), ChIP buffer 3 (10 mM Tris-HCl pH 7.5, 1 mM EDTA, 1% Triton X-100, 0.1% sodium deoxycholate), and ChIP buffer 4 (50 mM HEPES pH 7.5, 1 mM EDTA, 1% NP-40, 0.7% sodium deoxycholate, 500 mM LiCl). The washed antibody/protein/DNA complexes were eluted twice with 100 μL elution buffer (50 mM Tris-HCl pH 8.0, 10 mM EDTA, 1% SDS) by incubation at 65°C for 30 min with vortexing. The 200 μL of eluted solution was mixed with 200 μL TE buffer, de-cross-linked at 65°C overnight with vortexing, and sequentially digested with 2 μL RNase A at 37°C for 2 h and with 8 μL proteinase K at 55°C for 2 h. The DNA was purified with 400 μL phenol/chloroform, precipitated, and resuspended in 20 μL nuclease-free water. DNA concentration was measured with PicoGreen reagents. 10 ng of DNA was used for library construction using the NEBNext Ultra II DNA Library Prep Kit for Illumina (NEB). Libraries were sequenced on an Illumina HiSeq platform.

HiChIP

We performed HiChIP experiments as described recently. Briefly, for each HiChIP experiment, ∼10 million cells were collected, spun down at 500 g, and resuspended in 10 mL fresh medium.
The cells were crosslinked with 1% formaldehyde for 10 min at room temperature with slow rotation, and the crosslinking reaction was quenched with 125 mM glycine. The crosslinked cells were spun down at 800 g for 5 min, washed once with ice-cold PBS, and lysed twice with 1 mL ice-cold lysis buffer (10 mM Tris-HCl pH 8.0, 10 mM NaCl, 0.2% NP-40, 1x protease inhibitors) for 15 min at 4°C with slow rotation to obtain nuclei. The samples were spun down at 2,500 g at 4°C for 5 min. The pellets were resuspended in 100 μL of 0.5% SDS solution and incubated at 62°C for 10 min. 285 μL of H2O and 50 μL of 10% Triton X-100 were added and incubated at 37°C for 15 min. 50 μL of 10× NEBuffer 2 and 300 U of Mbo I were then added and rotated at 37°C for 2 h. To fill in the restriction fragment overhangs and label the DNA ends with biotin, 29.5 μL of mixture (15 μL of 1 mM biotin-14-dATP, 1.5 μL of 10 mM dGTP, 1.5 μL of 10 mM dCTP, 1.5 μL of 10 mM dTTP, and 50 U of the Large Klenow Fragment of DNA Polymerase I) was added and rotated at 37°C for 1 h. The ligation mix (660 μL of H2O, 150 μL of 10× T4 DNA ligase buffer, 125 μL of 10% Triton X-100, 3 μL of 50 mg/mL BSA, and 400 U of T4 DNA ligase) was then added and rotated at room temperature for 4 h. The nuclei were pelleted at 2,500 g for 5 min at room temperature and the supernatant was removed. The samples were sonicated with a high energy setting at a train of 30 s sonication with 30 s intervals for 15 cycles using a Bioruptor Sonicator. After removing the insoluble debris, the cell lysate was pre-cleared with 40 μL of protein A-agarose beads (Millipore) for 2 h at 4°C with slow rotation. The immunoprecipitated DNA was enriched as described above for ChIP-seq, dissolved in 20 μL of 10 mM Tris-HCl pH 7.5, and then sonicated with a high energy setting at a train of 30 s sonication with 30 s intervals for 5 cycles using a Bioruptor Sonicator.
A Vazyme DNA library preparation kit was used to construct the HiChIP library with modifications. After the end-repair and DNA adaptor ligation steps, 10 μL of washed streptavidin beads were resuspended in 100 μL of 2× Biotin Binding buffer (10 mM Tris-HCl pH 7.5, 1 mM EDTA, and 2 M NaCl). The streptavidin beads were added to the ligated DNA and rotated for 15 min at room temperature to enrich biotin-labeled DNA. The captured beads were washed twice with 1× Tween Washing buffer (5 mM Tris-HCl pH 7.5, 0.5 mM EDTA, 1 M NaCl, and 0.05% Tween 20). The washed beads were resuspended in 20 μL of H2O and amplified by PCR (95°C, 3 min; 98°C, 20 s, 60°C, 15 s, 72°C, 30 s for 13 cycles; and a final extension at 72°C, 5 min). After DNA purification, all HiChIP libraries were sequenced on an Illumina platform.

4C

4C experiments were performed as previously described with slight modification. Briefly, ∼2 million cells were fixed with 2% formaldehyde and permeabilized with ice-cold permeabilization buffer (50 mM Tris–HCl pH 7.5, 150 mM NaCl, 5 mM EDTA, 0.5% NP-40, 1% Triton X-100, and 1× protease inhibitors). The permeabilized cells were digested with Dpn II (NEB R0543S). The digested DNA fragments were ligated with T4 DNA ligase (NEB M0202S) and de-crosslinked. The ligated DNA was purified by phenol/chloroform extraction and ethanol precipitation and sonicated to 200–600 bp fragments. Targeted fragments were linearly amplified using a 5′ biotin-labeled primer specific to the anchor region ( Table S2 ). The amplified ssDNA was enriched with Streptavidin beads (ThermoFisher 11206D), ligated with adapters (Adaptor-upper and Adaptor-lower, Table S2 ), and amplified by PCR with anchor-specific P5-forward primers ( Table S2 ) and indexed P7-reverse primers. The amplified libraries were sequenced on an Illumina platform.

In situ Hi-C

In situ Hi-C experiments were performed as previously described with slight modification.
Briefly, for each Hi-C experiment, 5 million cells were cross-linked with 1% formaldehyde at room temperature for 10 min, quenched with 0.125 M glycine, and lysed twice with 250 μL of ice-cold lysis buffer (10 mM Tris-HCl pH 8.0, 10 mM NaCl, 0.2% NP-40) to obtain nuclei. The nuclei were then incubated with 50 μL of 0.05% SDS at 62°C for 10 min, quenched with Triton X-100, and digested with 100 units of Mbo I (NEB R0147M) in a 250 μL volume overnight at 37°C. After heat inactivation of Mbo I at 62°C for 20 min, the Mbo I DNA ends were filled in and labeled with biotin by adding 50 μL of fill-in mix containing biotin-14-dATP (Thermo 19524016), dCTP, dTTP, dGTP, and DNA Polymerase I, Large (Klenow) Fragment (NEB M0210L) and incubating at 37°C for 1 h. After proximity ligation at room temperature for 4 h using T4 DNA ligase (NEB M0202S), the samples were reverse cross-linked at 68°C overnight. DNA was precipitated with ethanol and sonicated for fragmentation. 300–500 bp DNA fragments were selected using AMPure XP beads (Beckman A63881). Streptavidin beads (Thermo 11206D) were then used to enrich the biotin-labeled DNA fragments. The on-bead biotin-DNA was washed stringently with two rounds of 2-min vortexing at 55°C and used for library construction. The NEB end-repair module (NEB E6050S) was used to blunt DNA ends and Klenow exo− (NEB M0212S) was used to add an "A" at the 3′ ends. Illumina U-type adaptors were ligated to both DNA ends. The ligated adaptors were cleaved with the USER enzyme (NEB M5505S). After 10–12 cycles of PCR amplification with Illumina P5/P7 primers, the DNA products were purified with AMPure XP beads (Beckman A63881) and paired-end sequenced on an Illumina NovaSeq platform.

Data analysis of ChIP-nexus

Raw reads were trimmed using Cutadapt (v2.10) to remove the first 10 bp barcode and adaptor sequences. All reads at least 20 bp in length after trimming were aligned to the human reference genome GRCh37/hg19 using Bowtie2 (v2.3.5.1).
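The barcode-trim and length-filter step described above can be sketched in Python; this is a simplified stand-in for the actual Cutadapt run, using the 10 bp barcode length and 20 bp minimum length stated in the text (adaptor removal is omitted):

```python
def trim_and_filter(reads, barcode_len=10, min_len=20):
    """Drop the first `barcode_len` bases of each read and keep only
    reads that are at least `min_len` bases long after trimming."""
    trimmed = (r[barcode_len:] for r in reads)
    return [t for t in trimmed if len(t) >= min_len]
```

For example, a 35 bp read survives (25 bp after trimming), while a 25 bp read is discarded (15 bp after trimming).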
Reads from repeated samples were merged, sorted, and indexed using Samtools (v1.12). Narrow peaks were called using MACS2 (v2.2.7.1) with a q-value threshold of 0.001. Bedtools (v2.30.0) was used to select overlapping peaks and the R package Vennerable (v3.1.0.9000) was used to generate Venn diagrams. The genomecov function of Bedtools was used to calculate the coverage of the mapped reads. The summits of read coverage in peaks around the forward or reverse CTCF motifs were collected to generate violin plots and boxplots with ggplot2. Read counts were normalized to reads per kilobase per million mapped reads (RPKM) using the bamCoverage module of Deeptools (v3.5.1) with a bin size of 20 bp and converted to bedGraph format for visualization in the UCSC genome browser. Heatmaps were generated using the plotHeatmap module of Deeptools. Scatterplots were generated by a Python script with peak read counts calculated by Bedtools. The coverage of the 5′ ends of the positive or negative strand was calculated using the genomecov module of Bedtools. Footprint density profiles of CTCF, ZNF143, and RAD21 for the forward and reverse CBS elements were generated using ggplot2.

DNA motif analysis

The MEME suite (v4.12.0) was used for DNA motif analysis. For the SBS element, ZNF143 ChIP-nexus narrow peaks not overlapping with CTCF peaks were used for motif searching. The 1 kb upstream regions of peaks were used to establish a background model using fasta-get-markov. Two thousand random peaks were used for motif finding with the following parameters: -revcomp -w 20 -mod zoops -env 0.0001. The top motif was used as the core motif. The 20 bp regions flanking the core motif were used for motif finding again with a threshold of 0.001. Motifs were combined to form different motif types using the SpaMo module of MEME. The MEME FIMO module was used to find the binding positions and orientations of ZNF143 binding sites.
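The RPKM normalization applied above follows the standard definition (read count divided by region length in kilobases and by total mapped reads in millions), which can be written as a one-line Python function:

```python
def rpkm(read_count, region_length_bp, total_mapped_reads):
    """Reads per kilobase per million mapped reads for one region."""
    return read_count / (region_length_bp / 1_000) / (total_mapped_reads / 1_000_000)
```

For instance, 200 reads over a 1 kb region in a library of 20 million mapped reads gives an RPKM of 10.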
All motif types were scanned sequentially in all peaks. Once a motif was found, the peak was excluded from subsequent scans.

Data analysis of ChIP-seq

Reads from ChIP-seq were mapped to the human reference genome GRCh37/hg19 using Bowtie2. Peaks were called using MACS2 with default parameters. All mapped SAM files were sorted and indexed using Samtools and normalized to RPKM using the bamCoverage module of Deeptools. The bedGraph file generated from Deeptools was uploaded to the UCSC genome browser for visualization.

Data analysis of RNA-seq

RNA-seq raw FASTQ files were aligned to the GRCh37/hg19 reference genome using STAR (v2.7.3a) with default parameters. The BAM files were analyzed using Cufflinks (v2.2.1) to calculate expression levels of transcripts in fragments per kilobase of exon per million fragments mapped (FPKM). The raw counts were used to identify differentially expressed genes using DESeq2 with thresholds of abs(log2FoldChange) > 1 and p value < 0.05. The volcano plot displaying differentially expressed genes was generated with ggplot2. BAM files were converted to the bedGraph format using Deeptools for visualization in the UCSC genome browser.

Data analysis of 4C

4C FASTQ raw reads were aligned to GRCh37/hg19 using Bowtie2 with default parameters. Reads per million (RPM) interaction values were calculated using the r3Cseq program (v1.38.0). The generated bedGraph files were used for visualization in the UCSC genome browser. The interaction files from r3Cseq were used to calculate the interaction differences.

Data analysis of Hi-C

Raw reads were mapped to GRCh37/hg19 to generate contact maps using HiC-Pro (v3.0.0). The produced allValidPairs file was transformed to a hic-format file using the hicpro2juicebox script from HiC-Pro for further analyses with the Juicer tools. The hic file was converted to the cool format using hic2cool (v0.8.3).
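The differential-expression thresholds stated above (|log2FoldChange| > 1 and p < 0.05) amount to a simple filter over the DESeq2 output table; a minimal sketch, with the input represented as (gene, log2 fold change, p value) tuples rather than the actual DESeq2 results object:

```python
def call_de_genes(results, lfc_cut=1.0, p_cut=0.05):
    """Flag differentially expressed genes using the thresholds in the text:
    abs(log2FoldChange) > 1 and p value < 0.05."""
    return [gene for gene, lfc, p in results
            if abs(lfc) > lfc_cut and p < p_cut]
```

A gene passes only if both conditions hold: a large fold change with a non-significant p value (or vice versa) is excluded.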
The Hi-C contact matrix was balanced using the Knight-Ruiz (KR) method and normalized to a depth of 100 million contacts. Loops with a 1D genomic distance of more than 4 kb were detected using the detect function of Chromosight (v1.6.3) with the parameters --min-dist 40,000 -p 1e-5. Aggregate peak analysis (APA) for loops was performed using cooltools (v0.5.2) and coolpup.py (v1.0.0). TAD domains were called using the HMM (hidden Markov model) method as previously described. Briefly, the sparse matrix was transformed into a dense matrix, which was used to calculate the directionality index (DI) score with a bin size of 10 kb and a window size of 2 Mb. Each bin is assumed to have a hidden state marking it as the upstream boundary of a TAD, the downstream boundary of a TAD, or not a boundary. The DI score was used to predict the hidden states of all bins using hidden Markov models. Consecutive bins with the same state were merged into regions. We filtered out regions composed of fewer than 3 bins or with a median posterior bin probability lower than 0.99. If the upstream and downstream regions of a filtered region had the same bin state, all bins of the filtered region were reassigned to that state; otherwise, all of its bins were treated as non-boundary. Finally, TADs were defined as the regions lying downstream of an upstream boundary region and upstream of a downstream boundary region. Aggregate domain analysis (ADA) for TADs was calculated using a custom Python script. The DI score was converted to bedGraph format for visualization in the UCSC genome browser. The aggregated DI values around TAD boundaries were calculated using a custom Python script. Hi-C cis-Eigenvector 1 values and Pearson's correlation matrices were computed using the eigenvector and pearson functions of Juicer, respectively, at 100-kb resolution. The cis-Eigenvector 1 values were converted to bedGraph format for visualization in the UCSC genome browser.
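The directionality index computed above can be sketched as follows. The text specifies only the bin size and window size, so this sketch assumes the commonly used Dixon et al.-style chi-square-like form (A = contacts with the upstream window, B = contacts with the downstream window, E = (A + B)/2); the paper's own script may differ in detail:

```python
def directionality_index(matrix, i, window_bins=200):
    """DI of bin i from a dense contact matrix.
    With 10 kb bins, a 2 Mb window corresponds to window_bins=200.
    DI = sign(B - A) * ((A - E)^2 + (B - E)^2) / E, E = (A + B) / 2."""
    row = matrix[i]
    a = sum(row[max(0, i - window_bins):i])      # upstream contacts
    b = sum(row[i + 1:i + 1 + window_bins])      # downstream contacts
    e = (a + b) / 2.0
    if e == 0 or a == b:
        return 0.0
    sign = 1.0 if b > a else -1.0
    return sign * (((a - e) ** 2) + ((b - e) ** 2)) / e
```

Bins interacting mostly downstream get large positive DI (typical of upstream TAD boundaries), and bins interacting mostly upstream get large negative DI.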
The density scatterplot and Pearson's R between different samples were calculated using a custom Python script. Virtual-4C profiles were generated using Fan-C at 10-kb resolution. Insulation scores were called as previously described. Briefly, the sparse matrix was transformed into a Dekker-format dense matrix. For each bin at 10 kb resolution, we calculated the mean interaction strength between the upstream 50 bins and the downstream 50 bins as the insulation score. The aggregated insulation scores around TAD boundaries were calculated using a custom Python script.

Data analysis of HiChIP

HiChIP raw data were aligned to GRCh37/hg19 to generate contact maps using the HiC-Pro pipeline. The aligned reads from HiC-Pro and the peaks from ChIP-nexus data were used to call loops with the hichipper software (v0.7.7). Loops less than 20 kb in length or with fewer than 2 paired-end tags were filtered out. Loops with both anchors overlapping ZNF143 peaks but not CTCF peaks were defined as SBS loops. Loops with both anchors overlapping CTCF peaks but not ZNF143 peaks were defined as CBS loops. Loops with one anchor overlapping a CTCF peak and the other anchor overlapping a ZNF143 peak were defined as SBS-CBS loops. Loops with both anchors overlapping promoters were defined as promoter-promoter (P-P) loops. Loops with both anchors overlapping enhancers were defined as enhancer-enhancer (E-E) loops. Loops with one anchor overlapping a promoter and the other anchor overlapping an enhancer were defined as promoter-enhancer (P-E) loops. The loop data were converted into the interacting format for visualization as arcs in the UCSC Genome Browser or the WashU Epigenome Browser. To validate the SBS-SBS, SBS-CBS, and CBS-CBS loops generated by HiChIP in Hi-C data, the HiChIP loop anchors were refined to the closest Mbo I restriction fragments and further extended by 12 fragments on both sides.
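The loop-classification rules above can be expressed as a small decision function. This is a simplified sketch: each anchor is reduced to the set of peak labels overlapping it, and anchors overlapping both a CTCF and a ZNF143 peak are not disambiguated here:

```python
def classify_loop(anchor1_peaks, anchor2_peaks):
    """Classify a loop from the peak labels overlapping its two anchors,
    following the SBS/CBS/SBS-CBS definitions in the text.
    Each argument is a set of labels, e.g. {"ZNF143"} or {"CTCF"}."""
    a, b = anchor1_peaks, anchor2_peaks
    if "ZNF143" in a and "CTCF" not in a and "ZNF143" in b and "CTCF" not in b:
        return "SBS"        # both anchors ZNF143-only
    if "CTCF" in a and "ZNF143" not in a and "CTCF" in b and "ZNF143" not in b:
        return "CBS"        # both anchors CTCF-only
    if ("CTCF" in a and "ZNF143" in b) or ("ZNF143" in a and "CTCF" in b):
        return "SBS-CBS"    # one anchor of each kind
    return "other"
```

The same pattern applies to the promoter/enhancer classes (P-P, E-E, P-E) with different labels.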
The counts of Hi-C valid pairs generated by HiC-Pro in each loop were normalized by the total valid-pair counts and then multiplied by 100 million. The p-value was calculated using a paired Student's t-test.

Quantification and statistical analysis

ChIP-nexus, HiChIP, Hi-C, RNA-seq, ChIP-seq, and 4C experiments were performed with at least two biological replicates. All statistical tests were calculated using R and Python scripts. Data are expressed as mean ± standard error of the mean (SEM) or confidence interval (CI). Statistical significance values were calculated using an unpaired Student's t-test. p ≤ 0.05 is indicated as ' ∗ '; p ≤ 0.01 or p ≤ 0.001 is indicated as ' ∗∗ ' or ' ∗∗∗ ', respectively.

Acknowledgments

This work was supported by grants from the National Natural Science Foundation of China ( 32330016 ), the National Key R&D Program of China ( 2022YFC3400200 ), and the Science and Technology Commission of Shanghai Municipality ( 21DZ2210200 ).

Author contributions

Q.W. conceived the research. M.Z. performed experiments. M.Z., J.L., and H.H. analyzed data. M.Z., H.H., and Q.W. wrote the manuscript.

Declaration of interests

The authors declare no competing interests.

Supplemental information

Supplemental information can be found online at https://doi.org/10.1016/j.celrep.2023.113663 . Document S1. Figures S1‒S8. Table S1. SBS loops identified by ZNF143 HiChIP, related to Figure 4 . Table S2. Oligonucleotides used in this study, related to Figures 1 and 4–7 . Document S2. Article plus supplemental information.
bd15c7f9243249b4b61a68a807e985e4_Viral mitochondriopathy in COVID-19_10.1016_j.redox.2025.103766.xml
|
Viral mitochondriopathy in COVID-19
|
[
"Chen, Tsung-Hsien",
"Jeng, Tien-Hsin",
"Lee, Ming-Yang",
"Wang, Hsiang-Chen",
"Tsai, Kun-Feng",
"Chou, Chu-Kuang"
] |
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which causes coronavirus disease 2019 (COVID-19), disrupts cellular mitochondria, leading to widespread chronic inflammation and multi-organ dysfunction. Viral proteins cause mitochondrial bioenergetic collapse, disrupt mitochondrial dynamics, and impair ionic homeostasis, while avoiding antiviral defenses, including mitochondrial antiviral signaling. These changes drive both acute COVID-19 and its longer-term effects, known as "long COVID". This review examines new findings on the mechanisms by which SARS-CoV-2 affects mitochondria and their impact on chronic immunity, long-term health risks, and potential treatments.
|
1 Introduction

The coronavirus disease 2019 (COVID-19) pandemic, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), had a global impact, with diverse clinical outcomes ranging from asymptomatic infection to severe respiratory failure and death [ 1 ]. Severe COVID-19 often causes pneumonia and acute respiratory distress syndrome, which can lead to sepsis, organ failure, or death [ 2 ]. The severity varies according to age, existing health conditions, and immune responses. In addition to respiratory involvement, mitochondrial dysfunction may be a central pathological feature of COVID-19 that contributes to systemic inflammation, organ failure, and long-term sequelae [ 3 ]. SARS-CoV-2, a positive-sense single-stranded RNA virus [ 4 ], produces three types of proteins: structural (spike glycoproteins, membrane proteins, and envelope proteins), nonstructural (NSPs, 16 in total) [ 5 ], and accessory proteins ( Fig. 1 ). These proteins interact with host organelles, particularly the mitochondria and endoplasmic reticulum (ER). In the ER, open reading frame (ORF)3a and ORF8 disrupt normal functions and cause stress, whereas the spike protein undergoes processing [ 6 ]. In the mitochondria, ORF9b localizes to the mitochondria-associated membrane (MAM), causing fragmentation and inhibition of immune signaling [ 7 ]. ORF3a disrupts calcium homeostasis, and the nucleocapsid protein affects mitochondrial metabolism [ 8 ]. These interactions compromise mitochondrial integrity by targeting oxidative phosphorylation (OXPHOS) complexes, inducing calcium dysregulation, and modulating mitochondrial permeability transition pores [ 9 ]. Specific viral components, including NSP6, ORF3a, and ORF9b, disrupt mitochondrial architecture and function, promoting viral replication at the expense of host cell viability [ 9 ].
Additionally, SARS-CoV-2-induced mitochondrial dysfunction drives systemic metabolic reprogramming and interferes with antiviral immune pathways [ 10 ], notably the mitochondrial antiviral signaling (MAVS) pathway [ 11 ]. Long-term impairment of mitochondrial function and chronic inflammation, which may persist for weeks or months after the acute phase, are associated with long COVID [ 12 ]. This review explores how mitochondrial perturbations contribute to the acute severity and chronic aftermath of COVID-19, including the development of long COVID. 1.1 Disruption of mitochondrial function Mitochondria regulate cellular metabolism, immune responses, and apoptotic pathways. SARS-CoV-2 infection perturbs these functions by inducing structural and molecular alterations in mitochondrial membranes, disrupting mitochondrial DNA (mtDNA) integrity, and impairing ATP production [ 13 ]. In patients with severe COVID-19, circulating mitochondrial proteins, such as cytochromes and ribosomal subunits, are markedly elevated, indicating systemic mitochondrial injury [ 14 ]. SARS-CoV-2 infects alveolar epithelial cells, causing mitochondrial dysfunction and calcium uniporter damage. Calcium signaling controls several key processes, including viral replication, protein synthesis, metabolism, and cell death. SARS-CoV-2 disrupts calcium homeostasis and thereby affects infection progression [ 15 ]. The spike protein of the virus binds to angiotensin-converting enzyme 2 (ACE2), which activates Piezo1 and Orai1 channels, leading to increased intracellular calcium levels [ 16 ]. However, high calcium levels reduce the fusion activity of the spike proteins [ 17 ]. ORF3a in A549 pulmonary epithelial cells increases intracellular calcium levels 18-fold after 48 h [ 18 ]. 
In human embryonic kidney 293T cells, ORF3a increases mitochondrial calcium via the MCUi11-sensitive uniporter and triggers nucleotide-binding oligomerization domain (NOD)-, leucine-rich repeat (LRR)-, and pyrin domain-containing protein 3 (NLRP3) inflammasome activation through mitochondrial reactive oxygen species (ROS; mtROS)-induced mtDNA release [ 19 ]. ORF3a also promotes interleukin (IL)-1β secretion through ion disruption and ROS production, activating the NLRP3 inflammasome [ 20 ]. 1.2 COVID-19 in different mitochondrial groups SARS-CoV-2 affects mitochondria in a tissue-specific manner; liver and brain mitochondria, for example, respond differently to infection because of their distinct metabolic functions and sensitivities to viral proteins and inflammation. Mitochondria exhibit structural and functional differences across organs based on tissue-specific metabolic needs [ 21 ] ( Fig. 2 ). Liver mitochondria excel in metabolic processes, such as gluconeogenesis and detoxification, requiring flexibility and calcium buffering [ 22 ]. Brain mitochondria focus on ATP production for neuronal energy needs and have stricter calcium regulation to prevent excitotoxicity, while being more vulnerable to oxidative stress [ 23 ]. Brain mitochondria in COVID-19 are highly sensitive to oxidative stress and calcium imbalance [ 24 ]. SARS-CoV-2 proteins impair mitochondrial function by disrupting ER-mitochondrial calcium signaling, increasing ROS production, and altering neuronal mitochondrial dynamics (fission/fusion) [ 24 , 25 ]. These disruptions cause neuroinflammation and neurodegeneration, potentially contributing to “long COVID” cognitive symptoms and neuronal energy failure, particularly in high-energy-demand regions, such as the cortex and hippocampus [ 25 ]. In the liver, viral infection and systemic inflammation lead to mitochondrial swelling, cristae disruption, impaired β-oxidation, reduced ATP production, and increased susceptibility to lipotoxicity and steatosis [ 26 ]. 
Cytokine storms and hypoxia can further damage hepatocyte mitochondria. The consequences range from mild to severe liver damage and changes in serum transaminases (ALT/AST), which are possibly related to metabolic dysfunction-associated fatty liver disease [ 27 ]. 1.3 Metabolic alterations SARS-CoV-2 infection increases ROS levels, alters calcium flux, and modifies mitochondrial morphology [ 28 ]. Initially, the virus enhances mitochondrial respiration to support its replication [ 10 ]. However, prolonged mitochondrial engagement leads to OXPHOS collapse [ 29 ], ATP depletion [ 30 ], and oxidative damage [ 31 ]. These disruptions impair immune resolution and promote persistent inflammation, contributing to multiorgan involvement and long COVID. SARS-CoV-2 reprograms host metabolism through coordinated disruption of mitochondrial bioenergetics and lipid flux [ 32 ] ( Fig. 3 ), orchestrating widespread metabolic reprogramming that promotes viral replication. One prominent alteration is the shift from OXPHOS to glycolysis (Warburg effect), particularly in endothelial and immune cells [ 33 ]. This shift enhances the production of inflammatory mediators and fuels viral biosynthesis [ 34 ]. Peripheral immune cells from patients with COVID-19 exhibit elevated glycolytic flux and mitochondrial dysfunction, which correlate with disease severity [ 35 ]. Increased lactate dehydrogenase (LDH) activity, reflecting anaerobic metabolism, is associated with poor prognosis [ 36 ]. Notably, lactate accumulation suppresses type I interferon (IFN-I) production, reducing the antiviral response. The virus also perturbs lipid metabolism, inducing lipid droplet biogenesis and co-opting lipid trafficking machinery to support virion assembly. Spike proteins enhance ATP production and mitochondrial respiration by increasing the expression of fatty acid transport regulators and inducing a more negative mitochondrial membrane potential [ 37 ]. 
Additionally, disruptions in amino acid metabolism, particularly in tryptophan pathways [ 38 ], alter immune signaling and neurotransmitter synthesis. These cumulative changes result in a metabolic environment that favors viral persistence and immune dysregulation [ 39 ]. Inhibiting fatty acid synthase and restoring lipid catabolism through the activation of AMP-activated protein kinase (AMPK) can impede SARS-CoV-2 replication [ 40 ]. SARS-CoV-2 proteins alter cellular metabolism by increasing the levels of pyruvate kinase muscle isoform 2. This promotes the accumulation of advanced glycation end-products (AGEs). The abnormally produced AGEs bind to their receptors (RAGEs), activating pro-inflammatory genes, such as IL-1β and IL-6 , which worsen hypoxia and induce aging [ 41 ]. In patients with severe COVID-19, CD14+CD16− monocytes activate the NLRP3 inflammasome. This leads to increased IL-1β production through caspase-1/apoptosis-associated speck-like protein containing a caspase recruitment domain (ASC) speck formation, along with abnormal mitochondrial superoxide production and lipid peroxidation [ 42 ]. 1.4 Abnormalities in mitochondrial dynamics Mitochondrial dynamics regulate mitochondrial shape and function through fusion, fission, biogenesis, and mitophagy. SARS-CoV-2 infection disrupts mitochondrial function by causing membrane depolarization and increasing ROS release [ 43 ]. This dysfunction leads to organ failure, inflammation, and mortality in patients with COVID-19 and may explain the persistent fatigue observed in long COVID [ 12 , 44 ]. SARS-CoV-2 disrupts energy production by increasing the expression of mitochondrial dynamics proteins: PINK1, Parkin, and MFN2 [ 10 , 43 , 45 ]. Patients with lung complications exhibit elevated PINK1, DNM1L, and MFN2 levels [ 46 ]. ORF3a triggers mitochondrial fission and cell death [ 47 ], while ORF9b promotes mitochondrial fusion and cell survival [ 48 ]. 
1.5 Programmed cell death and release of damage-associated molecular patterns SARS-CoV-2 infection activates mTORC1, which triggers the IRE1/JNK pathway and regulates apoptosis in distinct ways during the early and persistent infection stages [ 48 , 49 ]. Viral proteins influence cell death through several mechanisms: they increase apoptosis-inducing factor expression, activate caspase 7 [ 10 ], and alter mitochondrial processes by interacting with BOK proteins in the cell membranes [ 50 ]. The nucleocapsid protein promotes cell survival by enhancing MCL-1 anti-apoptotic activity [ 51 ], while the spike protein prevents mitochondria-driven apoptosis by binding to α7 nAChR [ 52 ]. The virus disrupts mitochondrial function by blocking mitophagy through inhibition of SQSTM1 and microtubule-associated protein 1A/1B-light chain 3 (LC3) binding [ 43 ]. ORF9b alters mitochondria and helps evade immune responses [ 48 ], whereas ORF10 triggers mitophagy by interacting with NIX (also known as Bcl2-interacting protein 3-like protein) [ 53 ]. Additionally, the spike protein impairs mitochondrial function and causes cell death by activating the mitochondrial permeability transition pore [ 54 ]. Damaged mitochondria increase ROS and pro-inflammatory cytokines, releasing damage-associated molecular patterns (DAMPs) into the cytoplasm, which triggers cell death and inflammation. This process leads to widespread damage, including oxidative stress, hyperferritinemia, and thrombosis. When released by damaged cells, DAMPs bind to pattern recognition receptors on immune cells [ 55 ], activating inflammatory pathways through Toll-like receptors (TLRs), RIG-I-like receptors, and NOD-like receptors. This binding activates NF-κB and inflammasome pathways, which release pro-inflammatory cytokines [ 55 ]. SARS-CoV-2 infection causes cell death, releasing DAMPs, such as HMGB1 and heat shock proteins [ 56 ], while viral RNA and host DNA serve as additional inflammatory triggers. 
NSP4 and ORF9b damage the mitochondria by forming membrane pores and releasing mtDNA-containing vesicles [ 57 ]. NSP4 creates these pores through interactions with BAK, whereas ORF9b inhibits MCL-1 (an anti-apoptotic member of the BCL2 protein family), leading to mtDNA release [ 57 ]. ORF9b supports viral replication and helps the virus evade immune responses, whereas circulating mtDNA influences COVID-19 severity [ 58 , 59 ]. Nucleocapsid proteins interfere with DNA recognition and IFN-I signaling by disrupting cGAS [ 60 ]. These combined effects impair mitochondrial function, resulting in increased inflammation and excessive responsiveness [ 61 ]. 1.6 MAVS pathway interference Mitochondria serve as critical hubs for innate antiviral defense, particularly through the MAVS pathway [ 62 ]. Upon detection of viral RNA by RIG-I-like receptors (RLRs), MAVS aggregates on the outer mitochondrial membrane, triggering TANK-binding kinase 1 (TBK1) and interferon regulatory factor 3 (IRF3) activation, and IFN-I and IFN-III production ( Fig. 4 ). 1.7 MAVS pathway MAVS plays a crucial role in RLR signaling within the mitochondria, peroxisomes, and MAM. This protein detects viral RNA and activates pattern recognition receptors. When RIG-I binds to viral RNA, it triggers MAVS aggregation and activation with the assistance of Riplet, a K63-linked E3 ligase that mediates RIG-I polyubiquitination. MAVS then initiates IFN production by activating signaling pathways involving TBK1, a nucleic acid-sensing kinase; IRF3; and inhibitor of NF-κB (IκB) kinase (IKK)–NF-κB ( Fig. 4 ). Tumor necrosis factor (TNF) receptor-associated factor 3 (TRAF3)-interacting protein 3 (TRAF3IP3) accumulates in the mitochondria, promotes TRAF3 recruitment, and facilitates MAVS recruitment, which activates TBK1-IRF3 signaling. TRAF3IP3 serves as a crucial link in RIG-I-MAVS signaling and enhances the antiviral response. 
In the MAM, MAVS interacts with glutamine-fructose-6-phosphate aminotransferase, the first rate-limiting enzyme in the hexosamine biosynthetic pathway. MAVS signaling complexes form at the MAM, recruit TRAF6 and TRAF2, and link RLR signaling to glucose metabolism [ 63 ]. Specific regions of MAVS recruit TRAF proteins for downstream signaling and activate the transcription factors IRF3 and NF-κB. These factors trigger the production of IFN-I (including IFN-α and IFN-β), IFN-III (including IFN-λ), and other antiviral cytokines. This process establishes an antiviral state, strengthens the immune response against infections, and prevents excessive inflammation [ 64 ]. Finally, MAVS can directly interact with LC3 through its LC3-binding motif, thereby maintaining mitochondrial homeostasis through mitophagy. 1.8 SARS-CoV-2 disrupts MAVS SARS-CoV-2 modifies mitochondrial dynamics and targets MAVS to evade host immunity and facilitate replication [ 65 ]. Mitochondria-localized ORF9b enhances the interaction between poly (rC)-binding protein 2 (PCBP2) and the HECT domain E3 ligase AIP4, thereby inhibiting the MAVS signaling pathway. The SARS-CoV-2 membrane protein prevents MAVS protein aggregation, whereas ORF6 disrupts MAVS activation [ 61 ]. Additionally, ORF3c binds to MAVS and phosphoglycerate mutase 5 (PGAM5), triggering MAVS cleavage via caspase-3 [ 66 ]. These combined effects promote viral replication and impair mitochondrial function [ 67 ]. SARS-CoV-2 infection induces elevated inflammatory cytokine levels and, paradoxically, results in very low levels of IFN-I and IFN-III [ 68 ]. Several viral proteins, including ORF3, ORF3b, ORF6, ORF7a, ORF7b, ORF8, and ORF9b, inhibit IFN responses to varying degrees [ 69 ]. ORF3c, which localizes to mitochondria, suppresses innate immunity by limiting IFN-β production without affecting NF-κB activation or Janus kinase (JAK)-STAT signaling. 
ORF7a disrupts IFN-I responses, manipulates the host ubiquitin system to counteract IFN-I responses [ 70 ], reduces antigen presentation, and triggers pro-inflammatory cytokine responses [ 71 ]. ORF8 binds to MHC-I, activates the IL-17 pathway, and promotes pro-inflammatory factors [ 72 , 73 ]. ORF10 binds to NIX, mediates mitophagy-driven degradation of MAVS, and inhibits the IFN-I signaling pathway [ 53 ]. Both ORF7a and ORF9b, located in the mitochondria, inhibit RIG-I-MAVS-dependent signaling [ 12 , 74 ]. ORF9b further disrupts MAVS and interferon signaling by promoting the degradation of MAVS, TRAF3, and TRAF6 [ 75 ]. SARS-CoV-2 thus disrupts MAVS signaling through several mechanisms: ORF9b causes MAVS breakdown via PCBP2-AIP4, whereas ORF3c and ORF10 alter its localization and affect mitophagy. These mechanisms suppress IFN-I responses and help the virus evade detection. The viral proteins ORF6, ORF7a, and ORF8 evade immune responses by blocking interferon production and antigen presentation. 1.9 ORF9b disrupts the HSP90-TOM70 interaction: impairment of mitochondrial antiviral response and metabolism ORF9b plays multiple roles in viral replication and immune system evasion [ 8 ]. It disrupts MAVS, inhibits IFN production, and promotes mitochondrial fusion, thereby preventing apoptosis. Translocase of the outer mitochondrial membrane 70 (TOM70) is essential for mitochondrial energy metabolism, and its dysfunction can lead to lactic acidosis. ORF9b targets TOM70 specifically to suppress the IFN-I response [ 75 ]. This interaction is regulated by ORF9b phosphorylation at serine residue 53, which interacts with the TOM70 glutamate at residue 477 in infected cells [ 76 ]. By binding to the C-terminus of TOM70 (residues 235–608), ORF9b impairs MAVS function [ 6 , 75 ] and triggers both IFN-I inhibition and lactate production. HSP90 interacts with TOM70 and is crucial for TOM70-mediated IFN-I activation; ORF9b competes with HSP90 for TOM70 binding. 
When ORF9b occupies the C-terminal domain of TOM70, it reduces the Glu-Glu-Val-Asp motif-binding affinity of HSP90 for the N-terminal domain of TOM70 by approximately 29-fold [ 77 ]. ORF9b thus blocks the interaction between HSP90 and TOM70, a key component of IFN signaling and mitochondrial balance through the MAVS pathway. This disruption weakens early immune responses, which may explain the prolonged viral presence and inflammation observed in long COVID. 1.10 Mitochondrial damage induces inflammation SARS-CoV-2 induces mitochondrial dysfunction and oxidative stress [ 78 ]. In severe cases, elevated mtROS levels and NADPH oxidase activity impair immunity and cause inflammation [ 79 ]. The virus increases ROS levels via inflammatory pathways, leading to cytokine storms [ 80 ]. Inflammatory cytokines, such as TNF-α and IL-6, boost mtROS [ 81 ], while increasing viral loads decrease CD14+ monocytes [ 82 ]. The virus impairs antioxidant responses via hypoxia-inducible factor (HIF)-1α [ 83 ]. The initial infection causes cell death and endothelial damage, triggering inflammation that can lead to organ failure. Severe cases show neutrophil activation and excess pro-inflammatory cytokines that disrupt mitochondrial function [ 84 ]. 1.11 Persistent inflammatory response Residual viral particles or RNA can trigger persistent immune responses or symptoms, leading to long-term adverse effects. Mitochondrial dysfunction may cause chronic inflammation, potentially spreading to multiple organs, thus increasing COVID-19 severity. In patients with severe COVID-19, HIF-1α regulates glycogen phosphorylase L in neutrophils, enhancing glycolysis and glycogen accumulation. This results in higher concentrations of plasma neutrophil extracellular trap (NET) markers, including myeloperoxidase (MPO), elastase, and MPO-DNA complexes [ 85 ]. 
Key neutrophil function regulators, such as CCL2, CXCL10, CCL20, IL-18, IL-3, IL-6, granulocyte colony-stimulating factor (CSF), granulocyte-macrophage CSF (GM-CSF), TNF-α, and IFN-γ, are significantly increased in the plasma of patients with severe COVID-19 [ 46 , 85 ]. A moderate correlation exists between RAGE and TNF-α levels, as well as between DNM1L and IFN-α levels [ 46 ]. These alterations promote the inflammatory response. Moreover, GM-CSF and prostaglandin E2 levels are elevated, while IFN-α is decreased in these patients [ 86 ]. SARS-CoV-2 activates TLRs and the NLRP3 inflammasome, triggering the production of various inflammatory molecules, including IL-6, IL-1β, IL-18, IL-8, IL-10, TNF-α, and LDH [ 83 , 87 ]. This activation leads to significant cellular changes, such as membrane rupture, cytoplasmic swelling, and chromatin disorganization [ 87 ]. Nicotine exacerbates these effects by increasing inflammatory cytokine levels, causing severe cell damage [ 87 ]. In SARS-CoV-2-infected endothelial cells expressing ACE2 and TMPRSS2, infection results in mitochondrial dysfunction, TLR9 activation, reduced endothelial nitric oxide synthase (eNOS) activity, and inhibition of calcium responses. Notably, blocking TLR9 signaling reduces IL-6 release and prevents eNOS decline, implicating TLR9 activation in the severity of COVID-19 symptoms [ 88 ]. 1.12 Additional factors influencing the inflammatory response COVID-19 mortality rates are higher in men than in women [ 89 ]. Women exhibit stronger immune responses [ 90 ] because of differences in sex chromosomes and hormones, such as estrogen, progesterone, and testosterone [ 91 ]. Additionally, estrogen enhances antioxidant enzymes, such as manganese superoxide dismutase and glutathione (GSH) peroxidase, which play crucial roles in mitigating ROS-induced damage [ 92 ]. Mitochondria also play a role in sex-based immune variations, which are crucial for effective immune responses [ 93 ]. 
Severe respiratory infections are the primary cause of death, particularly among elderly patients. This increased mortality largely stems from the hypoxic and hyperinflammatory states linked to COVID-19-related sepsis [ 94 ]. Melatonin enhances both humoral and cell-mediated immune responses by promoting the production of inflammatory cytokines and Type 1 T helper (Th1) cells. It shows antiviral activity by inhibiting inflammatory mediators, such as IL-6, IL-1β, and TNF-α, which are released during severe COVID-19 lung injury [ 95 ]. By inducing the expression of the circadian gene Bmal1 and inhibiting the pyruvate dehydrogenase complex (PDC), melatonin counteracts the viral inhibition of Bmal1/PDC. PDC helps convert pyruvate to acetyl-CoA in the mitochondria, supporting the tricarboxylic acid cycle, OXPHOS, and ATP production. These alterations increase intestinal permeability and dysbiosis, suppress short-chain fatty acids, such as butyrate, and increase circulating lipopolysaccharide (LPS) levels. Pineal-produced melatonin inhibits and mitigates these effects, preventing a “reset” of the circadian rhythm in mitochondrial metabolism [ 96 ]. Disruptions in butyrate and LPS levels may boost viral replication and worsen host symptoms by interfering with the melatonin pathway [ 96 ]. 1.13 Limitations and future directions SARS-CoV-2 proteins (such as ORF9b, NSP4, and membrane proteins) localize to mitochondria and disrupt their function [ 57 , 61 ]; however, the mechanism by which these changes cause chronic inflammation is unclear. The current knowledge mainly comes from laboratory studies and samples from acute cases. We lack long-term patient data showing how mitochondrial problems during infection relate to ongoing inflammation, particularly in long COVID [ 12 , 44 ]. Additionally, standard research models, such as lab-grown cells or mice, may not accurately represent human mitochondria and immune systems. 
Further research is required to determine mitochondrial responses in the brain, heart, and lungs. mtDNA damage and disease severity are associated [ 13 ], though a clear cause-and-effect relationship in humans remains to be established. Mitochondrial responses vary based on individual factors, such as age, sex, and underlying health conditions. The influence of pre-existing conditions, such as metabolic syndrome or mitochondrial diseases, on viral infection and inflammation remains unclear. The interplay between mitochondria and other cellular processes (including autophagy, ER stress, and inflammasome activation, such as NLRP3) during SARS-CoV-2 infection also remains unclear [ 20 ]. Additionally, whether mitochondrial changes continue after the initial infection, such as in long COVID, is unknown. Future studies should clarify the causal link between mitochondrial injury and long COVID, validate targeted interventions in clinical settings, and explore individual variations in mitochondrial responses to infection. A deeper understanding of these pathways may reveal precise therapies for both COVID-19 and other mitochondria-related diseases. Targeting mitochondrial pathways offers a promising avenue for reducing excessive inflammation by improving mitochondrial balance. Several strategies may help restore metabolic equilibrium and reduce long-term complications: mitochondria-targeting antioxidants [such as mitoquinone (MitoQ) and visomitin (SkQ1)] [ 97 , 98 ], compounds that trigger mitophagy (such as PINK1-Parkin pathway activators), metabolic regulators (such as AMPK activators and PPAR agonists) [ 99 ], and MAVS pathway stabilizers. Although these mitochondria-focused treatments are promising, more evidence is required to confirm their safety and effectiveness during infection. 2 Conclusion Mitochondrial dysfunction plays a key role in SARS-CoV-2 pathogenesis, connecting the fields of virology, immunology, and metabolism. 
The virus hijacks host mitochondria, using them to boost replication, while disrupting immune responses and causing lasting cellular damage. This dysfunction contributes to the development of severe, acute COVID-19 symptoms and long-term complications. Several promising treatments, such as antioxidants, mitophagy modulators, and MAVS stabilizers, target mitochondrial pathways. However, further research is needed to confirm the effectiveness of these treatments and understand how mitochondrial damage affects post-COVID conditions. CRediT authorship contribution statement Tsung-Hsien Chen: Writing – review & editing, Writing – original draft, Supervision, Project administration, Formal analysis. Tien-Hsin Jeng: Writing – review & editing, Writing – original draft, Formal analysis. Ming-Yang Lee: Writing – review & editing. Hsiang-Chen Wang: Writing – review & editing. Kun-Feng Tsai: Writing – review & editing, Validation, Supervision, Project administration. Chu-Kuang Chou: Writing – review & editing, Validation, Supervision, Project administration. Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
[
"BERLIN",
"XIA",
"CAO",
"KESHEH",
"GORDON",
"FU",
"SINGH",
"RAMACHANDRAN",
"ARCHER",
"FRASER",
"CHEN",
"SHIN",
"CHEN",
"SULTAN",
"YANG",
"SINGH",
"SCHLEISS",
"GUARNIERI",
"AMBROZEKLATECKA",
"MONZEL",
"MORIO",
"MOSHAROV",
"HINGORANI",
"WANG",
"AKBARI",
"WU",
"PRASADAKABEKKODU",
"MILLER",
"MANDO",
"YU",
"CHEN",
"ZEKRINECHAR",
"ICARD",
"AJAZ",
"GUPTA",
"HUYNH",
"ESSEX",
"YUE",
"TANNER",
"ALLEN",
"LAGE",
"SHANG",
"ALALY",
"MOTTA",
"SIEKACZ",
"JIAO",
"MADEDDU",
"MULLEN",
"YANG",
"PAN",
"KALASHNYK",
"LI",
"PILEGGI",
"TIAN",
"LUBKOWSKA",
"FAIZAN",
"MAHMOODPOOR",
"VALDESAGUAYO",
"CAI",
"BHOWAL",
"SAXENA",
"HE",
"REFOLO",
"MEHRZADI",
"STEWART",
"HARTSELL",
"BLANCOMELO",
"STUKALOV",
"CAO",
"ZHOU",
"ZHANG",
"LIN",
"WU",
"JIANG",
"BRANDHERM",
"GAO",
"DELACRUZENRIQUEZ",
"MORRIS",
"FAJGENBAUM",
"DELVALLE",
"ROMAO",
"ANWAR",
"SALEH",
"BORELLA",
"LEAVIS",
"SANSONE",
"COSTA",
"CONTI",
"GHOSH",
"VEMURI",
"REGITZZAGROSEK",
"KLOC",
"SHENOY",
"MUBASHSHIR",
"ANDERSON",
"LIU",
"MAO",
"MARINO"
] |
356bdf47099f4a649d1481ba0a061161_Effect of potatoes as part of the DASH diet on blood pressure in individuals with and without type 2_10.1016_j.hnm.2023.200225.xml
|
Effect of potatoes as part of the DASH diet on blood pressure in individuals with and without type 2 diabetes: A randomized controlled trial
|
[
"Galyean, Shannon",
"Sawant, Dhanashree",
"Childress, Allison",
"Alcorn, Michelle",
"Dawson, John A."
] |
This randomized controlled trial evaluated different cooking methods of potatoes as part of the DASH diet on blood pressure (BP) and anthropometrics in people with and without type 2 diabetes (T2D). Participants were randomized into DASH-FP (fried potatoes), DASH-NFP (non-fried potatoes) or DASH-NP (no potatoes) groups. BP, weight, waist circumference and body composition were measured.
Change outcomes from baseline to 6 weeks showed no significant differences in the study outcomes, including diastolic BP (p = 0.12), systolic BP (p = 0.26), body weight (p = 0.11), waist circumference (p = 0.86) and body composition (p = 0.57) within study groups. A significant group-by-T2D status interaction was found for waist circumference (p = 0.036). Results from pairwise comparisons between the groups for all outcomes were not significant; however, a positive trend was seen in the DASH-NFP and DASH-FP diet groups in BP and anthropometrics.
Individuals with and without T2D that consumed potatoes and the DASH diet did not significantly change BP and anthropometrics by six weeks. Slight improvements in BP and anthropometrics were seen in non-fried and fried potato groups. This helps future investigations of popular foods for people with chronic conditions that can be incorporated in a healthy eating pattern.
ClinicalTrials.gov ID: NCT05589467; 9/16/2022.
|
1 Introduction People with type 2 diabetes (T2D) have a high prevalence of hypertension (HTN), and T2D is more common in individuals who have HTN [ 1 ]. Obesity, especially central/visceral obesity, is linked to HTN and is a predisposing factor for the development of T2D [ 2 ]. Because obesity and HTN are significant risk factors for T2D, the conditions often coexist. Evidence shows that the combination of these chronic conditions increases long-term complications [ 3–5 ]. A better understanding of the interaction of HTN and T2D on health parameters such as blood pressure (BP) will help develop the best treatment strategies for chronic conditions. Diet is the single most effective approach for managing chronic conditions [ 6–8 ]. The Dietary Approaches to Stop Hypertension (DASH) diet has been shown to improve HTN and T2D and has significantly improved BP in adults with and without HTN [ 8 , 9 ]. The recommendations for HTN management include lifestyle modifications with a healthy diet as part of that strategy [ 10 , 11 ]. The DASH diet is a “combination” diet rich in fruits, vegetables and low-fat dairy products, which has been used to lower BP in nonhypertensive and hypertensive individuals [ 12 ]. The challenge is developing and implementing compatible meal patterns for individuals that lead to sustained improvement in diet quality. Nutrients such as potassium, found in fruits and vegetables, are directly linked to BP reduction [ 13 ]. White potatoes are an excellent potassium source, one of the designated nutrients of concern in the 2020 Dietary Guidelines for Americans [ 14 ]. Popular processed potato products are often high in fat and sodium and are a significant source of carbohydrates [ 15 ]. Although potatoes have a high glycemic index, they are not, contrary to past claims, detrimental to cardiac health. In fact, white potatoes have more potassium per serving than any other vegetable. 
Potatoes, a popular food, combined with a healthy eating pattern such as the DASH diet, could contribute to adherence to a quality diet that improves health. Therefore, the purpose of this study was to test the hypotheses that persons with and without T2D, who consume potatoes as part of the DASH diet, will show a greater reduction in BP and have greater improvement in anthropometric measurements than those who do not consume potatoes by the end of the six weeks study period. Considering the combined interaction, which occurs in T2D and HTN, it is speculated that the response of participants with T2D to the intervention may differ from participants without T2D. Thus, the primary objective of the present study was to examine the effects of potatoes, as part of the DASH diet, on BP and anthropometric measurements in participants with and without T2D. A secondary objective was to determine whether the response of groups with and without T2D to the treatments was similar. 2 Methods 2.1 Participants Participants were recruited starting October 2019 to August 2022 from the surrounding community by advertisements in local newspapers, social media, in private and public medical clinics, and community events and divided into strata based on diabetes status to evaluate the effectiveness of potatoes as part of a healthy eating pattern (DASH diet) on BP and anthropometric measurements. The inclusion criteria required all participants to be between 18 and 65 years old and have controlled T2D, as defined as a HgA1c level ≤8%, which was managed by diet and exercise alone or any diabetes medication other than insulin. In addition, any participant with HTN had to be well-controlled as defined as systolic blood pressure (SBP) < 140 mm Hg and diastolic blood pressure (DBP) < 90 mm Hg, which was managed by not more than one anti-hypertensive medication. 
Twelve participants (6 participants with T2D and 6 participants without T2D) were on a blood pressure-lowering medication before starting the study and were allowed to continue their drug regimen throughout the study period. Exclusion criteria included the use of tobacco; self-reported history of hepatic or renal diseases; evidence of severe diabetic complications (such as proliferative retinopathy or diabetic nephropathy); use of oral steroids, hormone replacement therapy; individuals with blood pressure ≥160/100, or HbA1c ≥ 8%; allergy to potatoes; individuals already following other types of diets (for example, low carb diet) and not willing to discontinue them, pregnant or lactating; alcohol or drug dependence. The study was approved by The Texas Tech University Institutional Review Board IRB2019-880, and all participants gave written informed consent prior to participation. 2.2 Study design This experimental intervention study had a study duration of six weeks and data were collected at Texas Tech University Nutrition and Metabolic Health Initiative in Lubbock, TX. At baseline, participants in each group were counseled to follow the DASH diet for two weeks. At two weeks, using simple randomization, participants in each group were assigned to either the DASH-FP (fried potatoes), DASH-NFP (non-fried potatoes) or the control DASH-NP (no potatoes) group by the lead PI and research assistant. Next, participants were instructed to follow the assigned diet for four weeks. BP and anthropometric data were collected at baseline, at two weeks and post-intervention at six weeks. The information collected was analyzed and used to compare pre and post-intervention data. 2.3 Anthropometric and body composition measurement Height was measured to the nearest 0.1 cm using a stadiometer, and weight was measured to the nearest 0.1 pound (lb). 
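The simple randomization described above (each participant assigned independently to one of the three diet groups at the two-week visit) can be sketched as follows; the function name, group labels, and fixed seed are illustrative assumptions, not details from the study protocol.

```python
import random

# Hypothetical sketch of simple (unrestricted) randomization into the
# three diet groups used in the trial; labels are illustrative.
GROUPS = ("DASH-FP", "DASH-NFP", "DASH-NP")

def randomize(participant_ids, groups=GROUPS, seed=None):
    """Assign each participant independently to one diet group."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    return {pid: rng.choice(groups) for pid in participant_ids}

# Example: 30 participants with a fixed seed.
assignments = randomize(range(1, 31), seed=2022)
```

Because each assignment is an independent draw, group sizes are not guaranteed to be equal; block or stratified randomization would be needed to balance groups within the T2D strata.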
Body composition (fat and lean body mass) was measured by bioelectrical impedance using a Tanita scale (model SC-331S, Tanita, Tokyo, Japan), a valid and non-invasive tool used in clinical studies [ 16 ]. Before this analysis, participants were asked to confirm that they had fasted for at least 2 h before the visit, had not exercised in the past 12 h, and had not consumed alcohol within the past 24 h. Before testing, they were asked to remove any metal jewelry, watches, keys, and shoes. Waist circumference (WC) was measured with a certified non-stretch tape measure. 2.4 Dietary intake data Dietary intake was recorded using the Automated Self-Administered 24-h dietary recall (ASA24), a self-administered, web-based tool developed by the National Cancer Institute. Participants were trained to record dietary intake using this tool during their visits and were asked to enter three 24-h diet recalls every week for six weeks [ 17 ]. 2.5 Blood pressure measurement BP was measured with a digital blood pressure monitor using an arm cuff suitable for the body size. Three readings were taken at 5-min intervals, and the average of the three measurements was calculated. All the measuring tools and methods described above were used at baseline, at two weeks, and on completion of the intervention at six weeks in all groups. 2.6 Dietary intervention and counseling All participants were randomized to one of the assigned dietary patterns: the DASH diet with no potatoes as a control diet (CN), the DASH diet with only pan-fried potatoes (FP), or the DASH diet with only non-fried potatoes (NFP). The control diet was the standard DASH diet but without potatoes. FP group participants were counseled to include five servings of unpeeled pan-fried potatoes each week. NFP group participants were counseled to include five servings of unpeeled non-fried potatoes, such as baked, grilled, or boiled potatoes, each week. 
A 7-day menu cycle from the DASH-Sodium study [ 18 ] for each dietary pattern at different energy levels according to the caloric requirements of each participant was used as the basis for the recommended diets [ 19 ]. Participants in the intervention groups were given a bag of russet potatoes at the second visit and were advised on the recommended portion sizes and cooking methods for including them in their diet for four weeks. 2.7 Cooking video demonstrations During the second study visit, participants were instructed on the DASH diet and different cooking methods and were asked to independently maintain the broad requirements of the DASH diet. In addition, pre-recorded cooking demonstration videos were shared with the participants using YouTube links to promote a better understanding of the cooking methods and DASH diet recipe inclusion. A culinary educator developed these videos at the department of Hospitality and Retail Management at the College of Human Sciences, Texas Tech University. The demonstrations provided detailed cooking instructions, serving sizes, and amounts of ingredients to be used. 2.8 Statistical data analysis For each of the primary outcomes, change scores from pre-to post-intervention were calculated for all participants. Linear models were used to determine whether these change scores significantly differ from zero (within-group comparisons) or significantly differ by diet group (head-to-head-to-head or any-potato versus no-potato), adjusting for important covariates such as age, sex, and BMI. The underlying assumptions of these tests (e.g., Normality, homoscedasticity) were checked using the usual diagnostics, and any serious violations were ameliorated either by Box-Cox transformation or robust regression protocols. The Bonferroni multiple comparisons adjustment procedure was used to determine which differences, if any, exhibit more than nominal statistical significance. 
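To make the analysis in Section 2.8 concrete, the sketch below illustrates the general shape of the described approach: change scores regressed on diet group with covariate adjustment, followed by a Bonferroni correction for pairwise comparisons. All data values here are simulated for illustration only and are not from the study; the regression is a plain least-squares fit rather than the authors' exact modeling pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated participants: three diet groups, two covariates, and a
# pre-to-post change score (e.g., change in SBP, mm Hg). Illustrative only.
n_per_group = 8
groups = np.repeat(["CN", "FP", "NFP"], n_per_group)
age = rng.uniform(18, 65, size=groups.size)
bmi = rng.uniform(22, 35, size=groups.size)
change = rng.normal(-2.0, 3.0, size=groups.size)

# Design matrix: intercept, two group dummies (CN = reference), covariates.
X = np.column_stack([
    np.ones(groups.size),
    (groups == "FP").astype(float),
    (groups == "NFP").astype(float),
    age,
    bmi,
])
beta, *_ = np.linalg.lstsq(X, change, rcond=None)

def bonferroni(p_values):
    """Bonferroni adjustment: multiply each p-value by the number of tests."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Illustrative raw p-values for three pairwise comparisons. Note that a
# nominally significant p = 0.036 no longer clears 0.05 after adjustment.
raw_p = [0.036, 0.21, 0.48]
adj_p = bonferroni(raw_p)
```

This also shows why the paper distinguishes "nominal" significance from significance after multiple-comparisons adjustment.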
As the DASH diet was not particularly strict and did not require any "special" foods, all participants showed good adherence. Recruitment yielded six to twelve participants in each group, which informed the power calculations. With these participant numbers, we were 80% powered to detect effect sizes exceeding 0.9 in magnitude, which is moderately large and not an unreasonable assumption given the literature in this area. Findings needed to be somewhat stronger to survive adjustment for multiple comparisons: effect sizes of at least 1.4 for head-to-head-to-head comparisons or at least 1.0 for any-potato versus no-potato comparisons. 3 Results and discussion 3.1 Subject characteristics The baseline characteristics of study participants are shown in Table 1 . Thirteen participants were in the T2D group (7 male and 6 female, 47.1 ± 10.0 years), and 13 were in the Non-T2D group (3 male and 10 female, 44.6 ± 16.3 years). There were no differences among the groups except an unexpectedly significantly lower DBP in the T2D group when compared to the Non-T2D group. 3.2 Anthropometric, body composition, and blood pressure measurements in response to intervention The mean changes in the study outcome measurements from baseline to 6 weeks are shown in Table 2 . Body weight measurements were not significantly different between groups at baseline. There was no evidence of a group × T2D status interaction for weight change. Additionally, there was no evidence of an HTN status interaction on post-treatment weight. Similarly, there were no significant changes in body weight across groups or treatments. WC measurements were not significantly different between groups at baseline. There was no evidence of a group × HTN status interaction for WC change. Similarly, there were no significant changes in WC across groups or treatments. However, a nominally significant (p = 0.036) group × T2D status interaction was noted on post-treatment WC values. Non-T2D mean WC changes were as follows: CN -0.38; FP -0.92; NFP -2.13. 
T2D group mean changes were as follows: CN -2.04; FP -0.083; NFP +2.42. Participants on the CN diet had a larger WC change if they had T2D. In contrast, participants on FP or NFP had a larger WC change if they did not have T2D, with the biggest driver of the interaction being the discrepancy between the Non-T2D and T2D groups on the NFP diet. Percent body fat (BF) measurements at baseline were not significantly different between groups. In addition, there were no significant changes in % BF across groups or treatments. Moreover, there was no evidence of group × T2D status or HTN status interactions for % BF. Average SBP measurements at baseline were not significantly different between groups. Also, there were no significant changes in average SBP across groups or treatments. Likewise, there was no evidence of group × T2D status or HTN status interactions for average SBP. Average DBP measurements in the T2D group were significantly lower than in the Non-T2D group at baseline. However, there were no significant changes in average DBP across groups or treatments. Furthermore, there was no evidence of group × T2D status or HTN status interactions for average DBP. In this study, overall anthropometric and BP changes were favorable in participants both with and without diabetes. There was evidence of a significant group × T2D status interaction on post-treatment WC values; the T2D participants had higher mean WC values than the Non-T2D participants. Potato consumption as part of the DASH diet led to decreases in body weight, % BF, WC, SBP, and DBP in T2D and Non-T2D participants. Although this overall trend in anthropometrics and BP in the participants who consumed potatoes is favorable, it was not statistically different from the values in the participants who followed the CN diet. These effects may reflect the fact that the current study was a diet-only intervention [ 20 ] that did not address the multiple health behavior changes that could have improved the measured outcomes [ 21 ]. 
In general, the results of this study agree well with those of earlier studies of various designs using the DASH diet, which showed reductions in anthropometrics and BP [ 22–26 ]. The intervention included both a fried-potato group (pan-fried potatoes with skin in no more than 75 ml of canola oil) and a non-fried group (cooking methods other than frying, including baking, roasting, grilling, or boiling potatoes with skin in no more than 1–2 tsp of canola oil); potatoes were added five days a week to the controlled diet during the intervention period to determine these treatment effects among people with and without T2D. Although Pokharel et al. [ 27 ] reported that plain potatoes have little effect on body weight and diabetes risk, they did find that potatoes such as fries, mashed potatoes cooked with butter and other ingredients, and potato chips increase the risk of diabetes. The effect of combining potatoes cooked in a controlled amount of healthy fat with the DASH diet in this study differed from that seen in previous studies. Moreover, previous studies suggesting that potato intake increases risk among people with diabetes included potatoes that were processed, high in fat, and served without the skin [ 28–30 ]. The degree of improvement in anthropometrics and BP might be expected to differ because the current study required the subjects to eat the potato skins, provided portion sizes for fat, and used whole potatoes, all of which make a difference when including potatoes in the diet of people with and without diabetes. The decrease in SBP from baseline was seen only in the NFP group for the T2D participants after six weeks. The same was true for DBP: a decrease from baseline was seen only in the NFP group for the T2D participants after six weeks. On the other hand, a decrease from baseline in SBP and DBP was seen in all diet groups for the Non-T2D participants after six weeks. 
This result, which was not consistent with those of other studies that examined the effects of potatoes on BP, was perhaps due to features of those studies' designs, such as combining potatoes with other carbohydrate-based foods [ 31 ] or analyzing only total potato consumption, both fried and non-fried [ 32 ]. The Borch et al. [ 33 ] study did not show evidence of an association between intake of potatoes and risks of obesity, T2D, or cardiovascular disease, but found that french fries may be associated with increased risks of obesity and T2D. The present study showed that only the NFP group among the T2D participants, and not the FP group, had a decrease in SBP and DBP. This study also examined the effects of fried and non-fried potatoes, with healthy preparation and eating pattern recommendations, on anthropometrics. There was an overall improvement in body weight, WC, and % BF. These results disagree with the recent work of Baygi et al., which showed that daily potato consumption was significantly associated with higher anthropometric measures in a cross-sectional study of children and adolescents. However, that study did not differentiate between potato cooking methods, and the authors discussed weight gain being associated with an increased daily intake of potato chips, potato, sugar-sweetened beverages, and processed meat, as compared with those who increased their daily intake of vegetables, whole grains, fruit, yogurt, and nuts [ 34 ]. The current study also had different results than Moholdt et al., who looked specifically at boiled potato intake and its effects on anthropometrics and BP and found that people who consumed boiled potatoes more than four times per week had a slightly higher mean BMI and waist circumference [ 35 ]. Although those authors specified the preparation method of the potatoes, the overall eating pattern of participants was not analyzed, and eating pattern is a significant factor that can also affect anthropometrics. 
Incorporating a popular food like potatoes as part of a healthy eating pattern could help with long-term adherence to improve anthropometrics and BP. The current study demonstrated results similar to those of Agarwal et al., who investigated adolescents' eating patterns and suggested that encouraging potato consumption, preferably without a lot of extra fat/sodium, may be an effective strategy for improving intakes and adequacy of vegetables and certain nutrients and achieving a healthier dietary pattern [ 36 ]. However, there are inconsistencies among food patterns that include potatoes when their diet quality and associations with biomarkers are compared with food patterns without potatoes [ 37 ]. Both significant increases and significant decreases in diet quality scores have been found between dietary patterns with potatoes and those without. The difference in many of these studies is related to the foods clustered with potato consumption, i.e., potato chips, refined grains and/or added sugars, including sugar-sweetened beverages, burgers, meats, etc. [ 38 ]. The current study is in agreement with the previous studies that differentiate between the forms of potatoes consumed and that attribute diet quality to the presence or absence of other food categories in the dietary patterns. 3.3 Diet treatment comparisons among T2D and Non-T2D A pairwise comparison was conducted to note trends for diet treatments among groups on the anthropometric and BP outcome measurements. See Fig. 1 . The FP treatment seemed to produce the least improvement in outcome measures for the T2D group when comparing diet treatments among the T2D and Non-T2D groups. The T2D group had the most improved outcomes overall (i.e., body weight, SBP, and DBP measurements) when following the NFP diet treatment, compared to the CN and FP treatments. The Non-T2D group had the most improvement in outcomes overall when following either the NFP or FP diet treatment, compared to the CN treatment. 
Although not significant, there seems to be a trend toward greater improvements in the measured outcomes in the NFP groups, both in people with and without diabetes. However, the FP group also showed improvements among people without T2D. These results could imply that both types of potatoes, fried and non-fried, could be included in a healthy diet pattern, such as DASH, for people without diabetes, and that non-fried potatoes could be included as part of a healthy diet pattern, such as DASH, for people with T2D to help control weight, WC, % BF, and BP. One of the objectives of this study was to examine differences between people with and without T2D in anthropometrics and BP when consuming potatoes as part of the DASH diet. The current study is in agreement with Devlin et al., who compared consumption of boiled, roasted, or boiled-then-cooled potato-based meals among people with T2D and found they were not associated with unfavorable postprandial glucose responses or nocturnal glycemic control and can be considered suitable for individuals with T2D when consumed as part of a mixed evening meal [ 39 ]. However, the authors of that study did not include individuals without T2D. An 8-year longitudinal cohort study found that fried potato consumption more than doubled the risk of death, independently of several other confounders, among those who consumed fried potatoes >2 times/wk. However, those authors analyzed fried potatoes and french fries, which typically contain high amounts of dietary fat (including trans fat) and added salt [ 40 ]. The current study did not use the same preparation method as the fried potatoes in that study and included a controlled portion of healthy fat and the skin of the potato along with the DASH diet. Two studies found that consuming boiled potatoes, fried potatoes, or french fries was not associated with total mortality risk [ 37 ] or cardiovascular disease outcomes [ 41 ]. 
However, their participants did not include anyone with a history of chronic conditions. Therefore, more investigation is warranted to understand whether higher consumption of fried potatoes is associated with higher chronic disease risk. 3.4 Limitations We acknowledge that there were limitations in this study. First, participants self-reported their medical conditions, which could have introduced some level of bias. Biochemical markers could not be assessed, which could affect the association between potato consumption and the outcomes measured. Finally, self-reported nutritional intake always risks bias from selective and potentially inaccurate recall, which may have influenced our results. 4 Conclusions It has been a common misconception that, because of their effect on blood glucose, potatoes may not be part of a healthy dietary pattern for persons with T2D. However, research has shown that starchy foods, including potatoes, can be part of a healthy meal plan. The results of the present study showed a trend that potatoes consumed as part of a healthy diet pattern, such as the DASH diet, can improve anthropometric and BP measurements in people both with and without T2D. Knowing this role may lead to improved dietary advice, specifically the regular inclusion of potatoes in the diet, for persons with T2D. Further follow-up investigations with large populations are warranted to evaluate the consumption of fried and non-fried potatoes with healthy recommendations, adherence to the diet, and long-term effects on anthropometric and BP outcomes in people with or without diabetes. 
Authors' contributions Shannon Galyean: Conceptualization, Methodology, Investigation, Resources, Data Curation, Writing – Original Draft, Writing – Review & Editing, Visualization, Supervision, Project Administration, Funding Acquisition Dhanashree Sawant: Investigation, Resources, Data Curation, Writing Allison Childress: Conceptualization, Methodology, Writing – Review & Editing, Funding Acquisition Michelle Alcorn: Resources, Writing – Review & Editing, Funding Acquisition John A. Dawson: Formal Analysis, Writing – Review & Editing, Funding Acquisition. Funding source Alliance for Potato Research and Education (APRE) . Grant ID: 23A820 . Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
[
"SOWERS",
"SOWERS",
"SALMAN",
"ANDERSON",
"CHEN",
"DEVRIES",
"LOCKE",
"FILIPPOU",
"WILLIAMS",
"WHELTONPAUL",
"APPEL",
"SAVICA",
"KING",
"FURRER",
"HOUTKOOPER",
"SUBAR",
"SVETKEY",
"HELLER",
"SCHLIEMANN",
"SMITH",
"HASHEMI",
"HASHEMI",
"PAULA",
"JURASCHEK",
"NOWSON",
"POKHAREL",
"QUAN",
"HALTON",
"MURAKI",
"MATHE",
"CRUIJSEN",
"BORCH",
"BAYGI",
"FULGONI",
"ALJURAIBAN",
"DEVLIN",
"VERONESE",
"LARSSON"
] |
8255e5a66a004771a3d509c659bcb763_Central airway squamous metaplasia following radiation therapy mimicking local tumour recurrence_10.1016_j.rmcr.2023.101942.xml
|
Central airway squamous metaplasia following radiation therapy mimicking local tumour recurrence
|
[
"Arulanantham, Jonathan",
"Chelvarajah, Revadhi",
"Ismail, A Kasim",
"Bray, Victoria J.",
"Vinod, Shalini K.",
"Williamson, Jonathan P."
] |
Radiation therapy can result in injury to the lung parenchyma and central airways; the latter is less well documented in the literature. Here, we describe a 65-year-old Caucasian male who developed focal endobronchial nodules and right main bronchial stenosis suggesting tumour recurrence 32 months following curative-intent concurrent chemoradiation therapy for Stage 3B squamous cell carcinoma of the lung. Computed tomography and positron emission tomography results are detailed. Flexible bronchoscopy with bronchial biopsies revealed squamous metaplasia rather than malignant tumour recurrence, with ongoing observation planned.
|
1 Introduction Radiation therapy can cause delayed injury to the lung parenchyma, resulting in a spectrum of changes from radiation pneumonitis to radiation fibrosis [ 1 ]. Clinical manifestations range from asymptomatic disease to mild and, rarely, severe respiratory symptoms [ 1 ]. Central airway injury secondary to radiation therapy is less well documented, but can present as tracheobronchial squamous metaplasia, which may impair mucociliary clearance and rarely itself undergo neoplastic transformation [ 1–4 ]. Radiation exposure may also result in the formation of tracheobronchial strictures and stenoses [ 5 , 6 ]. We present the initial evaluation of a patient with suspected tumour recurrence who, after bronchoscopic biopsies, was found to have squamous metaplasia with the formation of a bronchial stricture; both outcomes are likely secondary to radiation therapy. 2 Case report A 65-year-old Caucasian male, diagnosed with Stage 3B (T3 N3 M0) squamous cell carcinoma of the right upper lobe (RUL) three years earlier, was found on sequential surveillance computed tomography (CT) scans to have an interval increase in a residual right hilar mass and regional lymph nodes. He had been experiencing worsening exertional dyspnoea and intermittent haemoptysis over several months. He denied constitutional symptoms. Clinical examination revealed a mild-moderate bilateral expiratory wheeze and oxygen saturations within normal parameters (SpO2 = 96 % on room air). Thirty-two months earlier, a thoracic cancer multi-disciplinary team had recommended curative intent concurrent chemoradiation therapy for a right hilar mass with a large right upper lobe endobronchial tumour deposit diagnosed at bronchoscopy. Treatment involved a 6-week daily course of radiation therapy at 66 Gray (Gy) in 33 fractions (2 Gy per fraction, 10 fractions per fortnight) to the right lung and mediastinum ( Fig. 1 A and B). 
The chemotherapy regimen included two cycles of cisplatin and etoposide, followed by maintenance durvalumab every 2 weeks for 12 months. The patient tolerated treatment generally well, apart from episodes of productive cough and pyrexia. Radiographic investigations showed a stable post-treatment hilar mass and local nodes. There was no evidence of subacute radiation pneumonitis or fibrosis over 18 months of clinical follow-up. Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) at 21 and 27 months of follow-up showed stable findings when compared with pre-chemoradiation scans ( Fig. 1 C). There was a decrease in activity in the right hilar region (SUVmax 5.0 and 4.5, respectively, from 9.0) and the right lower paratracheal node (SUVmax 3.5 and 3.7, respectively, from 4.3). There was an increase in uptake at the contralateral aortic arch node (SUVmax 5.9 and 6.4, respectively, from 5.1), most likely due to reactive or inflammatory change. However, CT findings at the 32-month review showed that the right suprahilar mass had increased in size to 70 × 30 mm (previously 62 × 27 mm) with narrowing of the RUL bronchus and right bronchus intermedius ( Fig. 1 D). Local tumour recurrence was suspected and flexible bronchoscopy with ultrasound-guided endobronchial biopsy was performed. Bronchoscopy prior to chemoradiation had shown an endobronchial RUL lesion with white nodules around the entrance of the RUL ( Fig. 2 A). The more recent bronchoscopy showed generalised hyperaemia and neovascularisation of the trachea ( Fig. 2 B) and right main bronchus (RMB) ( Fig. 2 C). The right bronchus intermedius was narrowed and slit-like from apparent extrinsic compression ( Fig. 2 C). The RUL also appeared to have a pinpoint orifice stricture ( Fig. 2 D). The distal bronchus intermedius, right middle lobe (RML) and right lower lobe (RLL) appeared normal. Two endobronchial nodules, one in the mid trachea and another in the bronchus intermedius ( Fig. 2 B and C), were thought to be tumour deposits and were biopsied. 
However, the histopathology showed squamous metaplasia without overt dysplasia and altered stroma without obvious infiltration or keratinisation. Some fragments showed mild chronic inflammation and fibrinous exudate, without clear neoplastic change ( Fig. 3 ). For persistent dyspnoea, insertion of a tracheobronchial silicone stent was attempted but abandoned due to extreme friability of the bronchial mucosa during rigid bronchoscopy. Local balloon dilatation of the bronchus intermedius provided minimal symptomatic relief, and an attempt at cannulating and dilating the RUL stricture was unsuccessful. The patient continues to undergo 3-monthly clinical follow-up, with stable respiratory symptoms at the time of writing. 3 Discussion Radiation therapy resulting in lung parenchymal injury is well documented [ 1 ]. Through the generation of free radicals, ionising radiation can damage cell membranes and chromosomal DNA, contributing to epithelial cell dysfunction and death [ 1 ]. The lung's response to injury exists within a spectrum of acute to chronic changes [ 1 ]. Radiation pneumonitis describes focal or diffuse inflammation of the lung parenchyma, occurring 4 weeks to 6 months following radiation exposure [ 1 , 7 , 8 ]. Patients may be asymptomatic or present with dyspnoea, cough, low-grade fever and/or chest discomfort. Symptoms may self-resolve, but more often require treatment with corticosteroids. In more persistent cases, the lung architecture undergoes remodelling and permanent radiation fibrosis may result [ 1 ]. This can occur 6 months after treatment, and involves connective tissue deposition in place of normal lung tissue [ 1 , 7 ]. Patients may present with worsening dyspnoea, persistent dry cough or, rarely, with symptoms associated with cor pulmonale [ 1 ]. 
Radiation fibrosis is usually a clinical diagnosis based on history, signs and radiological findings and is often identified whilst investigating post-treatment symptoms, suspected infection or, less commonly, bronchiectasis or spontaneous pneumothoraces [ 1 , 7 ]. Risk factors associated with these complications include the dose and fractionation of radiation therapy, the volume of lung irradiated, re-irradiation, the choice of chemotherapy agents and concurrent timing of chemotherapy with radiation, the use of immune therapy, and abrupt corticosteroid withdrawal [ 1 , 8 ]. Management involves consideration of these risk factors, using means such as dose-volume organ-at-risk constraints to guide optimal radiation dosing [ 9 ]. Whilst steroids may be effective for symptomatic radiation pneumonitis, they are ineffective for established fibrosis [ 1 , 8 ]. Central airway complications from radiation therapy are rarely reported in the literature. Squamous metaplasia of the tracheobronchial epithelium is a protective mechanism, involving the conversion of normal pseudostratified mucociliary epithelium to stratified squamous epithelium, and radiation exposure has been identified as a potential cause [ 1 , 2 ]. Miyamoto et al. (1987) reported two patients, with pulmonary adenocarcinoma and squamous cell carcinoma respectively, who both developed squamous metaplasia following radiotherapy and anticancer medication [ 10 ]. The development of squamous metaplasia following radiation therapy has also been documented in patients with squamous cell carcinoma of the mouth and oropharynx and with breast carcinoma [ 11 , 12 ]. Interestingly, the pulmonary bronchi and bronchial epithelium are considered radioresistant [ 1 ]. Superimposed bacterial infection may lead to airway alteration, but its role in the development of squamous metaplasia is unclear [ 1 ]. COPD and smoking are contributing factors to the development of squamous metaplasia [ 3 , 13 ]. 
Worsening COPD severity, impaired mucociliary clearance, and neoplastic transition can occur in patients with untreated squamous metaplasia [ 3 , 4 , 13 ]. Prognostic studies in this area are limited, but smoking cessation is the only management strategy for squamous metaplasia with proven efficacy [ 3 , 4 ]. Continued abstinence from smoking will be protective for our patient. Radiation therapy and bronchial trauma may also result in bronchial strictures (narrowing of the bronchial airways) and tracheobronchial stenosis, as in our patient, potentially leading to post-obstructive pneumonia and symptomatic respiratory insufficiency [ 5–7 ]. Wang et al. (2020) showed that the risk of developing bronchial strictures and atelectasis following radiation exposure is dose dependent [ 14 ]. Management options for strictures and stenoses include balloon dilatation and stent placement [ 5 ]. Cho et al. (2014) reported a mean duration of symptom improvement of 61.9 ± 16 months with balloon dilatation and stent placement in patients with bronchial strictures following radiation therapy [ 6 ]. In the authors' experience, stent insertion should be considered very cautiously, as the risks of stent-related complications in non-malignant conditions are high. The need for further endobronchial treatment for our patient will be assessed through regular clinical surveillance, balancing symptoms against the risk of treatment-related complications. Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Declaration of competing interest Appropriate written informed consent was obtained for publication of this case report and accompanying images.
|
[
"DAVIS",
"GRAY",
"RIGDEN",
"LAPPERRE",
"SHIN",
"CHO",
"HANANIA",
"MOISEENKO",
"MIYAMOTO",
"GINTER",
"FRIEDMAN",
"ARAYA",
"WANG"
] |
680ea78619a0432fadde1b1721ad3ab1_Integration of physical information and reaction mechanism data for surrogate prediction model and m_10.1016_j.gce.2024.06.002.xml
|
Integration of physical information and reaction mechanism data for surrogate prediction model and multi-objective optimization of glycolic acid production
|
[
"Zhang, Zhibo",
"Wang, Yaowei",
"Zhang, Dongrui",
"Zhao, Deming",
"Shi, Huibin",
"Yan, Hao",
"Zhou, Xin",
"Feng, Xiang",
"Yang, Chaohe"
] |
With the continuous development of the chemical industry, green development has become an increasingly prominent goal. Glycolic acid (GA), the monomer of the biodegradable plastic polyglycolic acid, plays a crucial role in combating plastic pollution and fostering an eco-friendly society. The selective oxidation of ethylene glycol (EG) to produce GA represents a novel green production technology. Controlling reaction parameters to achieve multi-objective optimization of product distribution and direct CO2 emissions is crucial for scaling up the process. With the advent of the big data era, integrating the chemical industry with artificial intelligence to achieve engineering scale-up is an important trend. This study proposes a neural network model for production prediction and optimization. The model is trained on experimental data, reaction mechanism data, and physical information, enabling rapid prediction of GA production. When validated against 40% of the experimental data and 16% of the reaction mechanism data, the model's prediction error was within ±5%, and the linear correlation coefficient R2 between predicted and actual values was 0.998. Furthermore, this study integrated a multi-objective optimization algorithm with the model, enabling surrogate optimization of reaction parameters during production. After optimization, direct CO2 emissions were reduced by over 99% and overall greenhouse gas emissions were reduced by 4.6%. The research paradigm proposed here can offer guidance and technical support for the optimized operation of EG selective oxidation to GA.
|
1 Introduction In recent years, the chemical and new materials industries have developed rapidly [ 1–3 ]. With the continuous tightening of national environmental protection requirements and the steady advancement of carbon peak and carbon neutrality goals, it is crucial to reduce pollutant and CO2 emissions during chemical production [ 4–8 ]. Green chemical technology aims to achieve high conversion rates, excellent selectivity, and energy efficiency, thereby minimizing the generation of pollutants at the source [ 9–11 ]. However, some production technologies in the chemical industry still lag in economic efficiency and cause significant environmental pollution [ 12 ]. Therefore, implementing and promoting green chemical production technology is a major trend in the development of the chemical industry [ 13 , 14 ]. Glycolic acid (GA) is an important chemical and organic synthesis intermediate, widely used in personal care products, adhesives, dyeing, metal cleaning, and textiles [ 15 ]. The polymerization of GA to produce polyglycolic acid plastics is noteworthy for its excellent biodegradability. This characteristic is crucial for combating white plastic pollution, promoting the advancement of degradable plastics, and fostering ecological sustainability [ 16 , 17 ]. However, existing GA production processes often involve toxic and harmful raw materials, harsh reaction conditions, numerous by-products, and difficulties in separation and purification, which do not meet the requirements for environmentally friendly production [ 18 , 19 ]. The selective oxidation of ethylene glycol (EG) to produce GA offers a pathway with low-cost and readily available raw materials, mild reaction conditions, and high product selectivity, aligning with the principles of green chemistry [ 20 ]. Realizing the industrial development of this technology is of great importance for promoting the growth of China's green chemical industry. 
Regrettably, despite the abundance of studies on new green chemical technologies, very few have advanced toward industrialization, primarily because of the significant experimental costs required to scale up industrial processes. It is therefore of great significance for new-technology development to conduct industrial-scale process simulations, optimize parameters based on laboratory results, and identify potential issues during scale-up. Industrial chemical production, however, is determined not only by catalyst performance: operating parameters such as reaction temperature and residence time directly affect the product distribution. Multi-objective optimization of reaction parameters therefore plays a crucial role in process scale-up; taking product distribution into account together with the related economic and environmental issues can maximize the overall efficiency of the process. In the production of GA from EG there are direct CO 2 emissions, so engineers must optimize the carbon emissions of the reaction while considering product yield. Optimizing reaction parameters to achieve an optimal product distribution with minimal emissions is thus of great significance for process design. Traditional mechanistic models, however, are slow to compute and suffer from poor convergence, often resulting in optimization failures or low efficiency when integrated with multi-objective optimization algorithms. With the profound advancement of the digital age, the utilization of artificial intelligence has significantly enhanced social production efficiency [ 21–24 ]. Achieving digital transformation and integrated development in the chemical industry has become an important trend for promoting efficient industry growth [ 25–27 ]. Machine learning, especially deep learning, has shown significant advantages in solving nonlinear problems [ 28 ] and has already been widely applied in industrial production.
Owing to its efficient computational performance, machine learning has been widely applied in areas such as real-time prediction and optimization. For instance, Costa et al. [ 29 ] proposed a machine learning approach, constructing a deep neural network model that operates as a soft sensor in the froth flotation of iron ore. With conventional methods it takes hours from taking a laboratory sample to obtaining the analysis results, whereas the proposed model produces accurate, real-time predictions and eliminates the delay of laboratory analysis. Deng et al. and Bao et al. [ 30 , 31 ] successfully applied machine learning to real-time fault detection in highly nonlinear chemical processes, effectively identifying faults and the variables causing them. Faults in chemical production often require complex manual analysis, whereas an AI model can predict the fault type in real time, which is of great significance for safe chemical production. Ba et al. [ 32 ] established a deep belief network model to rapidly predict the total aromatics content of diesel, replacing the time-consuming near-infrared spectroscopy method. Machine learning also finds real-time prediction applications in thermodynamic calculations, material design, and chemical process control [ 33–36 ], as well as broad prospects in membrane material performance [ 37 ], energy systems [ 38 ], and other fields [ 39 , 40 ]. However, the success of these applications depends mainly on a wealth of industrial data. For technologies that have not yet been industrialized, laboratory data is scarce and valuable, making it difficult to establish reliable data-driven models. Where data is scarce, researchers have identified two effective methods: data augmentation and physics-informed approaches [ 41–44 ].
Data augmentation techniques involve creating surrogate models to generate additional data. Separation and purification processes often have many nonlinear control variables, making their optimization and control quite challenging. Liu et al. [ 45 ] used Aspen Plus to establish a mechanistic model for the extractive distillation process and then utilized this mechanistic model to construct a data-driven surrogate model. The combination of data-driven and surrogate models effectively reduced the experimental effort while obtaining accurate results. On this basis, optimizing the design parameters of the extractive distillation process can effectively reduce carbon emissions. Ullah et al. [ 46 ] also adopted a strategy of mechanistic modeling in Aspen combined with a neural network model, applied to the bio-oil production process. The results showed that the model has high predictive performance, with a linear correlation coefficient of 0.95. Mehrani et al. [ 47 ] established a mechanistic model of a nitrification reactor to obtain predictive data from the model and combined these data with experimental data for machine learning. The model developed from this combined dataset achieved good results in predicting liquid yields during the reaction process. The physics-informed method, embodied in Physics-Informed Neural Networks (PINNs), integrates specific physical laws or equations into the loss function. This integration constrains the optimization of the neural network parameters, enhancing the model's generalizability with minimal data. The PINN approach significantly enhances the structural flexibility of neural networks and has led to improved results. In chemical transformation processes where temperature and pressure fluctuations are significant, maintaining stable high-temperature and high-pressure operating conditions is challenging. Dong et al.
[ 48 ] applied a method combining kinetic models with data-driven models to chemical process control, achieving real-time control of the transformation reaction, which helps realize intelligent production. Zhou et al. [ 49 ] proposed a method combining physical information with data-driven modeling techniques, which has been applied in the estimation of the average particle size of colloids. This model incorporates constraints from prior process knowledge, setting higher standards for the optimization of model parameters and significantly improving model generalization. The results show that this method is effective, especially when there are fewer samples available. Yang et al. [ 50 ] applied a method combining data-driven models with kinetic models in the petrochemical industry to predict the product distribution of fluid catalytic cracking. The model's data came from industrial process monitoring, and the results showed excellent predictive performance. The model demonstrates that the hybrid modeling method of kinetic mechanisms also has great performance for rapid prediction and optimization in complex reaction processes. Hwang et al. [ 51 ] proposed a method combining thermodynamic models with data-driven modeling techniques, applying it to predict the refrigerant charge amount in electric heat pump systems. Compared to ordinary deep neural network models, the hybrid model greatly improves prediction accuracy and has strong generalization performance. This model can be used for the design of efficient and energy-saving electric heat pump systems. This study aims to develop an accurate and highly generalizable neural network prediction model combining both data augmentation and physical information methods for the oxidation of EG to GA, using a limited amount of experimental data. 
By leveraging the fast computation and high predictive accuracy of the neural network model, we aim to substitute traditional mechanistic models with multi-objective optimization algorithms to achieve surrogate optimization. This approach is expected to enhance the success rate and computational efficiency of the optimization process. On this basis, reaction parameters will be optimized to achieve a coordinated optimal balance between economic and environmental benefits. In the team's preliminary work, a series of PtMn/MCM-41 catalysts were synthesized. These catalysts are capable of efficiently converting EG to GA under mild conditions. Additionally, an analysis of the catalytic principle was conducted [ 52 ]. Based on this, a conceptual design model for the GA production process was established. A life cycle assessment of the process revealed that it offers significant economic, social, and environmental benefits [ 53 ]. In addition, several machine learning models were developed in the preliminary work to conduct correlation analysis on the variables during the reaction process, thereby uncovering the factors that have a greater influence on the reaction mechanism [ 54 ]. Building on previous research, this study developed a dependable neural network prediction model for the oxidation of EG to GA. The model was based on experimental data from the high-performing PtMn/MCM-41(In-70) catalyst. By integrating multi-objective optimization algorithms, this model facilitates effective optimization. Compared to previous work, the innovation of this study lies in the modeling approach that integrates mechanistic data with physical law constraints. On one hand, Aspen HYSYS is used to develop an industrial-scale mechanistic model to generate a significant amount of mechanistic data, addressing the scarcity of experimental data. The data produced by the mechanistic model significantly expands the scale of training data, making the model's predictive ability more reliable. 
On the other hand, incorporating physical laws imposes greater requirements on adjusting the neural network parameters, thereby improving the model's generalizability. On this basis, a comparative analysis was conducted on the training results obtained using different network structure parameters to determine the optimal network structure. Combining the established efficient prediction model with multi-objective optimization algorithms enables efficient optimization of industrial parameters, with a higher success rate than traditional mechanistic models. The ideas and methods reflected in this study have significant guiding implications for online control, prediction, and optimization in the industrial application of selectively oxidizing EG to GA. 2 Methodology 2.1 Overall framework The process framework for establishing the model in this study is illustrated in Fig. 1 . The first step is the generation of mechanistic data. A reaction kinetics model is developed using experimental data. By adjusting reaction parameters, the conversion rates of reactants and concentrations of key products are determined at various reaction temperatures and residence times. The second step is data preprocessing. A data access program is developed to label and normalize the data. The third step involves determining the appropriate network structure parameters and constructing a neural network model. Deep learning programs are designed to integrate physical law constraints into the loss function, followed by model training, validation, and testing. The fourth step involves combining multi-objective optimization algorithms to determine the optimal reaction parameters. Additionally, a carbon emission analysis is conducted on the processes both before and after optimization. 2.2 Mechanism model Fig. 2 presents a conceptual design flowsheet for the production of GA from EG.
O 2 is compressed and heat-exchanged to reach the specified reaction temperature and pressure, then mixed with EG before entering a stirred tank reactor for the reaction. After the reaction is complete, the product first passes through a flash drum to separate the gases and then enters a continuous vacuum distillation column to separate the formic acid (FA) water solution, glycolaldehyde (GAD) water solution, GA, and EG, respectively. The unreacted EG is ultimately drawn off at the bottom of the tower for recycling. A steady-state simulation was established in Aspen HYSYS using the NRTL property method to calculate phase equilibrium. Since a rigorous kinetic model was employed in simulating the reactor, it allows for the prediction of different scenarios with varying reaction temperatures and residence times. To minimize the loss due to thermal decomposition of EG oxidation products during separation, continuous vacuum distillation was used for product separation, keeping the temperature at the bottom of the tower below 100 °C. 2.3 Database generation Selective oxidation of EG into GA was performed using the PtMn/MCM-41(In-70) catalyst, resulting in a total of 10 sets of experimental data. The reaction network is shown in Fig. 3 . The oxidation of EG is a sequential reaction: EG first oxidizes to GAD, then to GA, further to FA, and finally to CO 2 . Due to the scarcity of experimental data, it is necessary to expand the existing data by establishing a mechanistic model and generating mechanistic data. Aspen HYSYS, with its comprehensive database, reactor and tower modules, and accurate simulation results, has become a commonly used tool for steady-state simulation. Its case analysis tool enables the simulation of product output under various conditions, thereby obtaining a significant amount of mechanistic data.
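The case-analysis sweep over reaction temperature and residence time amounts to enumerating a grid of operating conditions, each of which is one HYSYS case run. A minimal sketch follows; the step sizes are illustrative assumptions and do not reproduce the paper's actual case count.

```python
import numpy as np

# Illustrative grid over the ranges used in the case analysis:
# 50-90 degC and 2-16 h. The increments (2 degC, 1 h) are assumptions,
# not the authors' settings.
temperatures = np.arange(50.0, 90.0 + 1e-9, 2.0)     # degC
residence_times = np.arange(2.0, 16.0 + 1e-9, 1.0)   # h

# Each (T, tau) pair is one mechanistic-model evaluation; the collected
# pairs become the input half of the training dataset.
conditions = [(T, tau) for T in temperatures for tau in residence_times]

print(len(temperatures), len(residence_times), len(conditions))  # → 21 15 315
```

With finer increments the same enumeration yields the several hundred mechanistic samples used for training.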
Case analyses were conducted in Aspen HYSYS, setting the reaction temperature range from 50 °C to 90 °C and the residence time range from 2 h to 16 h, obtaining a total of 525 pieces of mechanistic data. The computer configuration is as follows: CPU: 12th Gen Intel(R) Core(TM) i5-12500. A total of 535 data points were utilized for machine learning, encompassing reaction conditions, conversion rates, and product selectivity. 2.4 Modeling and training Deep neural network models are feedforward neural networks that consist of multiple hidden layers, and their general structure is illustrated in Fig. 4 a. Both the input layer and output layer perform linear processing on the data, while the intermediate hidden layers use nonlinear activation functions for processing. The working principle of the neural network can be understood from Fig. 4 a. After the training data is input into the network model, the output values are obtained through the complex nonlinear effects of the hidden layers. These output values are then compared with the training data labels, i.e. , the true values, to analyze the error, which is referred to as the loss function. The loss function can be calculated in various ways, with the mean square error (MSE) being commonly used. Once the loss function is obtained, gradient calculation is performed for each node of the neural network, and then the optimal parameters are sought through gradient descent. Through continuous adjustments, the loss function will keep decreasing until it meets the research requirements. The PINN model is based on this foundation with modifications to the loss function, and its general structure is shown in Fig. 4 b. The loss function of PINN is divided into two parts: the data part and the physics part. The data part is the MSE between the predicted value and the true label value, while the physics part is the loss caused by the predicted value in the physical equation. 
The sum of both parts is used as the total loss value of PINN for gradient descent optimization. In this way, the loss function includes not only the error between the output values and the label values but also the deviation of the output values from physical laws. This implies that the optimization of the neural network node parameters is subject to even more stringent conditions. During the training process, the number of hidden layers in the neural network, the number of nodes in the hidden layers, and the split ratio of the dataset all affect the fitting performance of the neural network. In general, the structure of a neural network should not be too broad. To prevent overfitting, the dataset is typically divided into three parts: a training set, a validation set, and a test set, with corresponding ratios of 80%:10%:10%. In this research, a neural network model was developed using PyTorch. Four pieces of experimental data and 84 pieces of mechanistic data were set aside to assess the effects of changing model parameters, with the remaining data used as training data, which was then divided into a training set, validation set, and test set according to the ratio of 8:1:1. The neural network structure established in this study is shown in Fig. 4 c, where the conservation law of the carbon element has been incorporated into the loss function, as detailed in Eq. (1):

(1) $C_0 C_{num} X = \sum_i S_i C_0 C_{i\_num} X$

where $C_0$ represents the initial concentration of the reactant EG, mol/L; $X$ denotes the conversion rate of EG, %; $S_i$ indicates the reaction selectivity for product $i$, %; and $C_{num}$ and $C_{i\_num}$ refer to the amount of substance of C atoms in 1 mol of EG and of product $i$ molecules, respectively, mol/mol. In Fig. 4 c, we trained the neural network using a dataset generated by the Aspen HYSYS mechanistic model. The input variables of the PINN represent two operating conditions: reaction temperature and residence time.
There are five output variables representing the reaction conversion and the selectivity of GA, GAD, FA, and CO 2 , respectively. For the data part, the loss function $Loss_y$ takes the MSE between each predicted value and its label, as Eq. (2) shows. For the physical part, the constraints of the physical equations amount to unsupervised training: the conservation equation for the element C is transformed into an implicit equation, as shown in Eq. (3), and the predicted values are substituted into it to verify whether it is satisfied. The loss for the physical part is therefore not calculated as in Eq. (2); its specific loss function, $Loss_f$, is shown in Eq. (4):

(2) $Loss_y = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2$

(3) $F(X, S_i) = C_0 C_{num} X - \sum_i S_i C_0 C_{i\_num} X$

(4) $Loss_f = \frac{1}{n} \sum_{i=1}^{n} F(u, v, p, m, n)^2$

where $u$, $v$, $p$, $m$, and $n$ are the five output variables, representing the reaction conversion and the selectivity of GA, GAD, FA, and CO 2 , respectively. 2.5 Optimization and prediction Genetic algorithms are designed based on the evolutionary principles of organisms in nature, simulating the natural selection and genetic mechanisms described by Darwin's theory of evolution [ 55 , 56 ]. They are computational models of the process of biological evolution, searching for optimal solutions by simulating natural evolution [ 57 ]. The purpose of this study is to exploit the high-speed prediction of neural network models to replace traditional mechanistic models, combined with multi-objective optimization algorithms to achieve surrogate optimization, thereby improving the success rate and computational efficiency of the optimization process [ 58 ]. In this study, the NSGA-II algorithm was implemented using Python's Pymoo toolkit, an open-source multi-objective optimization algorithm package.
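Before moving on, the two-part loss of Eqs. (2)–(4) can be sketched numerically. The per-molecule carbon counts follow from the molecular formulas (EG, GA, and GAD have 2 C atoms; FA and CO 2 have 1); expressing conversion and selectivity as fractions rather than percentages is an illustrative choice, not the paper's convention.

```python
import numpy as np

# Carbon atoms per molecule, from the molecular formulas:
# reactant EG, and products in the order [GA, GAD, FA, CO2].
C_NUM_EG = 2.0
C_NUM_PRODUCTS = np.array([2.0, 2.0, 1.0, 1.0])

def data_loss(y_pred, y_true):
    """Eq. (2): mean squared error between predictions and labels."""
    return np.mean((y_pred - y_true) ** 2)

def carbon_residual(C0, X, S):
    """Eq. (3): residual of the carbon-element balance.

    C0 : initial EG concentration (mol/L)
    X  : predicted EG conversion (fraction)
    S  : predicted selectivities [GA, GAD, FA, CO2] (fractions)
    """
    return C0 * C_NUM_EG * X - np.sum(S * C0 * C_NUM_PRODUCTS * X)

def physics_loss(C0, X_batch, S_batch):
    """Eq. (4): mean squared carbon-balance residual over a batch."""
    res = [carbon_residual(C0, X, S) for X, S in zip(X_batch, S_batch)]
    return np.mean(np.square(res))

# Total PINN loss for gradient descent = Loss_y + Loss_f.
```

A prediction that routes all converted carbon into GA satisfies the balance exactly (zero residual), while any carbon-inconsistent prediction is penalized even where no label exists.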
The NSGA-II multi-objective optimization program was coupled with the established neural network prediction model. This coupling allows for the surrogate optimization of objective functions by leveraging the rapid computation and high accuracy of the neural network prediction model. The computational flowchart for multi-objective surrogate optimization is shown in Fig. 5 . Initially, the optimization range for reaction parameters is specified by users. Populations are randomly generated within this range and correspond to different reaction parameters. These parameters are input into the neural network model for rapid prediction under the guidance of the program to obtain the results of the objective function. The NSGA-II algorithm is executed to perform fitness comparisons, thereby facilitating the survival of the fittest within the population. The surviving population then undergoes crossover and mutation to form a new population for subsequent iterations. After n iterations, the predicted results are compared to determine whether they converge. If they do, an optimized Pareto solution set is obtained. If not, the optimization fails. 2.6 Carbon emissions assessment In the process of engineering scale-up, the environmental impact caused is a widespread concern within the industry. During the oxidation of EG to produce GA, there is a generation of CO 2 as a byproduct, leading to direct CO 2 emissions. At the same time, during the scale-up process, the consumption of fuels, steam, and electricity also results in indirect CO 2 emissions. This study statistically analyzes greenhouse gas (GHG) emissions during the process. By optimizing process parameters through multi-objective optimization algorithms to maximize the production of the target product, that is, improving the yield of GA and reducing the selectivity of CO 2 , the direct emissions of CO 2 during production can be effectively reduced. 
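The direct-plus-indirect accounting described above can be sketched in a few lines. The standard-coal heating value (29.3076 MJ per kg of standard coal) follows the GB/T 2589 convention; the CO 2 factor per kg of standard coal is an assumed placeholder, not a value taken from this paper.

```python
# Two-part GHG accounting: direct reactor CO2 plus indirect CO2 from
# utility energy use, routed through a standard-coal equivalent.

MJ_PER_KGCE = 29.3076   # GB/T 2589: 1 kg standard coal = 29.3076 MJ
CO2_PER_KGCE = 2.66     # kg CO2 per kg standard coal (assumed factor)

def indirect_emissions(energy_MJ):
    """Convert process energy use to standard coal, then to CO2 (kg)."""
    kgce = energy_MJ / MJ_PER_KGCE
    return kgce * CO2_PER_KGCE

def total_emissions(direct_co2_kg, energy_MJ):
    """Direct reactor CO2 plus utility-derived indirect CO2 (kg)."""
    return direct_co2_kg + indirect_emissions(energy_MJ)
```

The optimization acts mainly on the first term (direct CO 2 from over-oxidation), while the separation and reaction utilities drive the second.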
This research compares and analyzes the GHG emissions before and after process optimization. Total carbon emissions are divided into two parts, direct and indirect. Direct emissions refer to the CO 2 produced during the reaction process. Indirect emissions refer to those resulting from utility usage within the process and are calculated based on the energy consumption of the entire process. The specific calculation steps are as follows: (1) Using the mechanistic model in Aspen HYSYS, the energy consumption required for the entire process is calculated and converted into standard coal according to the national standard GB/T 2589–2020. (2) The GHG emissions caused by the combustion of this standard coal are then calculated. More details can be found in the Supporting Information. 3 Results and discussion 3.1 Data analysis The experimental data for the selective oxidation of EG to GA over the PtMn/MCM-41(In-70) catalyst are shown in Fig. 6 . In Fig. 6 a, the residence time was fixed at 8 h. It can be observed that the conversion rate of EG increases with the reaction temperature. When the reaction temperature exceeds 80 °C, EG is essentially fully converted. As the reaction temperature increases, the selectivity for GA and FA initially rises and then falls, the selectivity for GAD gradually decreases, and the selectivity for CO 2 rapidly increases. When the reaction temperature reaches 90 °C, most of the EG is converted into CO 2 . In Fig. 6 b, the reaction temperature was 60 °C. The conversion rate of EG gradually increases with residence time. After the residence time exceeds 8 h, the conversion rate of EG remains largely unchanged. With increasing residence time, the selectivity for GA first increases and then decreases, the selectivity for GAD gradually declines, and the selectivity for FA and CO 2 gradually increases.
However, the overall change in selectivity for each product with residence time is not significant, and the selectivity for GA can reach above 80%. A case study was conducted in Aspen HYSYS, setting the reaction temperature range from 50 °C to 90 °C and the residence time range from 2 h to 16 h. The mechanism data obtained from the case study were used to create contour plots as shown in Fig. 7 a–e. The contour plots, with reaction temperature and residence time as the axes, present a clearer trend of the changes in EG conversion rate and selectivity of each product. From Fig. 7 a, when the reaction temperature exceeds 60 °C and the residence time is over 8 h, the conversion rate of EG can reach above 90%. Both an increase in reaction temperature and residence time contribute to a higher conversion rate of EG. From Fig. 7 b and e, it is evident that the trends in selectivity for GA and CO 2 are opposites. While the change with residence time is not significant for both, the selectivity for GA decreases with an increase in reaction temperature, whereas the selectivity for CO 2 increases. From Fig. 7 c, it can be observed that the overall selectivity for GAD is not high; when the reaction temperature is above 60 °C and the residence time exceeds 8 h, the selectivity for GAD is essentially less than 5%. From Fig. 7 d, the selectivity for FA presents a volcano surface, with the maximum value occurring within the range of reaction temperatures from 65 °C to 75 °C and residence times from 6 h to 12 h. 3.2 Training and validation In this study, NN refers to a general neural network model, that is, the loss function does not incorporate carbon element conservation, whereas PINN represents a neural network model that integrates physical information, meaning carbon element conservation is added to the loss function. The initially constructed neural network model employed a structure with 6 layers and 6 nodes. 
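A network of that initial shape (2 inputs, several equal hidden layers, 5 outputs) can be sketched as a plain forward pass. The random weights and the tanh activation here are illustrative stand-ins for the trained PyTorch model, not its actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes):
    """Random weight/bias pairs for a fully connected network."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Linear output layer; tanh nonlinearity on all earlier layers."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:   # skip activation on the output layer
            x = np.tanh(x)
    return x

# 2 inputs (T, residence time) -> 6 hidden layers of 6 nodes -> 5 outputs
# (conversion and the four product selectivities).
params = make_mlp([2] + [6] * 6 + [5])
y = forward(params, np.array([[60.0, 8.0]]))
print(y.shape)  # → (1, 5)
```

In practice the inputs are normalized before the forward pass, and the 6×6 shape is only the starting point that the sweep below refines.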
The research utilized a combination of the first-order optimizer Adam and the second-order optimizer LBFGS to effectively prevent overfitting that may occur after excessive training with a single optimizer. The training process involved optimizing with the Adam optimizer for 2000 iterations followed by another 2000 iterations with the LBFGS optimizer. In this study, the MSE between the predicted results and the actual results was used as the loss function to evaluate the training effectiveness of the model. A comparative analysis was conducted on the variations in the loss function for the training set, validation set, and test set. From Fig. 8 a, it can be seen that after 4000 iterations, both NN and PINN have converged, and the loss function is at a low value. Compared to NN, the loss function for PINN is smaller across the training set, validation set, and test set. It can be concluded that by adding the constraint of carbon element conservation, PINN demonstrates higher generalizability in model prediction. To obtain a model with better predictive performance, it is necessary to optimize the structural parameters of PINN. Initially, the number of hidden layers was fixed at 6, and neural network models with different numbers of nodes—6, 8, 10, 12, 14, and 16—were trained. Fig. 8 b displays the training results obtained from the models. It was observed that as the number of hidden layer nodes increased, the training loss for the training set gradually decreased. When the number of nodes was less than 14, the training losses for both the validation and test sets also continued to decrease. However, when the number of nodes reached 16, the training losses for both the validation and test sets suddenly increased, while the training loss for the training set kept decreasing. This suggests that when there are 16 nodes, the NN model has begun to overfit. 
With the number of hidden layer nodes determined to be 14, training was then conducted on NN models with different numbers of hidden layers—4, 6, 8, 10, and 12. Fig. 8 c shows the results of the model training. It was found that when the number of hidden layers exceeded 6, the training losses for both the validation and test sets began to rise gradually. Therefore, the optimal structural parameters for the PINN model established in this study were determined to be 6 hidden layers with 14 nodes per hidden layer. A grid search method for hyperparameter optimization is provided in the Supporting Information. The method uses an exhaustive comparison approach to score each combination of hyperparameters, enabling selection of the optimal hyperparameter set. The relevant data are shown in the “gridSearch_cv_results.xlsx” file. The trained model was then used to predict the outcomes for the 89 data points not involved in training; Fig. 8 d presents the results. The error between the predicted and true results is within ±5%, with a linear correlation coefficient R 2 of 0.998. This indicates that the model has excellent predictive performance and strong generalization capability. In the selective oxidation of EG to GA, this model can accurately predict the conversion rate of the reaction and the selectivity of the products. Moreover, the prediction is almost instantaneous, which greatly assists fast optimization in combination with multi-objective optimization algorithms. In other chemical engineering studies, PINNs have also demonstrated superior performance. Many researchers have adopted PINNs as tools for solving partial differential equations, with applications to the internal flow fields of units such as tubular reactors [ 59–61 ], stirred tanks [ 62–64 ], and heat exchangers [ 65 ].
These results are essentially consistent with those obtained in this study. By incorporating physics-informed constraints into the neural network, its predictive performance can be significantly improved [ 66 , 67 ]. Unlike these studies, this research utilizes a PINN as a surrogate for mechanistic models to optimize process parameters. Examining reactor design from the environmental perspective of CO 2 emissions fully demonstrates the novelty and practical value of our research. 3.3 Multi-objective optimization The open-source NSGA-II algorithm programmed in Python was used for multi-objective optimization. The reaction temperature and residence time were set as decision variables, with maximization of GA yield and minimization of CO 2 selectivity as the objective functions, Eqs. (5) and (6) . The ranges of the decision variables are defined in Eqs. (7) and (8) . Eqs. (9) and (10) represent the material balance and energy balance of the process, respectively.

(5) $\text{Objective function: } \min\; -\text{GA yield} = -\left(\text{EG conversion} \times \text{GA selectivity}\right)$

(6) $\text{Objective function: } \min\; \text{CO}_2\ \text{selectivity}$

(7) $\text{Constraints: } 50\,^\circ\mathrm{C} \le \text{Reaction temperature} \le 90\,^\circ\mathrm{C}$

(8) $2\,\mathrm{h} \le \text{Residence time} \le 16\,\mathrm{h}$

(9) $\sum_{\eta} F_{\eta}^{\mathrm{in}} = \sum_{\mu} F_{\mu}^{\mathrm{out}}$

(10) $\sum_{\nu} E_{\nu}^{\mathrm{in}} = \sum_{\tau} E_{\tau}^{\mathrm{out}}$

Because multiple objective functions are used, weights must be set to reflect their relative importance during optimization. Since there may be competing relationships between the objectives, finding a solution that simultaneously optimizes all of them is very challenging. In practice, decision-makers or algorithm designers therefore assign different weights to the objective functions based on the specific circumstances or preferences of the problem.
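The two objectives of Eqs. (5)–(6), the Pareto-dominance test that drives NSGA-II-style selection, and a simple weighted scalarization for expressing preferences can be sketched as follows; the numerical values and weights are illustrative only.

```python
import numpy as np

def objectives(conv, s_ga, s_co2):
    """Eqs. (5)-(6): minimize -GA yield and minimize CO2 selectivity."""
    return np.array([-(conv * s_ga), s_co2])

def dominates(a, b):
    """a Pareto-dominates b: no worse in every objective, better in one."""
    return np.all(a <= b) and np.any(a < b)

def weighted_score(objs, weights):
    """Weighted scalarization for expressing objective preferences."""
    return float(np.dot(weights, objs))

# An operating point with higher GA yield AND lower CO2 selectivity
# dominates its competitor (illustrative surrogate outputs).
a = objectives(0.95, 0.85, 0.01)
b = objectives(0.90, 0.80, 0.05)
print(dominates(a, b))  # → True
```

In the full loop, non-dominated points form the Pareto front of Fig. 9; the weighted score (or the decision-maker's preference) then picks one compromise solution from that front.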
More detailed information about the setting of weights can be found in the Supporting Information. A population size of 20 and a genetic generation count of 300 were configured, yielding a Pareto solution set as shown in Fig. 9 . The Pareto optimal solution adopted in this study is a reaction temperature and residence time of 55.9 °C and 7.45 h, respectively. Under these conditions, a process simulation was established for the statistical analysis of product distribution and carbon emissions before and after optimization. After determining the reaction parameters, the case analysis function of Aspen HYSYS was used to optimize the operating conditions of the distillation column and meet the separation requirements. The corresponding detailed operations and parameters before and after optimization are described in detail in the Supporting Information. The GA yield and amount of CO 2 generated during the reaction process before and after optimization are listed in Table 1 . After optimization, the direct emission of CO 2 decreased from 164.74 kg/h to 0.07 kg/h, a reduction of 99.5%. The primary reason for this is the decrease in the reaction temperature and residence time, which slows down the consecutive reactions and reduces the conversion of FA to CO 2 . However, owing to the lower reaction temperature, the conversion rate of EG also decreased, leading to a slight decrease in the total production of GA. Compared to the traditional process of using commercial software in conjunction with Python or MATLAB for multi-objective optimization [ 68 ], utilizing a neural network model as a surrogate model for optimization exhibits strong novelty and superiority. Most researchers have adopted heuristic algorithms such as NSGA-II to optimize the key CO 2 emissions in chemical engineering processes by finding the optimal values of operating conditions [ 69–71 ]. 
In this work, by contrast, a neural network model was used as a surrogate for the mechanistic models established in Aspen, likewise reducing CO2 emissions in the process. This demonstrates that employing a neural network surrogate for traditional mechanistic models in optimization can not only achieve the desired optimization but also significantly improve computational efficiency. These results also highlight the novelty and feasibility of the research. 3.4 GHG emissions assessment The calculation of GHG emissions before and after process optimization is shown in Fig. 10 . Because the GA yield changed after the optimization, GHG emissions were calculated and compared on the basis of producing one ton of GA. As shown in Fig. 10 a, the overall GHG emissions of the optimized process decreased by 4.6% compared with the original process, with the reduction primarily reflected in CO2 emissions. Fig. 10 b shows the GHG emissions broken down by process step: the product separation step of the selective oxidation of EG to GA produces the largest share of GHG, followed by the reaction itself. Additionally, direct GHG emissions were virtually eliminated after optimization, indicating that the coupled neural network prediction and multi-objective optimization model established in this study performed well during the optimization process. It offers clear benefits for reducing carbon emissions during production and contributes to the goal of greener chemical engineering. More detailed calculation data can be found in the tables in the Supporting Information. According to the optimization results, the selectivity toward by-product CO2 in the oxidation of EG to GA is essentially zero after optimization. This result surpasses current GA oxidation production processes, whether under alkaline [ 72–75 ] or non-alkaline [ 76 ] conditions.
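The per-ton normalization used in the GHG comparison above can be made explicit. Only the direct-emission figures (164.74 → 0.07 kg/h) are taken from the text; the indirect-emission and GA production-rate numbers below are placeholders, so the computed overall reduction will not reproduce the reported 4.6%.

```python
def ghg_per_ton(total_ghg_kg_h, ga_prod_ton_h):
    """Normalize hourly GHG emissions by hourly GA production (kg CO2-eq per t GA)."""
    return total_ghg_kg_h / ga_prod_ton_h

# Direct CO2 figures from the text; the "over 99%" cut in direct emissions.
direct_cut = 1.0 - 0.07 / 164.74

# Assumed indirect emissions (800 kg/h) and production rates (1.00 -> 0.97 t/h),
# purely to show why a lower GA output partly offsets the emission cut.
base = ghg_per_ton(164.74 + 800.0, 1.00)
opt = ghg_per_ton(0.07 + 800.0, 0.97)
overall_reduction = 1.0 - opt / base
```

Because the optimized process makes slightly less GA, the per-ton reduction is smaller than the raw hourly reduction, which is exactly why the paper compares emissions per ton of product.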
Under electrocatalytic conditions, EG oxidation can be coupled with water electrolysis to reduce the formation of by-product water, thereby improving atom utilization. However, a reaction pathway generating CO2 also exists, and both the conversion rate and the selectivity toward glycolic acid are lower than in thermal oxidation [ 77 , 78 ]. GA production by biomass fermentation, as reported in the current literature, requires reaction times significantly longer than the oxidation process, and its selectivity is also poorer [ 79–82 ]. Some researchers have proposed using enzymes found in microalgae to produce GA, a route that absorbs CO2 through photosynthesis and can thus achieve negative CO2 emissions; however, current experimental GA yields remain far below theoretical values, leaving a long path to industrialization [ 83 ]. Therefore, optimizing the parameters of the existing, feasible process for producing GA by oxidation of EG toward greener, lower-carbon operation is of significant practical importance. This also reaffirms the novelty of the hybrid surrogate-model optimization framework proposed in this study and offers a valuable reference for scaling the current viable process to industrial levels. 4 Conclusion In this study, a neural network model was developed to provide surrogate prediction and optimization for an eco-friendly GA production process. Because experimental data were scarce, a reaction mechanism model was constructed in Aspen HYSYS to generate a substantial amount of data under various conditions. The training of the neural network was additionally constrained by the carbon-element conservation equation. The results showed that the neural network model with the carbon conservation constraint had better predictive performance for reaction conversion rates and product selectivity.
After validating with 40% experimental data and 16% mechanistic data, the neural network structure with 6 hidden layers and 14 nodes achieved the best predictive performance. The error between the predicted values and the actual values was within ±5%, and the linear correlation coefficient R 2 was 0.998, meeting the production process requirements for product distribution prediction. To achieve fast optimization and reduce direct CO 2 emissions during the production process, the NSGA-II multi-objective optimization algorithm was integrated into the neural network model. This integration enables the optimization of reaction temperature and residence time. After analyzing GHG emissions before and after process optimization, the results showed that for every ton of GA produced, overall GHG emissions were reduced by 4.6%, with direct CO 2 emissions reduced by over 99%. Overall, the neural network prediction and optimization model established in this study represents an effective application of artificial intelligence in the engineering scale-up process of green chemical technology. It holds significant value in aiding and guiding the rapid development of green chemical industry. Current research results indicate that efficient catalyst design has achieved remarkable success in terms of catalytic conversion rates and selectivity. Therefore, the focus of future work will shift towards industrializing these design achievements and their practical application. Concurrently, in the research of process scale-up, the design of innovative processes and system integration remains at the core of investigations. Most importantly, with the continuous emergence of new tools and algorithms, utilizing these advanced technologies to tackle real-world issues such as catalyst industrial design and chemical process integration will become a paramount priority in future scientific research. 
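The carbon-conservation constraint described in the conclusion can be written as a soft penalty added to the data-fit loss. The sketch below is framework-agnostic plain Python, not the authors' code; the species split (GA, FA, CO2), the example values, and the penalty weight lam are illustrative assumptions.

```python
def constrained_loss(pred, target, lam=10.0):
    """Data-fit MSE plus a carbon-conservation penalty.

    pred/target: dicts holding EG conversion 'x' and carbon-based
    selectivities toward GA, FA, and CO2. The penalty forces the predicted
    carbon selectivities to sum to one (carbon element conservation).
    """
    keys = ("x", "s_ga", "s_fa", "s_co2")
    mse = sum((pred[k] - target[k]) ** 2 for k in keys) / len(keys)
    carbon_residual = pred["s_ga"] + pred["s_fa"] + pred["s_co2"] - 1.0
    return mse + lam * carbon_residual ** 2

# Illustrative values: pred's selectivities sum to 0.99, violating conservation.
pred = {"x": 0.85, "s_ga": 0.90, "s_fa": 0.06, "s_co2": 0.03}
target = {"x": 0.86, "s_ga": 0.91, "s_fa": 0.06, "s_co2": 0.03}
loss = constrained_loss(pred, target)
```

Minimizing this loss drives the predicted carbon-based selectivities toward summing to one, which is how the physical constraint improves the network's predictions beyond what the data fit alone provides.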
CRediT authorship contribution statement Zhibo Zhang: Writing – original draft, Methodology, Conceptualization. Yaowei Wang: Investigation, Funding acquisition, Formal analysis. Dongrui Zhang: Investigation, Formal analysis. Deming Zhao: Investigation, Formal analysis. Huibin Shi: Investigation, Formal analysis. Hao Yan: Investigation, Formal analysis. Xin Zhou: Validation, Supervision, Funding acquisition. Xiang Feng: Supervision, Investigation, Formal analysis. Chaohe Yang: Supervision, Investigation, Funding acquisition. Declaration of competing interests The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Yaowei Wang, Deming Zhao, and Huibin Shi are currently employed by Shandong Chambroad Petrochemicals Co. Ltd. The other authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments Special thanks for the support from the National Natural Science Foundation of China (No. 22108307 ). Thanks for the support from the open project of the State Key Laboratory of Heavy Oil Processing . Appendix A Supplementary data The following are the Supplementary data to this article: Multimedia components 1–5. Supporting Information exhibits the details of simulation and optimization of the process model, original codes, optimization of neural network hyperparameters, and the calculation method of CO2 emissions. Supplementary data to this article can be found online at https://doi.org/10.1016/j.gce.2024.06.002 .
|
[
"JIANG",
"YANG",
"WANG",
"ZHU",
"ARTZ",
"MA",
"DELUNA",
"ROGELJ",
"WU",
"ZOU",
"JIANG",
"WEI",
"COLBERG",
"DUARTE",
"BUDAK",
"TAN",
"DONG",
"TAVARESLIMA",
"WANG",
"SALUSJARVI",
"BOJE",
"ZEB",
"WERBINSKAWOJCIECHOWSKA",
"ROBU",
"MARCATO",
"JI",
"SHIELDS",
"SAMEK",
"COSTA",
"DENG",
"BAO",
"BA",
"MALIK",
"GUO",
"MA",
"SHARMA",
"YIN",
"SHIN",
"ZHOU",
"LI",
"SCHWEIDTMANN",
"KARNIADAKIS",
"BIZON",
"ZHOU",
"LIU",
"ULLAH",
"MEHRANI",
"DONG",
"ZHOU",
"YANG",
"HWANG",
"YAN",
"ZHOU",
"ZHOU",
"GAO",
"HOBBIE",
"CHEN",
"ZAHMATKESH",
"PATEL",
"KOU",
"NGO",
"CHEN",
"PATEL",
"CHOI",
"WU",
"XIAO",
"SOROURIFAR",
"GUZMANMARTINEZ",
"MALEKI",
"YIN",
"RANGAIAH",
"VANHAASTERECHT",
"ZHAN",
"SHI",
"BERNDT",
"DU",
"TANG",
"FALASE",
"LU",
"ZHANG",
"BENBASSAT",
"KATAOKA",
"KANG"
] |
2ea656e738bd4222a27d96c295360113_Airway management in obese patients_10.1016_j.bjane.2020.12.017.xml
|
Airway management in obese patients
|
[
"Gómez-Ríos, Manuel Ángel",
"Gómez-Ríos, David",
"Xu, Zeping",
"Esquinas, Antonio M."
] | null |
Dear Editor, We read with interest the article of Turna et al. on their randomized trial comparing the performance of the Airtraq videolaryngoscope with the intubating laryngeal mask airway (ILMA) in obese patients. There are several aspects of the study that we believe necessary to consider. 1 Airway management in obese patients is a challenging issue associated with a high incidence of complications. The accumulation of adipose tissue causes several changes in airway anatomy and respiratory function. Thus, obesity is associated with, among others, decreased pharyngeal area, obstructive sleep apnea, restricted neck flexion, narrow jaw opening, enlarged tongue, reduced functional residual capacity and alveolar oxygen reserve, and increased O2 consumption. Therefore, obese patients are at increased risk of difficult mask ventilation, difficult tracheal intubation, and hypoxemia during the process of securing the airway, even after short periods of apnea. The core recommendations of recent guidelines focus on limiting the duration and number of attempts at tracheal intubation in order to achieve early atraumatic intubation, the philosophy on which the vortex approach is based. Accordingly, an undue number of attempts to test a device is not justified. To this end, a useful consensus on airway research ethics, which every researcher should take into account, was published in 2016. It recommends limiting intubation to a maximum of two failed attempts before following the usual progression in the airway management algorithm, and restricting the inclusion of patients to ASA I and II to minimize harm. 2 Likewise, direct laryngoscopy may not be the most suitable rescue method after unsuccessful use of a videolaryngoscope or an ILMA, given that its probability of success can be lower in this situation. Perhaps it would have been more appropriate to use the other device under study as a backup plan.
In addition, any blind technique should be avoided due to the significant failure rate, the frequent need for repeated attempts, and the potential for airway trauma, which can result in deterioration of ventilation. Therefore, fiberoptic intubation through the ILMA is the recommended method. 3 On the other hand, testing a laryngeal video mask such as the Totaltrack VLM (Medcomflow S.A., Barcelona, Spain) instead of the ILMA against the Airtraq would have allowed a closer comparison. In fact, it is a device similar to the Airtraq, since it has a guide channel and a fiberoptic system with an LCD screen that provides a view of the larynx and of the tracheal tube as it passes through the vocal cords. It also combines a supraglottic airway device with the described structure, allowing intubation to be performed after the airway has been secured and optimal ventilation established, thereby limiting the period of apnea. 4 This is especially advantageous in obese patients, since they have reduced physiological reserves. 5 Similar clinical trials are necessary to determine the most reliable and safe airway method for this population. 5 Conflicts of interest The authors declare no conflicts of interest.
|
[
"TURNA",
"WARD",
"GOMEZRIOS",
"GOMEZRIOS",
"GOMEZRIOS"
] |
9306563efe6d43d280986428d3e2f1f6_Effectiveness of functional orthodontic appliances in obstructive sleep apnea treatment in children _10.1016_j.bjorl.2021.02.010.xml
|
Effectiveness of functional orthodontic appliances in obstructive sleep apnea treatment in children: literature review
|
[
"Bariani, Rita Catia Brás",
"Bigliazzi, Renato",
"Cappellette Junior, Mario",
"Moreira, Gustavo",
"Fujita, Reginaldo Raimundo"
] |
Introduction
Obstructive sleep apnea syndrome is a common condition in childhood and if left untreated can result in many health problems. An accurate diagnosis of the etiology is crucial for obstructive sleep apnea treatment success. Functional orthodontic appliances that stimulate mandibular growth by forward mandibular positioning are an alternative therapeutic option in growing patients.
Objective
To perform a literature review about the effects of functional orthodontic appliances used to correct the mandibular deficiency in obstructive sleep apnea treatment.
Methods
The literature search was conducted in June 2020 using Cochrane Library; PubMed, EBSCO (Dentistry & Oral Sciences Source), LILACS Ovid; SciELO Web of Science; EMBASE Bireme and BBO Bireme electronic databases. The search included papers published in English, until June 2020, whose methodology referred to the types and effects of functional orthopedic appliances on obstructive sleep apnea treatment in children.
Results
The search strategy identified thirteen articles; only four articles were randomized clinical studies. All studies using the oral appliances or functional orthopedic appliances for obstructive sleep apnea in children resulted in improvements in the apnea-hypopnea index score. The cephalometric (2D) and tomographic (3D) evaluations revealed enlargement of the upper airway and increase in the upper airspace, improving the respiratory function in the short term.
Conclusion
Functional appliances may be an alternative treatment for obstructive sleep apnea, but it cannot be concluded that they are effective in treating pediatric obstructive sleep apnea. There are significant deficiencies in the existing evidence, mainly due to absence of control groups, small sample sizes, lack of randomization and no long-term results.
|
Introduction Obstructive sleep apnea syndrome (OSAS) in childhood is characterized by intermittent partial (obstructive hypopnea) or complete collapse of the upper airway (apnea) during sleep. OSAS is a common condition in childhood (ranging from 1.2% to 5.7%) 1 and if left untreated can result in many health consequences including lethargy, memory loss, problems with thinking and judgment, disruption of normal metabolic functions, and cardiovascular disorders. 2 Obstructive sleep apnea (OSA) in children differs in relation to adults regarding the pathophysiology, clinical picture, diagnosis and treatment. 3 Pharyngeal and palatine tonsillar hypertrophy and obesity are the most common causes of the syndrome in childhood, but the complexity of OSAS is exemplified by other related factors involving the craniofacial structures and neuromuscular tone. 4 OSA severity is heterogeneous among patients and the wide range of presentation leads to variations in management approach and differences in treatment response. 4 5 The treatment of OSA is based on the child’s age, severity of symptoms, clinical findings, presence of comorbidities, and polysomnographic (PSG) findings. High clinical therapeutic effectiveness for OSA has been reported after adenotonsillectomy in nonobese children, and there is evidence of improvements in oximetry as well. 6 Evidence-based guidelines support the use of continuous positive airway pressure treatment (CPAP) as an effective first-line treatment of OSA in children without adenotonsillar hypertrophy; however, this is complicated by low tolerance or high refusal level of treatment (25%–50%). 7 8–10 Children with OSA with concomitant craniofacial risk factors should be referred to an orthodontist involved in a multidisciplinary sleep medicine team. Orthodontic treatment for correction of maxillomandibular anomalies or mandibular retrusion has been shown to improve OSA. 
Functional orthodontic appliances (FOA) are used for craniofacial abnormalities and may induce significant change in mandibular shape that leads to correction of dentoskeletal disharmony associated with mandibular retrusion. 11 The nature of the variations that induce mandibular growth with functional appliances is not yet clear but orthopedic correction of mandibular retrognathism seems to increase the airway space in the short term in 3-dimensional (3D) perspective. 12 Several studies in the literature have investigated the mechanisms of action and the effects of functional appliances and there is no evidence of contra-indications or even significant side effects as its use is short-term in nature. 13 Recent systematic reviews and meta-analyses have shown that, in the short term, FOA produces greater skeletal mandibular effects when performed at puberty. 14 In patients treated before the pubertal period, the significant effects seems to be confined to the dentoalveolar level, with minimal clinical implications. 12 15 There are few studies evaluating the use of FOA and their effectiveness in children during sleep for OSAS. The aim of this study, therefore, was to perform a literature review about the effects of FOA used to correct the mandibular deficiency in OSA treatment. 16 Methods Search strategy Two authors (R.C.B.B. and R.B.) screened studies and extracted data independently in Cochrane Library; PubMed, EBSCO (Dentistry & Oral Sciences Source), LILACS Ovid; SciELO Web of Science; EMBASE Bireme and BBO Bireme electronic databases. 
The following search strategy was used: apnea syndrome, sleep OR apnea syndromes, sleep OR apnea, sleep OR apneas, sleep OR breathing, sleep-disordered OR hypersomnia with periodic respiration OR hypopnea, sleep OR hypopneas, sleep OR mixed central and obstructive sleep apnea OR mixed sleep apnea OR mixed sleep apneas OR sleep apnea OR sleep apnea syndrome OR sleep apnea, mixed OR sleep apnea, mixed central and obstructive OR sleep apneas OR sleep apneas, mixed OR sleep disordered breathing OR sleep hypopnea OR sleep hypopneas OR sleep-disordered breathing OR sleep apnea OR sleep apnea OR sleep apnea syndrome OR sleep apnea syndrome OR snoring OR upper airway resistance syndrome AND intraoral OR intra-oral OR oral OR klammt OR bimler OR “functional orthodontic appliance” OR “functional orthopedic appliance” OR “activator appliance” OR “mandibular advancement appliance” OR “oral appliance” OR “kinetor appliance” OR “planas appliance” OR “bimler appliance” OR “frankel appliance” OR “frankel function regulator” OR “functional regulator” OR “harvold activator” OR “andresen appliance” OR “bass appliance” OR bionator OR “bite block” OR “twin block” OR “herbst appliance” OR “herren activator” OR “woodside activator” OR “dental device” OR “intraoral device” OR “oral device” OR “anterior mandibular positioning device” OR “tongue device” OR “mandibular device” OR “mandibular advancement device” OR “dental appliance” OR “tongue appliance” OR “mandibular appliance” OR “intraoral appliance” OR “mandibular advancement splint” OR “mandibular prosth*” OR correct* OR prevent* OR intercept* AND orthodont* AND device* OR mobile OR equipment OR appliance* OR removable OR orthodont*. All reviewed articles and cross-referenced studies were screened for relevant data. A manual review of reference lists of included studies and previously published systematic reviews and meta-analyses on OSA and intraoral appliances was also conducted. No language restrictions were applied. 
Any disagreement was solved by consensus. All reviewed articles and cross-referenced studies were screened for relevant data. Inclusion criteria The inclusion criteria were formulated according to the population, intervention, comparison, outcome, study design (PICOS) principle: 17 Population — Children and adolescents (14 years old or younger) diagnosed with OSA without craniofacial syndromes. Intervention — FOA. Comparison — With or without a control group or pre-treatment and post-treatment. Outcome — Primary outcome was the apnea-hypopnea index (AHI); secondary outcomes were (1) oxygen saturation level, (2) sleep quality (SQ), (3) improvement on sagittal relationship between the maxilla-mandible measured by cephalometric data; and (4) upper-airway space. Study design — Case reports, pilot studies, randomized (RCTs) and nonrandomized controlled trials. Studies considered for inclusion were published in any language. As one of the outcomes is the AHI, polysomnography was mandatory for inclusion of the chosen articles. Data items and collection The following data items were independently extracted from each included study by two reviewers: author, year of publication, study design, subjects, age, interventions, wearing time, drop out, AHI before and after FOA (only effects would be pooled), and secondary outcomes. Results Summary of included studies A flow diagram of the study identification, screening, eligibility, and inclusion is shown in Fig. 1 . A total of 754 studies were identified and assessed for inclusion. After exclusion on the title and abstract stages, 22 articles were retrieved for full review. Nine were later excluded after full text review for different reasons. Therefore, only 13 articles met the inclusion criteria set for this study. Key methodological and descriptive characteristics of the included articles are presented in Tables 1–4 . 
All the included articles were published between 2002 and 2019 and were written in English, except for one article in German. A summary of the characteristics and results of the 13 included studies is shown in the tables: five clinical trial studies ( Table 1 ), three RCTs ( Table 2 ), three case reports ( Table 3 ) and two pilot studies ( Table 4 ). The included studies investigated a total of 271 growing subjects (range 3.5–14 years), with a mean age of 7.61 ± 1.99 years. As with age, the treatment observation period varied widely between studies (range 1–20 months), with a mean treatment time of 7.71 ± 5.13 months. As for the type of removable appliance, the most used were the Twin Block, Frankel II 22,24,26 and Modified Monoblock. 27,28 Three studies 18,21 did not report the amount of mandibular advancement during treatment, while in three others 19,22,30 a single mandibular advancement to an incisor end-to-end relationship was performed. In the remaining studies, mandibular advancement varied from 3 to 7 mm. 18,21,25 Regarding changes in AHI, twelve studies reported a reduced AHI after treatment, although this conclusion could not be reached statistically owing to the considerable heterogeneity of the pooled data. Only Rădescu et al., in a case report, found a negative correlation between AHI and FOA. Villa et al. 26 summarized sleep quality (SQ) data as daytime and nighttime symptoms, expressed as the percentage of positive reports among treated subjects; the administered questionnaires showed diminished symptoms after 6 months of treatment. Conversely, Cozza et al. 23 discussed reduced daytime sleepiness following treatment, but without reporting any data. Overall, the appliances were well tolerated. 18,21 Discussion Effective treatment for OSA in children should be focused on one or more risk factors to help cure the obstruction. An accurate diagnosis of the etiology of OSA is crucial for the treatment success.
Conditions such as obesity, adenoid hypertrophy, craniofacial abnormalities, and other factors could narrow the anatomic airway. A significant number of children with OSA do not respond favorably to the primary treatment “adenotonsillectomy” or do not tolerate CPAP treatment. Removable functional appliances are less invasive and can be better tolerated than other modalities. 31 OSA has been associated with deviations in craniofacial growth in children. Maxillary constriction and skeletal class II with retruded small mandible and hyperdivergent pattern have been widely accepted as dominant risk factors of OSA. 24 An et al. 32–34 emphasize that the strength of the relationship between these craniofacial morphologies and the development of OSA is not well established. The authors 35 identified three phenotypes in OSA adults based on clustering using craniofacial variables in relation to OSA severity and obesity and characterized the phenotypes by differential correlation factors to OSA severity (AHI): Cluster-1, obesity type, Cluster-2, skeletal type, and Cluster-3, complex type. The patients in Cluster-2, who have collapsible upper airway primarily driven by craniofacial anatomic vulnerability without non-anatomic problems, would be the best indication of orthopaedic or surgical modification of craniofacial structure. 35 FOA has been used for many decades to correct mandibular retrognathism. To treat some types of malocclusion, the mandible posture is previously changed to stimulate mandibular growth, especially in cases of retrognathism. Functional treatment stimulates mandibular growth by forward posturing of the mandible with the condyles displaced downward and forward in the glenoid fossa. This change will also transform the relationship between all structures adjacent to the mandible, also increasing the dimensions of the upper airways. 
Growing adolescents with skeletal class II malocclusions treated with functional appliances demonstrated an increase in pharyngeal airway dimensions of the oropharyngeal region, and such changes were consistently maintained even after growth completion. 32,36–38 The aim of this review was to evaluate the types of FOA and their effectiveness for sleep apnea in children. Few prospective and randomized clinical studies of adequate methodological quality were identified and included in this study. Villa et al., in 2002, reported that, in addition to treating the craniofacial problem, FOA may also treat OSA because they promote mandibular repositioning during sleep and increase the retroglossal space by anterior displacement of the tongue, improving respiratory function, especially at night. Therefore, early treatment of craniofacial abnormalities may prevent the development of long-term respiratory failure, with an impact on quality of life in adulthood. 23,34,39–41 Several randomized clinical trials suggest that orthodontic treatments, such as mandibular advancement with functional appliances, can be effective in the management of pediatric snoring and OSA. These results indicate that correcting craniofacial structural imbalances during growth can reduce snoring and OSA in children and adolescents; accordingly, orthodontic treatment using FOA is considered a potential additional treatment for pediatric OSA in all the included studies. 23–25 The amount of mandibular advancement in FOA construction varies across patients, and this is evident in the great variation reported in the studies of our review.
In case of limited overjet, the bite can be registered by placing the incisors in an edge-to-edge relation, while in case of large overjet the bite is usually registered 2–3 times, advancing the mandible gradually (step by step) with a limit of 4 mm per advancement, which brings greater orthopedic changes and has a clearly positive impact on the improvement of oropharyngeal conditions. The studies included in this review used different appliances to achieve mandibular advancement, but similar results were observed regardless of the type of appliance. Within the limitations and heterogeneity of the included studies, it appears that, whatever the specific appliance used and the protocol followed, AHI was reduced after treatment, with reports of improved daytime sleepiness and sleep quality, decreased snoring and mouth breathing, enlargement of pharyngeal dimensions and beneficial cephalometric changes. No study included in our review assessed the impact of FOA treatment over a long-term observation period. 37 Removable functional appliances can help improve the permeability of the upper airway during sleep, widening the upper airway and decreasing its collapsibility by increasing its muscle tone. FOA therapy should be encouraged in pediatric OSA, and an early approach can lead to lasting improvements in breathing, including nasal breathing, thereby preventing upper airway obstruction. 32 42 Our literature review found low-quality evidence to support the use of mandibular advancement appliances in managing obstructive sleep apnea in children. The different therapeutic effects of FOA in the treatment of obstructive sleep disorders might be due to differences in study protocols, appliance design and subject selection.
43 The orthodontist should be part of the health professional team involved in the multidisciplinary treatment of OSAS because, when treating malocclusion and craniofacial orthopedic problems, they may also be treating the respiratory problems of their patients. Conclusion FOA can be considered a potential additional treatment in children with OSA, but more randomized studies with larger sample sizes, involving a representative number of patients with apnea and malocclusion, are needed to establish protocols for daily wearing time, total treatment duration and long-term comparison of the effects of different types of FOA. Funding Associação de Incentivo à Pesquisa ( AFIP ) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior ( CAPES ) provided material and financial support. Grant number 88882.430440/2019-01 . Conflicts of interest The authors declare no conflicts of interest.
|
[
"TSARA",
"MARCUS",
"AIELLO",
"BEHRENTS",
"BERTOZ",
"AMERICANACADEMYOFSLEEPMEDICINE",
"MITCHELL",
"ROBERTS",
"HAWKINS",
"XANTHOPOULOS",
"HUYNH",
"PERINETTI",
"LI",
"KORETSI",
"PAVONI",
"ALJEWAIR",
"DACOSTASANTOS",
"COZZA",
"LEVRINI",
"MASPERO",
"COZZA",
"ZHANG",
"VILLA",
"IDRIS",
"NUNES",
"RADESCU",
"ROSE",
"SCHESSL",
"MODESTIVEDOLIN",
"MACHADOJUNIOR",
"KADITIS",
"PAVONI",
"PIRELLI",
"VILLA",
"AN",
"COZZA",
"GAZZANI",
"HAN",
"VILLA",
"PIRELLI",
"PIRELLI",
"VILLA",
"FLUGER"
] |
dfb971be0fe9479ab9ace85a380d0a22_Influence of thermally grown oxides on interfacial friction during hot deformation of large-size for_10.1016_j.jmrt.2022.10.131.xml
|
Influence of thermally grown oxides on interfacial friction during hot deformation of large-size forging ingots
|
[
"Vedaei-Sabegh, Ali",
"Morin, Jean-Benoît",
"Champliaud, Henri",
"Jahazi, Mohammad"
] |
High-strength steels are pre-heated in gas-fired furnaces before undergoing the open-die forging process. This process increases thermal oxidation on steel surfaces, affecting the interfacial friction between ingot and anvils during deformation. Two medium carbon high-strength steels with different nickel contents were oxidized, and the mechanical characteristics of oxide layers were investigated by micro and nano-indentation methods. It was found that the formed layers on high nickel steel had lower Young modulus and hardness compared to the steel with lower nickel. Finite element modeling and ring tests were used to assess oxide layers' effect on interfacial friction during deformation. The results demonstrated that oxide layers' formation decreased the interfacial friction and deformation load, acting as lubricants at high temperatures.
|
1 Introduction Thermally grown oxides can form on most steels during hot deformation through the interaction of the solid with a reactive gaseous atmosphere [ 1 ]. In addition to material waste and surface quality deterioration caused by oxidation, the interfacial condition can be influenced by the scale layers. The presence of oxide can change the heat transfer between the ingot and the dies as well as with the surrounding environment [ 2 ]. During deformation, the oxide layers can detach and break into the ingot surface, causing cracking [ 3 ]. Also, friction and wear between the ingot and the dies can be affected by the presence of oxides, thereby decreasing their useful service time [ 4 ]. To form the oxide layers, Fe ions must diffuse outward to the surface. The oxidation kinetics is governed by the outward diffusion of Fe cations (Fe2+ and Fe3+) and the inward diffusion of oxygen anions (O2−) [ 5 ]. Temperature and time are two factors that increase the diffusion of Fe and oxygen and hence the kinetics of oxidation [ 6 ]. The thermally grown oxide on pure iron comprises three layers: wüstite (FeO), magnetite (Fe3O4), and hematite (Fe2O3) [ 7 ]. Wüstite is the first layer, closest to the base metal, with the lowest hardness among the scale layers. Hematite has the highest hardness, causing wear on contacting surfaces. Magnetite has an intermediate hardness compared to the two other layers [ 8 ]. The chemical composition of the alloy, the oxidation atmosphere, the oxidation time, and the oxidation temperature all influence the diffusion of Fe and oxygen and, therefore, the thickness of the different scale layers [ 1 ]. For oxidation of low carbon steel at 1073–1423 K, Abuluwefa et al. [ 9 ] observed all three scale layers during oxidation in pure oxygen; however, only a single wüstite layer was found for oxidation of the same steel in a water vapor atmosphere. Si was found by Alaoui Mouayd et al. [ 10 ] to decrease the oxidation of low alloy steel below 1450 K. Takeda et al.
[ 11 ] reported that Cr decreased the oxidation kinetics of low carbon steels by forming a FeO·Cr₂O₄ layer at the oxide-metal interface. Webler et al. [ 12 ] found that the addition of 0.3Cu + 0.1Ni decreased the oxidation of pure iron. Yin et al. [ 13 ] assessed the oxidation of Fe–Cu–Ni alloys at 1423 K, where Ni increased the oxidation resistance. In a recent study, the effect of Ni on the high temperature oxidation of high-strength steels was evaluated by Vedaei et al. [ 14 ]. The results showed a remarkable decrease in oxidation kinetics with the addition of Ni. However, the impact of the oxide layers on interfacial friction during high temperature deformation was not studied. Such an evaluation would require an accurate determination of the mechanical properties of the oxide layers. Considering the different characteristics of each oxide layer, the tribological conditions at the die-ingot interface can be remarkably affected [ 1 ]. Methods such as the ring test and pin-on-disk can be employed to investigate the effect of the oxide layer on the friction between the die and the ingot. The hot ring compression test is a commonly employed method to evaluate interface tribology at high temperature and to accurately obtain the variations in deformation load and interfacial friction coefficient [ 15 , 16 ]. Ashimabha et al. [ 17 ] investigated the interface tribology of 316L stainless steel using ring compression tests under dry deformation conditions. The rings were deformed in the temperature range of 1173–1473 K without considering the effect of thermal oxidation on interfacial friction. The authors reported higher friction at 1173 K and lower friction at 1473 K, attributing the results to temperature-dependent changes in thermal softening and material flowability. 
Munther and Lenard [ 6 ] assessed the effect of oxide layers on interfacial tribology during rolling of AISI 1018 steel and reported that the friction coefficient decreased with increasing oxide thickness, with the highest friction for 0.015 mm of scale and the lowest for 1.01 mm. Employing the high temperature pin-on-disc method, Vergne et al. [ 18 ] investigated the effect of oxidation on the interfacial tribology between an AISI 1018 disc and a cast iron pin. It was found that oxidation at 1223 K for 1 h decreased both interfacial friction and wear. Zambrano et al. [ 19 ] used pin-on-disk tests to assess the effects of oxide layers formed on ASTM A36 steel at 1223 K against two high speed steels. The outcomes showed that the formation of oxides decreased the friction coefficient, whereas the wear rate increased. Graf et al. [ 20 ] used ring compression tests at 1173 and 1273 K to evaluate the friction and deformation load of C15 steel and reported that the presence of 30 and 50 μm of scale remarkably decreased the friction and deformation load of the rings. Hardell et al. [ 21 ] employed pin-on-disc tests to assess the influence of oxide formation on interfacial friction between two steels. It was found that the formation of larger amounts of magnetite at 673 K decreased the friction and wear rate between the contacting surfaces. Odabas [ 22 ] investigated the interfacial tribology of AISI 3315 steel against AISI 3150 steel with the pin-on-ring method, reporting lower interfacial friction on samples submitted to higher loads or sliding speeds and relating this to the higher surface temperatures, and hence oxidation, of such samples. Matsumoto et al. [ 23 ] used ring compression tests to investigate the tribological impact of the oxide layer on the deformation of Cr-rich steels at 1273 K and concluded that the presence of oxide layers with thicknesses in the range of 6–300 μm resulted in reduced deformation loads. 
However, in their simulations, the oxide layer was considered a single layer of wüstite, and very little microstructural analysis of the oxide material behavior was carried out. The above studies show that the formation of thermal oxides reduces the interfacial friction during deformation. Accurate quantification of the impact of oxide layers therefore requires a more precise determination of the mechanical and morphological characteristics of the different oxide layers as a function of process parameters such as oxidation temperature, oxidation time, initial composition, and oxidation atmosphere. The evaluation of these characteristics, and of the different effects of the oxide layers on interfacial friction, is critical information needed to accurately predict the friction coefficient. To acquire the mechanical properties of the different oxide layers, tensile or compression tests could be employed. However, these methods need molds, machining, furnaces, and complex sample preparation, making them time-consuming and costly. Micro- and nano-indentation are prevalent methods for investigating thin films and coatings and are more convenient to conduct than compression tests. However, micro- and nano-indentation results are very sensitive to slight variations in oxidation temperature, oxidation time, oxidation atmosphere, and initial composition. For instance, Takeda et al. [ 24 ] measured at room temperature the hardness of the oxide layers formed on pure iron at 1273 K. The Vickers hardness for wüstite, magnetite, and hematite was 3.5, 4, and 6.7 GPa, respectively. On the other hand, Barrau et al. [ 25 ] reported different hardness values of 2.64–2.94, 4.2–4.9, and 10.1 GPa for the wüstite, magnetite, and hematite layers formed on iron, respectively. Luong and Heijkoop [ 26 ] obtained 4.6, 5.3, and 10.3 GPa for the three oxide layers formed on AISI 1340 steel oxidized at 1423 ± 20 K in air. Amano et al. 
[ 27 ] reported hardness values of 3.5, 3.9, and 7.2 GPa for wüstite, magnetite, and hematite formed on Fe-0.5Si alloys at 1273 K for 18 ks in air, measured at room temperature. Zambrano et al. [ 8 ] investigated the hardness of the different oxide layers formed at 1473 K in air on a low carbon steel and obtained room-temperature hardness values of 5.5 ± 1.1, 6.5 ± 0.9, and 12 ± 2.5 GPa for the wüstite, magnetite, and hematite layers. Hutchings and Shipway [ 28 ] reported a hardness range of 3.6–5.9 GPa for magnetite under different oxidation conditions. These studies reveal that the initial composition has a significant influence on the characteristics of each oxide layer and, therefore, that indentation tests must be conducted for each specific steel. Finally, it must be noted that the measurement technique itself (i.e., nano-indentation or micro-indentation) can also influence the reported results, as evidenced by the magnetite hardness values reported by Chicot et al. [ 29 ] and Seo and Chiba [ 30 ]. Therefore, in the present study, both nano-indentation and micro-indentation techniques were used in order to ensure the accuracy of the obtained mechanical characteristics of the thermal oxides. Although the studies cited above show the impact of oxide layers on the friction between die and ingot, very little information is available on the specific roles of the different oxide layers and on the evolution of their morphological and mechanical properties as a function of oxidation parameters. The aim of the present study is to investigate the characteristics of each oxide layer and to assess the effect of oxidation on the interfacial friction between ingot and anvil during open-die forging. To this end, oxidation experiments were carried out on two different high-strength steels at different temperatures, and the mechanical properties of each oxide layer were assessed employing micro- and nano-indentation tests. 
The results were then employed as input for finite element simulations to evaluate the effect of the oxide layers on the interfacial friction between anvils and ingot, using high-temperature ring tests to simulate the open-die forging process. 2 Materials and methods Two high-strength medium carbon low alloy steels were used in the present study. One of the sub-objectives of the project was to evaluate the impact of a higher Ni content on the oxidation behavior of large-size components during open-die forging; the compositions are shown in Table 1 . For ease of use, the steels are identified as low nickel (LNi) and high nickel (HNi). The materials were provided by Finkl Steel, Sorel, Quebec, Canada, and were obtained from large-size forgings used in the energy and transportation industries. 2.1 Oxidation experiments Samples for the oxidation experiments were cylinders of 10 mm diameter and 15 mm height. Before the oxidation experiments, the samples were ground with 320-mesh SiC papers to achieve similar surface roughness, ultrasonically cleaned, and kept in a vacuum chamber. The oxidation tests were conducted employing a radiative furnace mounted on a Material Testing System (MTS), series 809. The furnace was a water-cooled Research Inc. E4 IR radiative type, equipped with four lamps and elliptical polished aluminum reflectors that provided the infrared radiation and produced a hot zone with uniform temperature over a 100 mm distance, thereby producing uniform temperature conditions all around the samples. Samples were heated to an oxidation temperature of 1473 K at a heating rate of 2 K/s. The heating was conducted under the protection of argon gas with a flow rate of 50 ml min⁻¹ to avoid oxidation during heating. At this temperature, the argon was switched off for 60 min, leaving the sample to oxidize in air. The argon flow was then reconnected to cool the sample down to room temperature and avoid oxidation during cooling. 
The formed scale layers are fragile and need to be carefully preserved for metallography, indentation, and microscopy. A method was therefore developed to protect the scale layers from handling damage. It consisted of applying to the oxidized sample, in a vacuum chamber, a cold mounting product composed of 60–70% zirconium oxide, 20–40% fused silica, and 0–10% aluminum oxide, with a hardener composed of amines (polyethylenepolyamine and triethylenetetramine). As thermal oxidation is a surface phenomenon, it is governed by the diffusion of ions near the surface; the oxide growth on a small sample is therefore representative of that on a large-size ingot. 2.2 Indentation on oxide layers To evaluate the material properties of the different scale layers, indentation tests were conducted. Micro-indentation tests were conducted using an Anton Paar Micro-Hardness Tester (MHT), and nano-indentation tests were carried out with a Micromaterials NanoTest Vantage. For both tests, a Vickers diamond indenter was used on the transverse section of the oxide layers. Before nano-indentation, the sample was placed in the machine chamber for 48 h to ensure temperature homogeneity. During the indentation, the diamond tip was pressed into the examination point up to a specific load, held there for 30 s, and then unloaded [ 31 ]. The output of an indentation test is a curve showing the applied load versus the indentation depth. The contact depth of the indenter, h_c, was calculated using the Oliver and Pharr analysis [ 32 , 33 ]: (1) h_c = h_m − ε F_m / S, where h_m is the maximum indentation depth (μm), ε is a constant related to the indenter geometry, F_m is the maximum normal load (mN), and S is the stiffness of the sample (mN/μm), acquired from the slope of the unloading portion of the indentation curve. 
From the unloading part of the curve, the instrumented hardness of the tested material, H_IT, can be obtained using equation (2) [ 32 , 33 ]: (2) H_IT = F_m / A_p, where A_p is the projected contact area between the indenter and the specimen at the maximum depth and load (μm²). The reduced elastic modulus, E_r (GPa), accounts for the elastic displacement occurring in both indenter and sample and can be obtained as follows [ 32 , 33 ]: (3) E_r = (√π / 2β) (S / √A_p), where β is a geometrical constant on the order of unity. The instrumented elastic modulus of the specimen, E_IT, is acquired as follows [ 32 , 33 ]: (4) 1/E_r = (1 − ν_s²)/E_IT + (1 − ν_i²)/E_i, where ν_s is the Poisson's ratio of the specimen and E_i and ν_i are the elastic modulus and Poisson's ratio of the indenter, which for a diamond indenter are 1141 GPa and 0.07 [ 34 ]. 2.3 Hot compression tests To obtain the stress-strain behavior of the studied materials for the friction assessment, hot compression tests were conducted. Cylindrical samples of 10 mm diameter and 15 mm height were heated to 1373 and 1473 K under argon protection to avoid oxidation and compressed to 50% of their height. The strain rate was 0.25 s⁻¹, selected based on the parameters employed by the industrial partner of this study. To minimize the effect of friction on the results, graphite powder and mica sheets were placed between the sample and the anvils. However, slight barreling was still observed, indicating some friction during compression testing. Therefore, the stress-strain curves were corrected for this frictional effect to ensure an accurate evaluation of the material's mechanical behavior during high-temperature deformation. The instantaneous friction was calculated as follows [ 35 ]: (5) μ = μ₀ + A exp(ε/ε₀), where μ₀, A, and ε₀ are constants given in Table 2 for the different temperatures based on empirical investigations [ 35 ]. 
Using equation (5) and the data in Table 2 , the friction-corrected stress can be calculated as follows [ 35 ]: (6) σ = C²P / (2[exp(C) − C − 1]), with (7) C = 2μr/h, where P is the stress acquired from the testing device and r and h are the specimen's initial radius and height. 2.4 Hot ring compression test for friction assessment Originated by Kunogi [ 36 ] and developed into a practical method by Male and Cockcroft [ 37 ], the ring test is an accurate way to obtain the friction coefficient. It is commonly used to determine the interfacial friction coefficient of various materials, including steels [ 15 , 16 , 38 ]. Rings with an outer diameter of 18 mm, an inner diameter of 9 mm, and a height of 6 mm were machined from the as-received materials. These rings were heated to deformation temperatures of 1373 and 1473 K and isothermally deformed to 50% of their height. The variation of the ring's inner diameter with height reduction can be correlated with the friction at the die-part interface [ 39 ]: if the friction is low, the ring flows outwards and the inner diameter increases; if the friction is high, the material flows inward and the inner diameter decreases. The rings were oxidized before deformation to evaluate the effect of thermal oxidation on the interfacial friction during deformation. Before compression testing, the rings were hung from a support inside the furnace so that both parallel surfaces of the sample were in contact with the air and oxidized following the procedure described in section 2.1 . Two oxidation temperatures, 1373 and 1473 K, and three oxidation times, 10, 30, and 60 min, were used to assess the influence of different oxide thicknesses on interfacial friction. 
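As a minimal numerical sketch of the indentation analysis in section 2.2, the Python function below applies equations (1)–(4) to a single unloading curve. The default values of ε and β are typical assumptions for a Vickers-type analysis, not values taken from the paper, and any input numbers used with it are illustrative rather than measured data.

```python
import math

def oliver_pharr(h_m, F_m, S, A_p, eps=0.75, beta=1.05,
                 nu_s=0.3, E_i=1141.0, nu_i=0.07):
    """Oliver-Pharr analysis of one indentation unloading curve.

    h_m : maximum indentation depth (um)
    F_m : maximum normal load (mN)
    S   : unloading stiffness dF/dh (mN/um)
    A_p : projected contact area at maximum depth and load (um^2)
    eps, beta : indenter-geometry constants (assumed typical values)
    Returns (h_c, H_IT, E_IT); with mN/um units, H and E come out in GPa.
    """
    h_c = h_m - eps * F_m / S                                     # eq. (1)
    H_IT = F_m / A_p                                              # eq. (2)
    E_r = math.sqrt(math.pi) / (2.0 * beta) * S / math.sqrt(A_p)  # eq. (3)
    # eq. (4): remove the diamond indenter's compliance (E_i, nu_i)
    inv = 1.0 / E_r - (1.0 - nu_i ** 2) / E_i
    E_IT = (1.0 - nu_s ** 2) / inv
    return h_c, H_IT, E_IT
```

For example, `oliver_pharr(h_m=2.0, F_m=500.0, S=1000.0, A_p=50.0)` gives a contact depth of 1.625 μm and an instrumented hardness of 10 GPa.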
3 Finite element modeling of the ring test Friction calibration curves (FCC), as reported in the literature [ 39 ], have been widely employed to estimate the friction coefficient from the variation of the inner diameter at a specific height reduction. However, studies have demonstrated that FCCs can differ remarkably with material, deformation temperature, heat transfer, ring geometry, etc. [ 39 ]. In other words, the use of conventional FCCs is not a precise approach for all investigations. On the other hand, experimental determination of FCCs for all the conditions encountered in actual deformation processing is very time-consuming; hence, finite element (FE) modeling with the Abaqus/CAE software was used to develop the FCCs of each of the investigated steels. Due to the axisymmetric nature of the ring compression test, a 2D axisymmetric model was developed. Furthermore, because of the planar symmetry during compression, half of the ring's cross-section was considered in the simulations, which further reduced the computational time. The die was modeled as a non-deformable discrete rigid body with 2-node linear axisymmetric rigid elements (RAX2). The ring was modeled as deformable elastic-plastic with a 4-node bilinear axisymmetric quadrilateral mesh (CAX4R) for axisymmetric stress. The mechanical properties were acquired from the cylindrical compression tests. Rings were deformed up to 60% of their height. Fig. 1 shows the employed mesh and the deformation of the LNi ring at 1473 K. Based on the machine manual, the die temperature was considered equal to that of the ring, as the temperature is uniform over a 100 mm length inside the radiative furnace thanks to the long radiative lamps and reflectors. This assumption was verified by installing a thermocouple on the die surface during oxidation, where the discrepancies between ring and die temperatures were negligible. 
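For reference, the friction correction of equations (5)–(7) in section 2.3 can be sketched as below. The constants μ₀, A, and ε₀ come from Table 2, which is not reproduced in the text, so the values used here are placeholders for illustration only.

```python
import math

# Placeholder constants: Table 2 is not reproduced in the text,
# so these values are illustrative, not the paper's.
MU0, A_CONST, EPS0 = 0.2, 0.05, 1.0

def corrected_flow_stress(P, strain, r, h, mu0=MU0, a=A_CONST, eps0=EPS0):
    """Friction-corrected flow stress for a barreling compression sample.

    P    : stress acquired from the testing device (MPa)
    r, h : initial radius and height of the specimen (mm)
    """
    mu = mu0 + a * math.exp(strain / eps0)                # eq. (5)
    C = 2.0 * mu * r / h                                  # eq. (7)
    return C ** 2 * P / (2.0 * (math.exp(C) - C - 1.0))   # eq. (6)
```

A useful sanity check on equation (6): as μ → 0, the factor C²/(2[exp(C) − C − 1]) tends to 1, so the corrected stress reduces to the measured stress P, as expected for frictionless compression.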
The ring could deform in both horizontal and vertical directions, whereas the die was free to move only in the vertical direction. The two friction models commonly utilized in metal forming to describe the frictional conditions of contacting surfaces are the Coulomb friction model and the constant shear friction law [ 40 ]. These models can be written as follows [ 41 , 42 ]: (8) τ = μP (Coulomb friction model), and (9) τ = mK, K = σ_s/√3 (constant shear friction law), where τ, μ, P, m, and σ_s are the frictional shear stress, friction coefficient, normal stress, shear friction factor, and effective flow stress, respectively [ 41 , 42 ]. The FCCs in this study were developed with both models in order to compare them. 4 Results and discussion 4.1 Micro-indentation and nano-indentation on oxide layers Fig. 2 shows the oxide layers formed on LNi and HNi steel at 1473 K for 60 min. The three oxide layers, hematite at the top, magnetite as an intermediate layer, and wüstite at the bottom, can be seen for both steels. The scale layers formed on HNi are thinner than those on LNi steel, as Ni suppressed the oxidation. The formed oxide sub-layers, and the mechanism by which Ni hinders oxidation, were discussed in a former study [ 14 ]. The thermal oxide formed on HNi steel is continuous and smooth compared to the rough and porous LNi one, particularly for the magnetite layer. Furthermore, the oxide-metal interface of the layers grown on HNi steel is thicker, which implies better adhesion of this scale to the base metal compared to the LNi one. Despite the presence of pores and cracks introduced during sample preparation, there was still sufficient room to conduct the indentation tests properly. The Vickers micro-indenter was applied to the different oxide layers, leaving a square-based pyramidal imprint (see Fig. 3 ). 
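The two friction laws of equations (8) and (9) above can be put side by side in a short sketch. The `crossover_pressure` helper, which finds the normal pressure at which the two laws predict the same shear stress, is an addition for illustration and not part of the paper's analysis.

```python
import math

def tau_coulomb(mu, P):
    """Coulomb friction model, eq. (8): tau = mu * P."""
    return mu * P

def tau_shear(m, sigma_s):
    """Constant shear friction law, eq. (9): tau = m * K, K = sigma_s / sqrt(3)."""
    return m * sigma_s / math.sqrt(3.0)

def crossover_pressure(mu, m, sigma_s):
    """Normal pressure at which both models give the same shear stress."""
    return m * sigma_s / (mu * math.sqrt(3.0))
```

Below the crossover pressure the Coulomb model predicts the lower shear stress; above it, the constant shear law caps the friction stress at a fraction of the material's shear flow stress.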
The indenter was applied with different loads, held for 30 s at peak load, and then unloaded, as illustrated in Fig. 4 -a for the wüstite formed on LNi steel at 1473 K. The evolution of the indenter penetration depth as a function of indentation load is shown in Fig. 4 -b. Employing Fig. 4 -a and 4-b with equations (1)–(4) , the Young's modulus and hardness of the different oxide layers were determined according to Oliver and Pharr's method [ 32 , 33 ]. The results for the wüstite layer formed on both steels are given in Fig. 5 . The average Young's modulus of the wüstite formed on LNi steel, acquired with micro-indentation, was 141.0 ± 7.5 GPa, while that of the HNi steel was remarkably lower at 100.0 ± 6.0 GPa. It is interesting to note that the Young's modulus obtained from nano-indentation of the LNi sample, 145.5 ± 3.5 GPa, is very close to the one obtained by micro-indentation. This finding indicates the reliability of the less demanding micro-indentation technique, compared to the more complex experimental set-up of nano-indentation, for characterizing oxide layers. The hardness of the wüstite on LNi was determined to be 6.0 ± 0.3 GPa by both micro- and nano-indentation, further confirming the reliability of the micro-indentation measurements. It is also interesting to note that the hardness of the wüstite layer on the HNi steel was about 3.7 ± 0.3 GPa, much softer than that measured on the LNi steel. The results also show that the stiffness of wüstite increases with increasing applied load or penetration depth. The elastic-plastic work conducted to penetrate the wüstite layers on both steels was split into 26% elastic work and 74% plastic work. The total work conducted during indentation equals the area under the indentation force-penetration depth curve (see Fig. 4 -b). 
The elastic and plastic work are the areas under the unloading and loading parts of the curve, respectively [ 43 ]. In agreement with the Young's modulus results, the stiffness of the wüstite on LNi steel was higher than that on HNi steel. The results for the magnetite layer formed on LNi and HNi steels are illustrated in Fig. 6 . The average elastic moduli of the magnetite layer acquired by micro-indentation and nano-indentation were 160.0 ± 6.0 GPa and 162.0 ± 5.3 GPa, respectively. A lower value of 143.0 ± 8.6 GPa was measured for the Young's modulus of the magnetite layer grown on HNi steel. The same trend was observed for hardness: the hardness of the magnetite on LNi was 6.0 ± 0.2 GPa, obtained from both micro- and nano-indentation, compared to 5.0 ± 0.4 GPa for the layer on HNi steel. The stiffness of the magnetite formed on LNi was slightly higher than that on HNi. Still, the same split of 26% elastic and 74% plastic work was observed for the magnetite of both steels. Fig. 7 gives the results for the outermost layer, hematite, formed on both steels. For LNi, the Young's modulus reached 236.0 ± 10.7 GPa and 239.0 ± 14.3 GPa as acquired by micro- and nano-indentation, further confirming the agreement between the two techniques. This value was remarkably lower, at 195.0 ± 13.4 GPa, for the hematite on HNi steel. The same trend was observed for hardness, which was 11.0 ± 0.8 GPa and 13.7 ± 0.4 GPa for LNi compared to 8.8 ± 1.0 GPa for HNi. Like the magnetite layer, the stiffness of the LNi hematite was slightly higher than that of the hematite grown on HNi steel. The elastic-to-plastic work split differed from that of the previous two layers, equaling 42% and 58%, respectively. As the deformation occurs at high temperatures, the question arises whether the Young's modulus difference measured at room temperature for the oxide layers persists at high temperature. 
The reported literature shows that the same type of difference measured at room temperature is to be expected at high temperature. Schütze et al. [ 44 ] reported the following equation for obtaining the high-temperature Young's modulus from the value measured at room temperature: (10) E_ox = E_ox⁰ (1 + n(T − 25)), where E_ox is the Young's modulus at high temperature (GPa), E_ox⁰ is the Young's modulus at room temperature (GPa), n is a constant reported to be −4.7 × 10⁻⁴ for iron oxides, and T is the designated temperature (K). Applying this equation to the present case indicates that the difference between the two thermally grown oxides is still present at high temperature. Based on equation (10) , the Young's moduli of 141 and 100 GPa for the wüstite formed on LNi and HNi steels decrease to 45 and 32 GPa (the same ~40% difference) at 1473 K. For the magnetite formed on LNi and HNi, the Young's modulus is 51 and 46 GPa at 1473 K, compared to 160 and 143 GPa at room temperature. For hematite, the layers formed on LNi and HNi steels have Young's moduli of 236 and 195 GPa at room temperature, which become 75 and 62 GPa at 1473 K using equation (10) . The same trend is observed for the other high temperatures. The oxide-metal interface, known as the transition layer, is thicker for HNi steel than for LNi. The thicker transition layer provides better oxide-metal mechanical bonding, resisting descaling of the oxide formed on HNi steel. This was evident when cleaning the samples of their oxides: the oxide formed on LNi was removed with a small force, whereas the HNi oxide required a sharp edge and a higher force. Micro-indentation on the transition layer showed hardness values of 5.1 and 4.7 GPa for LNi and HNi steel, respectively, the thicker layer having the lower hardness. 
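The temperature scaling of equation (10) reduces to a one-line calculation, sketched below; it reproduces the wüstite values quoted in the text (141 → ~45 GPa for LNi and 100 → ~32 GPa for HNi at 1473 K).

```python
def young_modulus_hot(E_room, T, n=-4.7e-4):
    """Eq. (10): E_ox = E_ox0 * (1 + n * (T - 25)).

    E_room : Young's modulus at room temperature (GPa)
    T      : temperature (K), used with the offset of 25 as in the text
    n      : constant, -4.7e-4 for iron oxides [44]
    """
    return E_room * (1.0 + n * (T - 25.0))

# Wustite on LNi and HNi at 1473 K:
# young_modulus_hot(141.0, 1473.0) -> ~45 GPa
# young_modulus_hot(100.0, 1473.0) -> ~32 GPa
```

Because the factor (1 + n(T − 25)) is the same for both steels, the relative difference between the LNi and HNi oxide moduli is preserved at any temperature, which is the point made in the text.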
Immediately after the transition layer, micro-indentation measurements on the base metal indicated a hardness of 3.7 and 2.9 GPa for LNi and HNi, respectively. The low hardness of the base metal in the zone adjacent to the oxide-metal interface is due to decarburization by the diffusion of ions. The above results clearly reveal the differences in the mechanical properties of the different layers and, therefore, the need to quantify their effects on the interfacial friction between ingot and anvils during deformation. 4.2 Friction calibration curves (FCCs) As illustrated in Fig. 8 , the stress-strain curves of both steels were obtained by compression tests at 1373 and 1473 K at 0.25 s⁻¹, and the effect of friction was corrected using the procedure described above. The corrected stress-strain curves were utilized in the FE simulation of the hot ring compression tests to develop the FCCs. The FCCs illustrated in Fig. 9 for the LNi and HNi steels were acquired at the indicated temperatures of 1373 and 1473 K by obtaining the variations of the inner diameter of the rings with height reduction. The curves were acquired with both the Coulomb and constant shear friction models to provide a comparison between the two. The single points on the graphs show the results of the experimental tests on rings with and without oxide layers, linking the FE and experimental parts. The results show that both friction models yield the same predictions from frictionless conditions up to a friction coefficient of m = 0.2; beyond this value, the Coulomb model shows slightly higher values. However, the friction coefficient in the conducted experiments never surpassed m = 0.2. For all tested conditions, the highest friction was obtained for the rings without oxidation, at m = 0.2. At these temperatures, steel tends to stick to the die surfaces, producing high frictional conditions. 
The decrease in friction coefficient as oxidation progresses shows that the oxide layers act as a lubricant at high temperatures. For LNi, at a deformation temperature of 1473 K, the friction decreased from m = 0.2 for the ring without oxidation to m = 0.14, 0.11, and 0.1 after oxidizing the ring for 10, 30, and 60 min, respectively. Therefore, as the oxide thickness increases, it acts increasingly as a lubricant, decreasing the interfacial friction between ring and anvils. This outcome is in accordance with the studies by Zambrano et al. [ 45 ] on the tribological behavior of a mottled cast iron sliding against the oxide formed on ASTM A36 steel, whose results indicated that the friction coefficient decreases with increasing testing temperature due to the formation of oxide layers. For HNi steel, at the same deformation temperature, the friction decreased from m = 0.2 to m = 0.17, 0.14, and 0.12 for the 10, 30, and 60 min oxidation times, respectively. The friction thus also decreased with oxidation for HNi steel, but to a lesser extent than for LNi steel, due to the lower thickness of the oxide formed on HNi steel, where Ni suppresses the oxidation. For LNi steel at a deformation temperature of 1373 K, the friction coefficient was m = 0.16, 0.14, and 0.12 after oxidation for 10, 30, and 60 min. For the same deformation temperature and oxidation times, friction coefficients of m = 0.19, 0.15, and 0.13 were obtained for HNi steel. The results reported in Figs. 5–7 show that the Young's modulus and hardness of the oxide layers formed on the LNi steel are higher than those on the HNi steel. The higher mechanical properties of the oxides on the LNi samples delay the early disintegration of these layers. Since these layers play the role of lubricants, the oxide layers of LNi steel decreased the friction to a larger degree than the HNi oxides. 
This effect was further accentuated by the fact that the oxide layer on LNi was about 20% thicker than the one formed on HNi. As the oxide layers decrease the friction, the forming loads decrease as well. Fig. 10 illustrates the forming loads applied by the anvils to deform the rings, with and without oxide layers, at 1473 K. The loads required to deform the rings at 1473 K decreased with oxidation for both LNi and HNi steels, reflecting the fact that the oxide layers act as lubricants at this temperature. The loads for deformation of LNi steel decreased from 20,412 N without oxidation to 16,815, 15,714, and 14,121 N after oxidation for 10, 30, and 60 min. For HNi steel, the deformation load decreased from a maximum of 19,677 N to 17,589, 15,975, and 14,874 N after oxidation for 10, 30, and 60 min. To validate the accuracy of the developed FE model and FCCs, the FE predictions were compared with the conducted experiments in terms of the outer diameter of the ring after deformation and the deformation load as a function of height reduction. The obtained results are in reasonable agreement, and the errors are given in Fig. 11 . 5 Summary and conclusions The mechanical characteristics of the different oxide layers formed on two high-strength steels were evaluated by micro-indentation and nano-indentation. It was found that, while the same layers of wüstite, magnetite, and hematite were observed for both steels, their characteristics were different. The micro-indentation and nano-indentation results on the thermally grown oxide layers of LNi steel were close and in reasonable agreement. The Young's modulus, hardness, and stiffness of all oxide layers were consistently higher for the layers formed on LNi steel than for the HNi ones, whereas the elastic and plastic work ratios remained the same for the two oxides. 
Ring compression tests and FE modeling were employed to evaluate the interfacial friction using both the Coulomb and constant shear friction models. The results clearly showed that the oxide layer acts as a lubricant at high temperatures, with the oxide layers on LNi steel decreasing the friction more than the oxides formed on HNi steel. This behavior was associated with the greater thickness and higher mechanical properties of the oxide layers formed on LNi steel compared to the HNi ones. Declaration of Competing Interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Mohammad Jahazi reports financial support was provided by the Natural Sciences and Engineering Research Council of Canada (NSERC). Acknowledgment The authors would like to express their appreciation to Finkl Steel for supplying the specimens for the current study. The authors would also like to thank Dr. Daniel Paquet at Institut de recherche d'Hydro-Québec (IREQ) for his collaboration in conducting the indentation tests. This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) in the framework of a Collaborative Research and Development (CRD) project [Grant number 5364418].
|
[
"BIRKS",
"JANG",
"UTSUNOMIYA",
"BARRAU",
"SUAREZ",
"MUNTHER",
"SUAREZ",
"ZAMBRANO",
"ABULUWEFA",
"ALAOUIMOUAYD",
"TAKEDA",
"WEBLER",
"YIN",
"VEDAEISABEGH",
"ALTAN",
"DIETER",
"ASHIMABHA",
"VERGNE",
"ZAMBRANO",
"GRAF",
"HARDELL",
"ODABAS",
"MATSUMOTO",
"TAKEDA",
"BARRAU",
"LUONG",
"AMANO",
"HUTCHINGS",
"CHICOT",
"SEO",
"FISCHERCRIPPS",
"OLIVER",
"OLIVER",
"LAWRENCE",
"LI",
"KUNOGI",
"MALE",
"BEDDOES",
"SOFUOGLU",
"KOBAYASHI",
"AVITZUR",
"ZHU",
"LEE",
"SCHUTZE",
"ZAMBRANO"
] |
b3f33044e78b42569c4eb4939f533a78_Prevalence antimicrobial resistance and phylogenetic analysis of Salmonella contamination and transm_10.1016_j.vas.2025.100428.xml
|
Prevalence, antimicrobial resistance and phylogenetic analysis of Salmonella contamination and transmission in yellow-feathered broiler hatcheries in China
|
[
"Wu, Canji",
"Deng, Yuhui",
"Chen, Zeluan",
"Peng, Junhao",
"Wu, Peizhi",
"Chen, Jinger",
"Chen, Pengju",
"Liao, Ming",
"Xu, Chenggang",
"Zhang, Jianmin"
] |
Salmonella is a significant avian pathogen causing infectious diseases in poultry, with hatching playing a crucial role in its transmission. Despite its importance, systematic research on Salmonella transmission in hatcheries remains limited. This study evaluates the prevalence and antimicrobial resistance of Salmonella throughout all production stages in yellow-feathered broiler hatcheries: laying, egg storage, incubating, hatching, and post-hatch. We found an overall Salmonella prevalence of 11.3 %, with the pathogen detected in both chickens and environmental samples. The hatching stage was identified as the most critical for Salmonella spread. Moreover, Salmonella Pullorum is the predominant serotype (93.97 %). Notably, all Salmonella isolates exhibited multidrug resistance, with some resistant to polymyxin B (22.41 %) and tigecycline (12.93 %). Resistance rates were highest for nalidixic acid (100.00 %), sulfamethoxazole (100.00 %), ciprofloxacin (95.69 %), and ampicillin (94.83 %). Additionally, antimicrobial resistance plasmid replicons and virulence genes were identified in these isolates. Whole genome sequencing was performed on 43 S. Pullorum isolates, revealing that the majority were ST92 (90.70 %). Phylogenetic analysis classified the isolates into three lineages, with Lineage III being the most predominant (83.72 %). It was found that Salmonella isolates from chicks and eggs across various production stages were closely related, and those from the environment also showed significant similarity. This suggests that Salmonella in the environment may originate from chicks/eggs and spread to other stages. More attention should be paid to Salmonella contamination in yellow-feathered broiler hatcheries, and stringent measures should be taken to control the horizontal spread of Salmonella, in addition to blocking the pathway of vertical transmission.
|
1 Introduction Salmonella is a significant avian pathogen that can cause a reduction in production performance and death in poultry, resulting in substantial economic losses to the global poultry industry ( Caffrey et al., 2021 ; Wang et al., 2020a ). In China, the annual production (head units) of live yellow-feathered broilers is approximately 4.0 billion, comparable to that of white-feathered broilers ( Bai et al., 2021 ). Compared with white-feathered broilers, yellow-feathered broilers exhibit a longer growth cycle and a greater variety of strains and farming methods. Furthermore, each strain varies in body size, growth rate, and disease resistance ( Qi et al., 2017 ), making Salmonella prevention and control more challenging. However, there is a notable lack of systematic research on Salmonella in yellow-feathered broilers, highlighting the urgent need to enhance such research for improved monitoring within this sector. Hatcheries, as the upstream stage of the broiler industry chain, serve as a crucial intervention point for controlling Salmonella in yellow-feathered broiler production. Vertical transmission is a key route for the spread of Salmonella , such as Salmonella Pullorum and Salmonella Enteritidis, in poultry and can lead to their introduction into poultry flocks from infected hatcheries ( Shang et al., 2021 ; Volkova et al., 2011 ). However, researchers often focus on breeding farms, with a scarcity of systematic studies on hatcheries. Moreover, previous research on hatcheries has predominantly targeted specific aspects, such as chicks or eggs. Actually, hatchery production encompasses various stages, the use of production equipment, and the worker flow, all of which may facilitate Salmonella spread. Therefore, comprehensive monitoring of the source and transmission pathways of Salmonella contamination in the hatchery is essential. 
In China, antibiotic use remains the primary method for preventing and treating Salmonellosis in yellow-feathered broilers. However, the abuse of antimicrobials in the broiler industry has resulted in the emergence of antimicrobial resistance (AMR) bacteria, significantly diminishing the effectiveness of some drugs used in clinical treatments ( Talukder et al., 2021 ). Furthermore, it has been reported that AMR Salmonella strains found in human cases are closely linked to the extensive use of antimicrobial agents in livestock and poultry farming ( Belachew et al., 2021 ). Previous studies have described the spread of AMR Salmonella in the broiler farm, slaughterhouse, and its downstream retail markets ( Samia et al., 2021 ; Shang et al., 2021 ; Wang et al., 2020a ). However, research on AMR Salmonella isolated from yellow-feathered broiler hatcheries is limited. As the upstream stage of the broiler industry chain, hatcheries may serve as a key entry point for studying the spread of AMR Salmonella throughout the chain. Therefore, investigating the prevalence and AMR of Salmonella in yellow-feathered broiler hatcheries is crucial for identifying specific distribution patterns and developing effective strategies to control and prevent Salmonella infections in both humans and animals. In this study, we conducted longitudinal sampling across all production stages of the yellow-feathered broiler hatchery to identify the main entry points and transmission routes of Salmonella . We further assessed the AMR characteristics of the isolates. Additionally, we employed whole genome sequencing (WGS) technology to investigate the relationship between strains at different production stages. Our aim was to reveal the prevalence, AMR, and phylogenetic relationship of Salmonella in the yellow-feathered broiler hatchery, providing a reliable reference for precise Salmonella control within broiler industry chains and for the purification of yellow-feathered broiler provenance. 
2 Materials and methods 2.1 Sample collection From July 2020 to July 2021, a total of 1023 samples were collected from five production stages in a large-scale commercial yellow-feathered broiler hatchery (accommodating >50,000 yellow-feathered broiler embryos) in Guangdong Province, China. The stages include the laying stage, egg storage stage, incubating stage (incubation of eggs in the incubator from day 1 to day 17), hatching stage (hatching of eggs in the hatchery from day 17 to day 21), and post-hatch stage (eliminating weak chicks, vaccinating, and packing). The main sources of samples were dead embryos, sick chicks, environment, meconium, workers' hands/shoes, etc. ( Fig. 1 ). The collected samples were kept in a foam box with ice packs and transported to the laboratory within 2 h. 2.2 Salmonella isolation and identification Upon arrival at the laboratory, the swabs were transferred to 10 mL of BPW (Buffered Peptone Water) and incubated at 37 °C for 8–12 h for pre-enrichment. Then, 1 mL of the BPW culture was transferred to 9 mL of SC (Selenite Cystine Broth) and incubated at 37 °C for 14–16 h for selective enrichment. Finally, the bacterial suspension was streaked onto XLT-4 (Xylose Lysine Tergitol-4 Agar) and incubated at 37 °C for 24 h. Throughout the process, strict aseptic technique was followed, equipment was sterilized, and tools were changed for each sample. Genomic DNA was isolated with a Bacterial Genomic DNA kit (Omega, USA) according to the manufacturer's instructions. The obtained supernatant (template DNA) was stored at -20 °C until use. After DNA extraction, Salmonella was detected by PCR targeting the Salmonella -specific gene invA ( Lu et al., 2011 ). The sample isolation method was optimized based mainly on the Standard ISO-6579 (International Organization for Standardization, 2002) method ( Chen et al., 2020 ; Ren et al., 2016 ).
All Salmonella isolates were serotyped by slide agglutination with O and H antigen-specific sera according to the Kauffmann-White scheme or by National Food Safety Standard food microbiological examination ( Chen et al., 2020 ). 2.3 Antimicrobial susceptibility test Minimum inhibitory concentrations (MICs) were determined by the agar dilution method using Mueller-Hinton agar according to the standards of the Clinical and Laboratory Standards Institute ( Wang et al., 2020b ). A total of 13 antimicrobial agents were tested: ampicillin (AMP), cefotaxime (CTX), imipenem (IPM), streptomycin (STR), gentamicin (GEN), nalidixic acid (NAL), ciprofloxacin (CIP), florfenicol (FFC), chloramphenicol (CHL), sulfamethoxazole (SMZ), polymyxin B (PB), tetracycline (TET), and tigecycline (TGC). Escherichia coli ATCC 25922 and ATCC 35218 were used as quality control organisms for these MIC determinations. The breakpoints for antimicrobials followed the interpretive standards provided by CLSI (2022). In addition, an isolate was defined as 'multidrug-resistant (MDR)' if it displayed resistance to ≥ 3 different classes of antimicrobials ( Tenover, 2006 ). 2.4 Whole genome sequencing Representative S. Pullorum strains from different times, stages, and sources were selected and underwent WGS and bioinformatics analyses. The selection method was as follows: First, all isolates from sampling sources that yielded few isolates (meconium and workers' hands/soles) were retained. Second, for sampling sources with a larger number of isolates (dead embryos, sick chicks, and environment), highly similar clonal strains were eliminated by comparing the sampling time, production stage, and resistance profiles of the strains from each subdivided sampling source, to ensure the representativeness of the strains selected for WGS. The strains' raw sequencing data were assembled and evaluated using Trimmomatic v0.36, SPAdes v3.12.0, and QUAST tool 5.0.2.
( Bolger et al., 2014 , Bankevich et al., 2012 , Gurevich et al., 2013 ). Plasmid typing, antibiotic resistance genes, and virulence genes were screened using RGI (Resistance Gene Identifier) ( Alcock et al., 2020 ), Abricate 1.0.1, and the Plasmidfinder database ( Carattoli et al., 2014 ). Furthermore, MLST v2.11 was used for sequence typing (ST) ( Larsen et al., 2012 ). Based on the core SNP loci of the strains, Gubbins ( Croucher et al., 2015 ) and FastTree ( Price et al., 2009 ) were used to generate the maximum-likelihood phylogenetic tree. Finally, the phylogenetic tree was visualized and annotated using the iTOL ( Letunic & Bork, 2021 ) online tool. The reference strain LHTF01.1 was downloaded from NCBI ( https://www.ncbi.nlm.nih.gov/ ). 2.5 Statistical analysis Statistical analyses were performed in SPSS 26.0 (SPSS, Chicago, IL, USA); Fisher's exact test was used to assess differences in Salmonella isolation rates, and p < 0.05 indicated a significant difference. 3 Results 3.1 Prevalence of Salmonella The overall prevalence of Salmonella in the yellow-feathered broiler hatchery was 11.3 % (116/1023), and the prevalence differed among the production stages. Specifically, the prevalence during the laying, egg storage, incubating, hatching, and post-hatch stages was 2.1 % (6/288), 0.0 % (0/38), 2.0 % (1/49), 17.2 % (62/361), and 16.4 % (47/287), respectively ( Fig. 2 ). Notably, the prevalence of Salmonella increased markedly to 17.2 % during the hatching stage, in contrast to the first three production stages, and remained high (16.4 %) in the post-hatch stage. Meanwhile, the Salmonella prevalence of chickens/eggs in the laying, egg storage, and incubating stages was 0.0 %, whereas those in the hatching and post-hatch stages were 23.0 % and 19.3 %, respectively ( Fig. 2 ).
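The stage-wise comparison was run with Fisher's exact test in SPSS, but the same test can be sketched with the Python standard library alone. The counts below (62 positives of 361 hatching-stage samples vs 6 of 288 laying-stage samples) come from the prevalence figures reported above; the implementation itself is an illustrative reconstruction, not the authors' code.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums hypergeometric probabilities of all tables with the same margins
    whose probability does not exceed that of the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):  # P(X = x) under the hypergeometric null
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-12))

# Salmonella-positive vs negative samples, hatching stage vs laying stage
p = fisher_exact_2x2(62, 361 - 62, 6, 288 - 6)
print(f"p = {p:.2e}")  # far below 0.05
```

The excess of positives at the hatching stage is so large relative to the laying stage that the exact p-value is many orders of magnitude below the 0.05 threshold used in the paper.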
To identify additional Salmonella transmission paths, we also collected environmental source samples. During the laying stage, the most significant source of Salmonella contamination was the troughs (33.3 %), followed by the chicken feed (21.4 %) and ditch sewage (13.3 %). During the incubating stage, Salmonella was only isolated from the incubation trays (20.0 %). During the hatching stage, the fluff in the air (33.3 %) was the most contaminated with Salmonella , followed by the floor (20.0 %) and pad paper (16.7 %). During the post-hatch stage, pad paper (28.1 %) had the highest incidence of contamination, followed by chicks' baskets (11.1 %), tables for screening chicks (11.1 %), workers' soles (11.1 %), and workers' hands (10.0 %) ( Fig. 2 ). 3.2 Serotypes analysis of Salmonella Five different serotypes were identified among the isolates in our study: S. Pullorum (109/116, 93.97 %) was the predominant serotype, while a small number of other serotypes were present: S. Enteritidis (2/116, 1.72 %), S. Typhimurium (2/116, 1.72 %), S. Tennessee (2/116, 1.72 %), and S. Braenderup (1/116, 0.86 %). S. Pullorum was present at various production stages and in various source samples, while S. Enteritidis, S. Tennessee, and S. Braenderup were only found in environmental source samples during the laying and post-hatch stages. It is worth noting that the Salmonella isolated from sick chicks and dead embryos were all S. Pullorum ( Table 1 ). 3.3 Antimicrobial resistance analysis of Salmonella The AMR of the 116 Salmonella isolates is as follows ( Table 2 ): the highest resistance was observed against NAL (100.00 %), SMZ (100.00 %), CIP (95.69 %), and AMP (94.83 %), followed by FFC (34.48 %), PB (22.41 %), TET (22.41 %), TGC (12.93 %), STR (11.21 %), CTX (1.72 %), and CHL (1.72 %). All isolates were fully susceptible to IPM (0.0 % resistance) and GEN (0.0 %). Furthermore, we also compared the resistance rates of different source isolates.
Salmonella from different sources exhibited different resistance characteristics. Compared with other sample sources, the isolates from the environment were resistant to more drugs (11/13, 84.61 %), including CTX and CHL. More PB- and TET-resistant strains were identified in the cloacal swabs of sick chicks and in meconium ( Fig. 3 ). The MDR rate of Salmonella isolates in this study was 100 % ( Table 3 ). Isolates multi-resistant to AMP, NAL, CIP, and SMZ accounted for 35.34 %, making this the predominant resistance pattern. A total of five isolates showed resistance to six classes of antibiotics. Interestingly, four of these five Salmonella isolates showed resistance to PB, and two isolates showed resistance to TGC. Notably, TGC and PB are both last-line drugs used in human clinical settings. 3.4 Whole genome sequencing and bioinformatics analyses Phylogenetic tree analysis was performed based on SNPs of S. Pullorum isolated at various sampling dates, production stages, and sampling sources ( Fig. 4 ). The S. Pullorum isolates were distributed in 3 different lineages, named Lineage Ⅰ, Lineage Ⅱ, and Lineage Ⅲ here. There were 4 (9.30 %) and 3 (6.98 %) isolates in Lineage Ⅰ and Lineage Ⅱ, respectively. The dominant cluster in this study was Lineage Ⅲ, which had a total of 36 (83.72 %) isolates. Among them, Lineage Ⅰ was ST2151, Lineage Ⅱ and Lineage Ⅲ were ST92, and isolate FHC-103 was an unknown ST type. It is noteworthy that the dominant Lineage Ⅲ was isolated from five time periods, three production stages, and five sampling sources ( Fig. 4 ). We further compared the SNPs of S. Pullorum isolated from different sources of samples. Significant cross-contamination was found to exist at the hatching and post-hatch stages, with isolated samples originating from dead embryos, sick chicks, meconium, workers' hands/soles, and other environmental source samples ( Fig. 4 , branches marked in red).
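The MDR rule applied above (resistance to ≥ 3 antimicrobial classes, per Tenover, 2006) can be illustrated with a small sketch. The drug-to-class mapping below is an assumption added for illustration and should be checked against CLSI class definitions rather than taken from the paper.

```python
# Illustrative class assignments for the 13-drug panel in Section 2.3
# (assumed, not from the paper).
CLASS_OF = {
    "AMP": "penicillins", "CTX": "cephalosporins", "IPM": "carbapenems",
    "STR": "aminoglycosides", "GEN": "aminoglycosides",
    "NAL": "quinolones", "CIP": "quinolones",
    "FFC": "amphenicols", "CHL": "amphenicols",
    "SMZ": "sulfonamides", "PB": "polymyxins",
    "TET": "tetracyclines", "TGC": "glycylcyclines",
}

def is_mdr(resistant_to):
    """True if the resistance profile spans >= 3 antimicrobial classes."""
    return len({CLASS_OF[drug] for drug in resistant_to}) >= 3

# The predominant pattern reported (AMP, NAL, CIP, SMZ) spans 3 classes:
print(is_mdr(["AMP", "NAL", "CIP", "SMZ"]))  # True
print(is_mdr(["NAL", "CIP"]))                # False (one class)
```

Counting distinct classes, rather than distinct drugs, is what makes the NAL+CIP pair a single "hit" in the MDR tally.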
Specifically, many isolates from animal sources such as sick chicks, meconium, and dead embryos at different stages were closely related (SNP ≤5). At the same time, the isolates from environmental sources such as chicken feed and workers' hands/soles were closely related. Various AMR genes were identified in all isolates, consistent with their resistance phenotypes. Six plasmid replicons were detected among the isolates in this study, including ColRNAI, ColpVC, Col440I, IncFII(S), IncN, and IncX1. All isolates carried Col and IncFII(S) replicons, whereas 41 (95.3 %) carried IncX1 and 2 (4.6 %) carried IncN. 4 Discussion Salmonella contamination has historically posed a significant challenge in the Chinese broiler industry, particularly in the production of yellow-feathered broilers. The hatchery, as a vital component of the broiler production chain, plays a crucial role in preventing Salmonella contamination in this sector. In this study, we found the total prevalence of Salmonella in hatcheries was 11.3 %, which is higher than the prevalence reported in previous studies on broiler farms ( Zhao et al., 2020 ) and breeder farms ( Barua et al., 2013 ). In addition, the Salmonella prevalences in dead embryos and sick chicks were recorded at 23.0 % and 19.3 %, respectively, surpassing figures reported for hatcheries raising white-feathered broilers ( Ha et al., 2018 ; Oloso et al., 2019 ; Shang et al., 2021 ). These results indicate a severe level of Salmonella contamination in yellow-feathered broiler hatcheries, highlighting the urgent need for more in-depth and comprehensive research on the epidemiology of Salmonella in this context. In the present study, we found that Salmonella contamination occurred at multiple stages of the hatchery. First, during the laying stage, there were high isolation rates of Salmonella in chicken feed and troughs, indicating that these may have been the initial source of contamination.
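The SNP-distance criterion used above (pairs within ≤ 5 SNPs treated as closely related) reduces to counting mismatched sites between aligned core-genome sequences. The sketch below illustrates this with toy placeholder sequences, not the study's genomes.

```python
from itertools import combinations

def snp_distance(seq_a: str, seq_b: str) -> int:
    """Number of differing sites between two equal-length aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    return sum(x != y for x, y in zip(seq_a, seq_b))

# Toy aligned "core genomes" standing in for real isolates (hypothetical data)
isolates = {
    "chick_1": "ACGTACGTAC",
    "embryo_1": "ACGTACGTAT",   # 1 SNP from chick_1
    "env_feed": "TGCAGCATCA",   # many SNPs from both
}

THRESHOLD = 5  # the paper treats pairs with <= 5 SNPs as closely related
for a, b in combinations(isolates, 2):
    d = snp_distance(isolates[a], isolates[b])
    related = "closely related" if d <= THRESHOLD else "distinct"
    print(f"{a} vs {b}: {d} SNPs -> {related}")
```

In practice such distances are computed over tens of thousands of core SNP loci (here produced by Gubbins), but the pairwise comparison itself is exactly this mismatch count.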
Salmonella infects hens first, subsequently spreading vertically to chicks or eggs. The risk of further spread to downstream industries cannot be ignored. At the same time, Salmonella has been detected in various environmental source samples, including workers' hands and soles, at different stages of production. To prevent the spread of Salmonella , it is crucial to strengthen daily management practices related to these sources. Previous studies have shown a higher prevalence of Salmonella during the laying stage compared to the hatching stage ( Fei et al., 2017 ). However, our findings indicate that Salmonella prevalence was relatively low during the laying, egg storage, and incubation stages, while a significantly higher level was observed during the hatching stage. Notably, despite the elimination of sick chicks after hatching, Salmonella was still detected in the healthy chicks at the post-hatch stage ( Fig. 2 , Meconium). Additionally, although fumigation was performed daily, Salmonella was still detected in the environment of the hatching stage (such as hatching trays, fluff in the air, and the floor). These results suggest that Salmonella may spread to the environment when chicks hatch and then be transmitted horizontally to other chicks via fluff in the air or other inconspicuous environmental media. Further investigation into the serotypes of the isolates revealed that S. Pullorum was present at all stages of hatchery production and was the sole serotype associated with the mortality of chicken embryos and the illness of chicks. This finding indicates that S. Pullorum is the predominant Salmonella serotype in yellow-feathered broiler hatcheries. Similar conclusions have been reported in previous studies ( Wang et al., 2020a ; Xu et al., 2020 ). Conversely, S. Enteritidis was predominant in the white-feathered broiler hatcheries ( Shang et al., 2021 ; Zamil et al., 2021 ).
To effectively prevent Salmonella in yellow-feathered broiler hatcheries, greater attention should be given to the predominant serotype, S. Pullorum, and its prevalence in this sector warrants more in-depth study. AMR in Salmonella of poultry origin has emerged largely due to the widespread use of antimicrobials ( McDermott et al., 2018 ). In this study, all isolates were MDR strains, exhibiting high resistance rates to AMP, NAL, CIP, and SMZ. Notably, the AMR rates for these four drugs were higher than those reported between 1962 and 2019 ( Sun et al., 2021 ). In most hatcheries across China, day-old chickens receive a single dose of ampicillin to mitigate the risk of Salmonellosis before transfer to farms. Additionally, the frequent use of antimicrobials in upstream egg-laying farms exacerbates AMR acquisition in Salmonella during the hatching stage. The study investigated the AMR of isolates from different sources. The strains isolated from the environment showed resistance to most of the antimicrobials tested, which could be attributed to the adaptive evolution of Salmonella under environmental pressure ( Müller et al., 2022 ). Furthermore, the presence of isolates with identical MDR profiles across various production stages and sources suggests potential horizontal spread during the production process of yellow-feathered broiler hatcheries. Therefore, monitoring the transmission pathways of AMR Salmonella throughout production is essential to prevent further spread to downstream industries. From a public health perspective, it is also urgent to monitor the resistance profile of Salmonella in the hatchery. In the present study, we identified certain isolates resistant to TGC and PB, which are considered "last defense" drugs (PB, TGC, and IPM) for human clinical treatment. It should be noted that these antimicrobials are banned in poultry and livestock production ( Yang et al., 2022 ).
However, other polymyxin- and tetracycline-class antimicrobials are still commonly used in clinical practice to prevent and treat bacterial infections in poultry and livestock production. The emergence of "last defense" resistant isolates may result from either horizontal gene transfer from other strains or synergistic resistance to similar antimicrobials. Therefore, enhancing surveillance of Salmonella resistance in yellow-feathered broiler hatcheries is essential, alongside raising public health awareness to mitigate potential threats from cross-contamination and antibiotic misuse. To further explore the characteristics of Salmonella in the hatchery, we conducted WGS to analyze its evolutionary relationships during production. Our findings indicate that Salmonella isolates from chicks and eggs at different stages were closely related, and those from the environment also showed significant similarity. This demonstrates that cross-contamination occurs among chicks and eggs in the hatchery, as well as between these and environmental factors. The hatching stage appears critical for cross-contamination. Environmental isolates, such as those from fluff in the air, troughs, and pad paper during the laying, hatching, and post-hatch stages, exhibited close genetic relationships (SNP ≤10) and carried similar resistance genes and plasmid replicons. Salmonella in the environment may originate from dead embryos and sick chicks, spreading into the environment during the hatching stage and subsequently via workers' hands/soles to the post-hatch stage. Therefore, implementing robust monitoring and control measures is essential to mitigate the horizontal transmission of Salmonella . In addition, the isolates carried some AMR plasmid replicons: IncN and IncX1. These isolates could horizontally transfer AMR plasmid replicons to other recipient bacteria through conjugation, making Salmonella with T4SS a potential AMR gene reservoir.
At the same time, all serotypes contained virulence genes encoding nonfimbrial adherence, survival in macrophages, enterotoxin, invasion, magnesium uptake, and secretion systems ( Zuo et al., 2020 ). The presence of these genes heightens the risk of Salmonella spread and infection in the yellow-feathered broiler industry's downstream processes. 5 Conclusions Our findings suggest that in addition to vertical transmission, horizontal transmission is also an important route of Salmonella transmission in hatcheries, as demonstrated by phenotype comparisons and WGS analyses. Salmonella transmission occurred through various media during daily production, leading to potential cross-contamination. Additionally, we systematically revealed the distribution of resistance genes and plasmid replicons in MDR Salmonella in yellow-feathered broiler hatcheries. These findings provide comprehensive insights into Salmonella in yellow-feathered broiler hatcheries. Ethical statement Samples were collected and processed in accordance with Chinese regulations on poultry inspection. Prior to the sampling of animal specimens, including chicken cloacal swabs, sick chicks and dead embryos, the consent of the hatchery proprietor had been obtained. The study was approved by the Animal Ethics and Morality Committee of the College of Veterinary Medicine, South China Agricultural University. CRediT authorship contribution statement Canji Wu: Writing – original draft, Software, Data curation, Conceptualization. Yuhui Deng: Software, Data curation. Zeluan Chen: Writing – original draft, Data curation. Junhao Peng: Investigation, Data curation. Peizhi Wu: Investigation, Data curation. Jinger Chen: Investigation, Data curation. Pengju Chen: Project administration. Ming Liao: Supervision. Chenggang Xu: Project administration. Jianmin Zhang: Writing – review & editing, Supervision. Declaration of competing interest The authors declare that they have no competing interests.
Acknowledgments This work was supported by the National Key Research and Development Program of China [grant number 2023YFD1801000]; the Rural Science and Technology Specialist Programme [grant number 2023E04J0092]; the "14th Five-Year" Guangdong Province agricultural science and technology innovation project [grant number 2022SDZG02]; the Double first-class discipline promotion project [grant number 2023B10564003]; the College Students' Innovative Entrepreneurial Training Plan Program [grant numbers X202210564151, S202210564173S]; the Walmart Foundation [Project # 61626817 & SA1703162], supported by the Walmart Food Safety Collaboration Center; and the National Broiler Industry Technology System Project [grant number cARS-41-G16]. The funders had no role in the study design, data collection and interpretation, or the decision to submit the work for publication.
|
[
"ALCOCK",
"BAI",
"BANKEVICH",
"BARUA",
"BELACHEW",
"BOLGER",
"CAFFREY",
"CARATTOLI",
"CHEN",
"CROUCHER",
"FEI",
"GUREVICH",
"HA",
"LARSEN",
"LETUNIC",
"LU",
"MCDERMOTT",
"MULLER",
"OLOSO",
"PRICE",
"QI",
"REN",
"SAMIA",
"SHANG",
"SUN",
"TALUKDER",
"TENOVER",
"VOLKOVA",
"WANG",
"WANG",
"XU",
"YANG",
"ZAMIL",
"ZHAO",
"ZUO"
] |
86d18e7202b345a79d7ad1a215e50759_Dimensional differences in mandibular antegonial notches in temporomandibular joint ankylosis_10.1016_S2212-4268(11)60004-3.xml
|
Dimensional differences in mandibular antegonial notches in temporomandibular joint ankylosis
|
[
"Singh, Stuti",
"Kumar, Sumit",
"Pandey, Rahul",
"Passi, Deepak",
"Mehrotra, Divya",
"Mohammad, Shadab"
] |
Background
Deep antegonial notch (AN) is seen in congenital and acquired abnormalities of the mandible such as condylar hypoplasia, temporomandibular joint ankylosis (TMA), muscular hypoactivity, and branchial arch syndrome. This study aimed to evaluate the depth of the AN in TMA and its relation to the duration of ankylosis.
Materials and Methods
The study comprised 20 cases of unilateral or bilateral TMA, aged 8–25 years. The depth of the AN on the ipsilateral and contralateral sides was compared on orthopantomograms of these unilateral and bilateral cases in relation to the total duration of ankylosis.
Results
Seven cases had right-sided ankylosis, six had left-sided ankylosis, and seven had bilateral TMA; the duration of ankylosis ranged from 6 months to 12 years. Spearman's rank correlation indicated a strong correlation between the duration of ankylosis and the AN on the ipsilateral/contralateral sides. The Wilcoxon signed-rank test showed the results to be statistically significant.
Conclusion
A deep, accentuated AN is one of the clinical features of TMA and is directly related to the morphology and growth pattern of the mandible.
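The Spearman rank correlation reported in the Results can be computed with the Python standard library alone, as sketched below; the duration/depth values here are hypothetical placeholders, not the study's measurements.

```python
def rank(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank-transformed data."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical example: ankylosis duration (years) vs notch depth (mm)
duration = [0.5, 2, 4, 6, 8, 12]
depth = [1.1, 1.8, 2.0, 2.9, 3.5, 4.2]
print(f"rho = {spearman_rho(duration, depth):.2f}")  # perfectly monotonic -> 1.00
```

Because Spearman's rho depends only on ranks, any strictly monotonic relationship between duration and notch depth yields rho = 1 regardless of the raw magnitudes.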
| null |
[] |
6272ea2a09414391900479568c0fec60_Effective sample size calculation How many patients will I need to include in my study_10.1016_j.mefs.2011.10.001.xml
|
Effective sample size calculation: How many patients will I need to include in my study?
|
[
"Youssef, Mohamed A.F.M."
] | null |
Properly designed clinical trials are the heart of evidence-based medicine (EBM). A well-defined research question and an adequate sample size with appropriate statistical power, relative to the aim of the study, are the main pillars of a properly designed clinical trial that yields accurate results. Although it is unethical to expose patients to the risk of an intervention, and to waste resources and time, on a study with low power or an inadequate sample size, many published clinical trials have ignored these pillars because sample size is difficult to plan prospectively. Five ingredients are needed to calculate a proper sample size: the effect size, the variance, the preset significance level, the statistical power, and whether a one- or two-tailed statistical analysis is planned. The effect size It is the smallest difference between the compared groups that would be considered clinically important. For example, suppose we design a study to compare the standard long GnRH agonist protocol, with a clinical pregnancy rate (CPR) of 30% in normo-responder women, with a GnRH antagonist protocol of unknown but potentially higher CPR. It would probably be clinically unimportant if the GnRH antagonist protocol led to only a 31% clinical pregnancy rate, but suppose we believe it would be a clinically important improvement if it led to a 40% CPR. We would therefore choose an effect size of 10% (0.10). The results of pilot studies, historical data, or a literature review can also guide us to a reasonable effect size. As the effect size decreases, the sample size increases Variance Standard deviation (SD) quantifies variability in the measurements made within each comparison group. The estimated variance can be determined from previous data collected from a similar study population, a pilot study, or a review of the literature.
If preliminary data are not available, this parameter may have to be estimated on the basis of subjective experience. As the variance increases, the sample size increases Statistical power It is the number, percentage, or fraction that indicates the probability that a study will detect a statistically significant effect when a true effect of the assumed size exists (1) . For example, suppose a study is conducted to explore whether clomiphene citrate plus metformin is better than metformin alone in PCOS women with regard to clinical pregnancy rate. Clearly the study would be interesting if a statistically significant difference were found between the two treatments. When the study concludes that "no statistically significant difference has been found between both treatment groups", you should not necessarily conclude that the treatment was ineffective. It is possible that the study missed a real difference because you used a small sample. In this case you made a Type II error – obtaining a "not significant" result when in fact there is a difference. You should then ask how much power the study had to find various differences if they existed. For example, a power of 80% (or 0.8), which is commonly used in randomized controlled trials (RCTs), means that a study, if conducted repeatedly over time, would be expected to produce a statistically significant ( P < 0.05) result 8 times out of 10 when the assumed true difference exists. As power is increased, the sample size increases Level of significance It is the maximum P value for which a difference is to be considered statistically significant. The commonly used significance level is 0.05. If a result is statistically significant, there are two possible explanations: The populations really are different, so your conclusion is correct. The difference may be large enough to be scientifically interesting. The populations are similar, so there really is no difference. By chance, you obtained larger values in one group and smaller values in the other.
Finding a statistically significant result when the populations are identical is called a Type I error (i.e., a false-positive result). If you define statistically significant to mean " P < 0.05", then you will make a Type I error in 5% of experiments in which there really is no difference. As the P value is decreased, the sample size increases. One- or two-tailed statistical analysis: a one-tailed analysis is used when the difference between the comparison groups can go in only one direction; for instance, if we believe that the GnRH antagonist protocol in the previous example can lead only to a higher, not a lower, clinical pregnancy rate. A two-tailed analysis should be used if we suppose that the protocol may lead to either a higher or a lower clinical pregnancy rate than the standard long GnRH agonist protocol. Finally, the sample size should be calculated before the study begins; this allows proper and early modification of the study design and leads to more robust results, which strengthen the study's value and support its publication in high-impact journals. There are many sample size equations, which depend on the type of study (comparative or descriptive), the data distribution (normally distributed or not), and whether a one-sample or two-sample scenario applies; however, many researchers find them difficult to apply. Many user-friendly, freely available sample size calculators can be used by researchers, but these do not replace the role of a statistician in the design of the study. Examples of sample size calculators: • http://www.psycho.uni-duesseldorf.de/aap/projects/gpower/ ; • http://www.statisticssolutions.com/products-services/login/standard-membership/sample-sizepower-analysis-calculator-with-write-up • http://www.raosoft.com/samplesize.html • http://www.macorr.com/ss_calculator.htm • http://www.dssresearch.com/toolkit/sscalc/size_a1.asp
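The five ingredients above combine in a standard formula. For two independent proportions, as in the GnRH example (30% vs. a hoped-for 40% CPR, two-tailed α = 0.05, 80% power), a minimal Python sketch of the normal-approximation calculation might look like this; it is a rough illustration of the arithmetic, not a substitute for a statistician:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80, two_tailed=True):
    """Per-group sample size for comparing two proportions
    (normal approximation)."""
    # critical values for the chosen significance level and power
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2 if two_tailed else 1 - alpha)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2          # pooled proportion
    effect = abs(p1 - p2)          # effect size
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / effect ** 2)

# GnRH example from the text: 30% vs. 40% CPR
n = sample_size_two_proportions(0.30, 0.40)
print(n)  # 356 patients per group
```

Rerunning the function illustrates the relationships stated in the text: halving the effect size (30% vs. 35%) or raising the power to 90% pushes the per-group requirement up sharply.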
|
[
"BROWNER"
] |
ace0e31e90b442aa981d232ae58b7267_Table of Contents_10.1016_S2666-6677(20)30074-X.xml
|
Table of Contents
|
[] | null | null |
[] |
eff540193c7c473dbb637b27a87c03c4_Management of Mild Degenerative Cervical Myelopathy and Asymptomatic Spinal Cord Compression An Inte_10.1016_j.bas.2023.102071.xml
|
Management of Mild Degenerative Cervical Myelopathy and Asymptomatic Spinal Cord Compression: An International Survey
|
[
"Brannigan, Jamie",
"Davies, Benjamin",
"Mowforth, Oliver",
"Yurac, Ratko",
"Kumar, Vishal",
"Dejaeghar, Joost",
"Zanamorano, Juan",
"Murphy, Rory",
"Tripathi, Manjul",
"Anderson, David",
"Harrop, James",
"Molliqaj, Granit",
"Wynne-Jones, Guy",
"Arbitin, Jose",
"Kato, So",
"Ito, Manabu",
"Molliqaj, Granit",
"Romelean, Ronie",
"Dea, Nicolas",
"Graves, Daniel",
"Tessitore, Enrico",
"Nouri, Aria"
] | null |
Oral e-Poster Presentations - Booth 2: Spine A (Degenerative Disease), September 25, 2023, 1:00 PM - 2:30 PM Background: Currently there is limited evidence and guidance on the management of mild degenerative cervical myelopathy (DCM) and asymptomatic spinal cord compression (SCC). Anecdotal evidence suggests variance in clinical practice. The objectives of this study were to assess current practice in the assessment and management of mild DCM and asymptomatic SCC and to quantify the variability in clinical practice. Methods: Neurosurgeons, spinal orthopaedic surgeons, and some additional health professionals completed a web-based survey distributed by email to members of AOSpine and the CSRS North American Society. Questions captured experience with DCM, frequency of DCM patient encounters, and standard of practice in the assessment of DCM. Further questions assessed the definition and management of mild DCM, and the management of asymptomatic spinal cord compression. Results: A total of 699 respondents, mostly surgeons, completed the survey. Every world region was represented in the responses. Half (50.14%, n=359) had more than 10 years of experience caring for patients with DCM. A process of standardised follow-up for non-operative patients was reported by 488 respondents (69.52%). At follow-up for mild DCM, a heterogeneous mix of investigations was reported, most often at 6 months (32.92%, n=158). There was some conflict regarding which clinical features would cause a surgeon to counsel a patient towards surgery. Practice for asymptomatic SCC aligned closely with that for mild DCM. Finally, some contradictory definitions of mild DCM were provided in the form of free text. Conclusions: Professionals typically offer outpatient follow-up for patients with mild DCM and/or asymptomatic SCC. However, what this constitutes varies widely. Further research is needed to define best practice.
|
[] |
bf0b8a5f548446ab9ef600c1f5624931_Apoptosis induction capability of silver nanoparticles capped with Acorus calamus L and Dalbergia si_10.1016_j.heliyon.2024.e24400.xml
|
Apoptosis induction capability of silver nanoparticles capped with Acorus calamus L. and Dalbergia sissoo Roxb. Ex DC. against lung carcinoma cells
|
[
"Thakkar, Anjali B.",
"Subramanian, R.B.",
"Thakkar, Vasudev R.",
"Bhatt, Sandip V.",
"Chaki, Sunil",
"Vaidya, Yati H.",
"Patel, Vikas",
"Thakor, Parth"
] |
Silver nanoparticles (AgNPs) were prepared using a one-step reduction of silver nitrate (AgNO3) with sodium borohydride (NaBH4) in the presence of polyvinylpyrrolidone (PVP) as a capping agent. Plant extracts from D. sissoo (DS) and A. calamus L. (AC) leaves were incorporated during the synthesis process. The crystalline nature of the AgNPs was confirmed through X-ray diffraction (XRD), confirming the face-centered cubic structure, with a lattice constant of 4.08 Å and a crystallite size of 18 nm. Field Emission Gun Transmission Electron Microscopy (FEG-TEM) revealed spherical AgNPs (10–20 nm) with evident PVP adsorption, leading to size changes and agglomeration. UV–Vis spectra showed a surface plasmon resonance (SPR) band at 417 nm for AgNPs and a redshift to 420 nm for PVP-coated AgNPs, indicating successful synthesis. Fourier Transform Infrared Spectroscopy (FTIR) identified functional groups, and drug-loaded samples exhibited characteristic peaks, confirming effective drug loading. The anti-cancer potential of the synthesized NPs was assessed by MTT assay in human lung adenocarcinoma (A549) and normal lung (WI-38) cells. IC50 values for the three NPs (AgPVP NPs, DS@AgPVP NPs, and AC@AgPVP NPs) were 41.60 ± 2.35, 14.25 ± 1.85, and 21.75 ± 0.498 μg/ml on A549 cells, and 420.69 ± 2.87, 408.20 ± 3.41, and 391.80 ± 1.55 μg/ml on WI-38 cells, respectively. Furthermore, the NPs generated Reactive Oxygen Species (ROS) and altered the mitochondrial membrane potential (MMP). Differential staining techniques were used to investigate the apoptosis-inducing properties of the three synthesized NPs. The colony formation assay indicated that nanoparticle treatment prevented cancer cell invasion. Finally, Real-Time PCR (RT-PCR) analysis showed the expression pattern of several apoptosis-related genes (Caspase 3, 9, and 8).
|
1 Introduction The second-most lethal kind of cancer in the world is lung cancer [ 1 ]. For 2022, 130,180 fatalities and 236,740 new cases were projected [ 2 ]. Small-cell lung cancer, which accounts for 20 % of diagnosed cases, and non-small-cell lung cancer, which accounts for 80 %, are the two forms of lung cancer. There are several therapies for lung cancer, including radiation, chemotherapy, and surgical removal, but sadly they all have many adverse effects. Drug resistance in cancer cells has also become a significant factor recently. To trigger apoptosis in lung cancer with fewer side effects and at a reasonable price, a new medication with biocompatible treatment methods is urgently needed [ 3 , 4 ]. With minimal expenditure and side effects, nanotechnology provides tools and resources to diagnose and treat a range of malignancies [ 5 , 6 ]. Production of green nanoparticles is affordable, safe, non-toxic, and ecologically responsible [ 7 ]. Silver (Ag) is the most commercially successful nano-compound, according to the Woodrow Wilson database on nano-products, owing to its physicochemical properties, antibacterial activity and treatments, biomolecular recognition, biolabeling, catalysis, and microelectronics [ 8 ]. Various consumer goods, including electronics, cosmetics, household appliances, textiles, food processing, and medical supplies, use silver nanoparticles (AgNPs). Numerous plant extracts are thought to be effective natural reducing agents with strong antioxidant activity [ 9 , 10 ]. Compared with other metal nanoparticles, AgNPs are less toxic to humans [ 11 ]. With the development of implantable biomaterials, molecular imaging, wound healing, and drug administration, to name just a few of the growing number of biomedical applications, green production of nanoparticles with a restricted range of toxicity has become a hot study topic [ 12–15 ]. 
In particular, the use of silver nanoparticles for cancer detection and treatment has increased, not only as ideal platforms for targeted therapeutic administration or as early cancer screening probes but also as a potential therapeutic molecule on its own [ 16–18 ]. Silver nanoparticles showed potential cytotoxicity when tested on a variety of cancer cell lines (A549, MCF-7, HT29, HeLa), as well as Dalton's lymphoma ascites tumor [ 19–23 ]. The leaves of D. sissoo and A. calamus L. have been shown to contain a variety of secondary metabolites, including phenols, tannins, alkaloids, anthraquinones, saponins, and flavonoids, as determined by Gas Chromatography-High-Resolution Mass Spectrometry (GC-HRMS) [ 24,25 ]. They also exhibit strong cytotoxic action against A549 cells. Therefore, to create a new treatment strategy, we chose to synthesize DS@AgPVP NPs and AC@AgPVP NPs. Using multiple spectroscopic techniques and microscopic inspection, we have developed stable nanoparticles with a limited size distribution for the current investigation. We then investigated the anticancer properties of our green-synthesized NPs on A549 cells in vitro . 2 Experimental section 2.1 Chemicals and materials Silver nitrate (AgNO 3 ), sodium borohydride (NaBH 4 ), and PVP40 (C 6 H 9 NO) n were used for the synthesis of AgNPs. Silver nitrate extra pure was purchased from SRL (94118, India), and Sodium borohydride was purchased from Merck (106371, India) and utilized without any further purification. Polyvinylpyrrolidone (PVP) with an average F.M. 40,000 ≥ 99 %, High Purity (K30), was purchased from Fisher Scientific (Amresco, 0507-500G), USA. Milli-Q water was used throughout the experimentation. 2.2 Preparation of plant extracts Plant extracts were prepared from D. sissoo (DS) and A. calamus L. (AC) leaves, as mentioned previously [ 24 , 25 ]. The extracts were stored at 4 °C until further use. 
2.3 Synthesis of colloidal PVP-coated silver nanoparticles Silver nitrate (AgNO 3 ) was reduced with sodium borohydride (NaBH 4 ) in one step to produce the colloidal silver solution. Silver nanoparticles (AgNPs) were produced by adding a 2 mM reducing agent (NaBH 4 ) drop by drop to a 1 mM silver salt solution (AgNO 3 ) at room temperature. The mixture was stirred regularly for 30 min, during which the solution quickly turned yellow. To create the PVP-coated silver nanoparticles, 0.1 % PVP was added to the AgNO 3 solution as a capping agent. NaBH 4 was then added drop by drop, and the mixture was vigorously stirred for 30 min. Plant extracts (DS and AC) were then added drop by drop in separate tubes and stirred for 40–45 min at 500 rpm. After 45 min, the extract-coated PVP-Ag nanoparticles were ready. The NPs were characterized by TEM, UV–Vis spectroscopy, particle size analysis, zeta potential analysis, and FTIR. 2.4 Characterization of synthesized PVP-coated silver nanoparticles The crystalline phases of the samples were analyzed by powder X-ray diffraction using a Rigaku Ultima diffractometer equipped with a Cu(Kα) radiation source (λ = 1.5406 Å). The morphology of AgPVP NPs was characterized using a transmission electron microscope (TEM) (Tecnai 20, Philips, Holland). Particle size and zeta potential were determined using a particle size analyzer with a zeta potential measuring system (HORIBA, SZ100) at a scattering angle of 173° and a temperature of 25 °C. Fourier transform infrared spectra (Spectrum GX, PerkinElmer, U.S.A.) at a resolution of 0.15 cm −1 were used to estimate the structural features of the nanoparticles in the 400–4000 cm −1 range using KBr pellets. 
2.5 In-vitro anticancer studies 2.5.1 Cell lines Human lung adenocarcinoma (A549) and normal lung (WI-38) cells were obtained from the repository of the National Centre for Cell Sciences (NCCS), Pune, India, and maintained as previously described [ 26 ]. 2.5.2 Cell viability assay To study the cytotoxic capability of all NPs (AgPVP NPs, DS@AgPVP NPs, and AC@AgPVP NPs) on A549 and WI-38 cells, an MTT test was performed [ 27 ]. 2.5.3 Detection of morphological alteration in A549 cells An inverted fluorescent phase-contrast microscope was used to examine the morphological changes in NP-treated A549 cells. 5 X 10 4 cells were incubated overnight in a 96-well plate at 37 °C with 5 % CO 2 before being treated for 24 h with the IC 50 concentrations of NPs. Morphological changes were observed and photographed using a 40× inverted fluorescence phase-contrast microscope (Carl Zeiss, Axio Observer A1). 2.5.4 Estimation of intracellular reactive oxygen species (ROS) in A549 cells ROS production was analyzed using the fluorescent marker 2,7-dichlorodihydrofluorescein diacetate (DCFH-DA) probe, as previously described [ 28 ]. 2.5.5 Estimation of mitochondrial membrane potential in A549 cells The effect of NPs on the mitochondrial membrane potential (MMP) of 3 X 10 4 A549 cells/well was determined using JC-10 dye (Sigma Aldrich, MAK159). After the 24-h incubation period, cells were stained with JC-10 dye for 30–35 min. The cells were then washed with 1× PBS. The fluorescence intensity of control and treated cells was analyzed using a microplate reader (Molecular Devices, USA, SpectramaxM2e) at 485 nm excitation and 530 nm emission, respectively. 
2.5.6 Nuclear assessment in A549 cells by 4-6-diamidino-2-phenylindole staining Nuclear morphology was identified by 4-6-diamidino-2-phenylindole (DAPI) staining (HiMedia, India) in (1.5 X 10 4 ) A549 cells with the capacity to create fluorescence in DS-DNA after treatment with IC 50 doses of NPs respectively. After 24 h, cells were washed 2–3 times with 1× PBS, stained with 50 μl of DAPI dye, and incubated in a CO 2 incubator for 30 min. Following incubation, the cells were rinsed with 1× PBS to remove excess dye. A fluorescence-inverted phase-contrast microscope with a DAPI filter was used to investigate the cells [ 29 ]. 2.5.7 Live/dead cell differentiation in A549 cells by double fluorescence staining with Acridine Orange/Ethidium Bromide (AO/EB) A549 cells (2 X 10 5 ) were seeded in 6-well plates overnight at 37 °C in a CO 2 incubator. After 24 h, they were treated with the IC 50 concentration of NPs and re-incubated for 24 h. Cells were fixed with ice-cold methanol for 15–20 min at room temperature, then washed 2–3 times with 1× PBS. Cells were stained with 10 μg/ml AO/EB in each well and then incubated for 15 min at 37 °C in a CO 2 incubator. After incubation, cells were washed with 1× PBS and visualized by a fluorescence inverted microscope (40× magnification, Axio Observer A1, Carl Zeiss) under Fluorescein Isothiocyanate (FITC) and Tetramethylrhodamine Isothiocyanate (TRITC) filters [ 29 ]. 2.5.8 Estimation of apoptosis in A549 cells by Giemsa staining 2 X 10 4 cells were seeded in a 6-well culture plate for 24 h. After 24 h, cells were treated with their IC 50 concentrations of NPs respectively. After 24 h of incubation, the culture media was removed, and the cells were washed with 1× PBS. For the fixation of the cells, ice-cold methanol was used. Giemsa staining was used to determine the morphology of proliferative and apoptotic cells. 
2.5.9 Assessment of clonogenic assay in A549 cells To determine the anti-invasion effects on in vitro cell proliferation, the colony formation assay was performed [ 30 , 31 ]. 2.5.10 RT-PCR studies in A549 cells Around 1 X 10 5 lung cancer cells were treated with the IC 50 concentrations of NPs in 6-well plates for 24 h at 37 °C in a CO 2 incubator, followed by total RNA extraction using the TRIzol reagent method. 2 μg of extracted RNA was then used for cDNA synthesis with the Thermo Fisher cDNA synthesis kit according to the protocol. The primer sequences (0.5 μM each of forward and reverse primers) used in this gene expression study are as described previously [ 24 ]. RT-PCR was then performed using the Biorad SYBR Green qPCR Kit following the manufacturer's instructions. 2.6 Brine shrimp lethality assay Brine shrimp ( Artemia salina ) eggs were hatched in artificial seawater containing 38 g/L of table salt. A lamp was placed above the open side of the tank to lure freshly hatched shrimp toward the tank wall. After 24 h of development into nauplii, the shrimp were ready for the test. The nanoparticles were subjected to the standard brine shrimp lethality bioassay [ 32 ]. To obtain concentrations ranging from 10 to 100 μg/ml, each nanoparticle preparation (1 mg) was dissolved in 1 ml of 1 M NaOH (pH 8). A Petri plate containing 1 M NaOH (pH 8) in 5 ml of salt water was used as the negative control. As a positive control, potassium dichromate was dissolved in 1 M NaOH (pH 8) and serially diluted to a concentration of 5 mg/ml. A suspension of larvae (0.1 ml, containing about 10 larvae) was added to each Petri plate and incubated for 24 h. Each plate was then examined, and the number of dead larvae was counted after 24 h. The death percentage was calculated with the following equation (Equation No. 1): Percent death = (total shrimp − alive shrimp) / (total shrimp) × 100. 2.7 Statistical analysis All reported data are expressed as the mean ± SEM of three individual experiments performed in triplicate. Statistical differences among treatment groups were determined using a one-way ANOVA followed by Tukey's post-hoc test in GraphPad Prism 9.4.1. Significance: *p < 0.05, **p < 0.01, ***p < 0.001 and ****p < 0.0001. 3 Results and discussion 3.1 Characterization of nanoparticles 3.1.1 XRD analysis The powder X-ray diffraction (PXRD) pattern of the AgNPs is presented in Fig. 1 . The diffraction pattern was analyzed and indexed using Powder-X software. The XRD pattern of the AgNPs exhibited distinct and prominent peaks at 2θ angles of 38.12°, 44.28°, 64.39°, 77.49°, and 81.56°, corresponding to the crystal planes (111), (200), (220), (311), and (222), respectively. These peaks matched the face-centered cubic crystal structure, confirming the crystalline nature of the sample, in agreement with the reference data from the JCPDS card (04–0783). The lattice constant (a) was determined from the formula (Equation No. 2): a = d√(h² + k² + l²), where d represents the interplanar spacing. The calculated lattice parameter for the sample was 4.08 Å. The crystallite size (D) of the samples was calculated using the Debye-Scherrer formula, as reported in Ref. [ 33 ]. The calculated crystallite size was 18 nm. 3.1.2 Field emission Gun transmission electron microscope (FEG-TEM) analysis The shape and size of the particles were determined by FEG-TEM. Dilutions of the Ag nanoparticle solution were placed on a carbon-coated copper grid and allowed to dry naturally before FEG-TEM images were captured. The FEG-TEM micrographs indicated that the produced AgNPs were spherical and showed little agglomeration ( Fig. 2 A).
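The XRD calculations above (Equation No. 2 for the lattice constant and the Debye-Scherrer formula for crystallite size) can be sketched numerically. The snippet below is a rough illustration: it assumes the standard Cu Kα wavelength of 1.5406 Å, and the FWHM value is an illustrative assumption (not reported in the text) chosen to show how an ~18 nm crystallite size arises:

```python
from math import sin, cos, sqrt, radians

WAVELENGTH = 1.5406  # Cu K-alpha in angstroms (assumed standard value)

def lattice_constant(two_theta_deg, h, k, l, wavelength=WAVELENGTH):
    """Bragg's law gives d; Equation No. 2 gives a = d * sqrt(h^2 + k^2 + l^2)."""
    theta = radians(two_theta_deg / 2)
    d = wavelength / (2 * sin(theta))  # interplanar spacing, angstroms
    return d * sqrt(h**2 + k**2 + l**2)

def scherrer_size(two_theta_deg, fwhm_rad, shape_factor=0.9, wavelength=WAVELENGTH):
    """Debye-Scherrer: D = K * lambda / (beta * cos(theta)), in angstroms."""
    theta = radians(two_theta_deg / 2)
    return shape_factor * wavelength / (fwhm_rad * cos(theta))

# (111) peak reported at 2-theta = 38.12 degrees
a = lattice_constant(38.12, 1, 1, 1)
print(round(a, 2))  # approx. 4.09 angstroms (the paper reports 4.08)
# an illustrative FWHM of ~0.00815 rad reproduces an ~18 nm crystallite
print(round(scherrer_size(38.12, 0.00815) / 10, 1))  # size in nm
```

The small gap between 4.09 and the reported 4.08 Å reflects rounding in the peak position; in practice the constant is refined over all indexed peaks, not a single reflection.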
The average diameter of AgNP was determined to be between 10 and 20 nm. The clear and consistent lattice fringes visible in the inset FEG-TEM image ( Fig. 2 B) show the crystalline nature of the produced AgNPs. The observed distance between lattices is around 0.23 nm, which is attributable to the (111) planes of a silver crystal lattice. Furthermore, it implies that the prominent faces of silver nanoparticles are in good agreement with the face-centered cubic structure's lattice fringe (111). Approaching the electron beam perpendicular to one of the AgNPs spheres resulted in the selected area electron diffraction (SAED) pattern ( Fig. 2 C). The diffraction spot pattern's strong symmetry suggests that the produced AgNPs are well crystalline. Fig. 2 D shows a slight increase in particle size, significant agglomeration, and noticeable morphological alterations, indicating that PVP was successfully adsorbing onto the surface of AgNPs. Conspicuous changes in color intensity are seen in Fig. 2 (E) and (F), with separate areas showing considerable blackness and brightness between two individual particles. This contrasting color disparity signifies the effective encapsulation and loading of a drug onto the PVP-coated AgNPs. 3.1.3 UV–Vis spectra Visual examination of the solution was used to monitor the reduction of Ag ions (Ag + ) to Ag nanoparticles. The color shift from colorless to yellow in the AgNO 3 :NaBH 4 solution during the NPs synthesis was the first sign that AgNPs were effectively created. Silver nanoparticles look bright yellowish in an aqueous solution because of the surface plasmon resonance (SPR) of metal nanoparticles, according to Ref. [ 34 ]. The surface plasmon resonance (SPR) of metal nanoparticles is widely understood to be created by numerous excitations of electrons near the nanoparticle's surface in resonance with a light wave. The UV–Vis spectra of silver nanoparticles ( Fig. 
3 ) at room temperature indicate a prominent surface plasmon resonance (SPR) band at 417 nm, confirming the synthesis of AgNPs, whereas the surface plasmon absorption peak for the PVP-coated silver nanoparticles appears at 420 nm, exhibiting a redshift. This red shift of the surface plasmon absorption in PVP-coated silver nanoparticles can be attributed to the presence of PVP as a capping agent. Experimentally, it is reported that the intensity and position (λmax) of the surface plasmon resonance depend on the particle size, shape, capping agent, and environment of the particle [ 35 , 36 ]. The UV–Vis spectra can be used to determine the size distribution of nanoparticles in a colloidal solution. Sharma and colleagues suggest that the full width at half maximum (FWHM) of the UV–Vis absorption peaks can serve as an indicator of the extent of nanoparticle aggregation [ 37 ]. In the present study, the narrower peak broadening (FWHM) in the UV–Vis absorbance spectra of the silver nanoparticles correlates with lower nanoparticle polydispersity. In addition, the UV–Vis absorbance spectra of AC@AgPVP NPs and DS@AgPVP NPs displayed strong absorption peaks at 314 nm and 293 nm, respectively. These additional peaks can be explained by particular chromophores or phytochemical constituents of the leaves. 3.1.4 Particle size distribution and zeta potential The size of the particles had to be determined to confirm their nanoscale nature. Accordingly, the Z-average size, particle size distribution, and polydispersity index (PDI) of the drug-loaded Ag NPs were measured with a Zetasizer. DLS was utilized to investigate the average nanoparticle diameter and the size distribution profile of the colloidally produced materials. The average diameter of the nanoparticles was calculated using the Stokes-Einstein equation (Equation No. 3): d_H = k_B T / (3 π η D), where d_H is the hydrodynamic diameter, k_B is Boltzmann's constant, T is the absolute temperature, η is the viscosity of the medium, and D is the diffusion coefficient [ 38 ]. In terms of intensity-weighted particle size distribution, Fig. 4 (A-D) depicts the DLS size measurements for AgNPs, AgPVP NPs, DS@AgPVP NPs, and AC@AgPVP NPs. The calculated average particle size was 23.06 ± 3.62 nm for AgNPs, whereas the PVP-Ag NPs had a particle size of 66.04 ± 16.02 nm. Both sizes obtained by DLS were somewhat larger than those obtained by HRTEM. This may be because HRTEM measures the physical size, whereas DLS measures the hydrodynamic size; as a result, DLS reports a larger size. Table 1 demonstrates that, with the addition of PVP and drug, the average size of the AgNPs rises, confirming the adherence of PVP and drug to the surface of the AgNPs. Another essential statistic for measuring particle size heterogeneity in the medium is the PDI, which ranges from 0 to 1: 0 implies a narrow size distribution, and 1 denotes a very broad size distribution with the possibility of large particles or aggregates. The estimated PDI values for AgNPs and AgPVP NPs confirm the nanoparticles' monodispersity. Furthermore, the higher PDI values in the drug-containing AgNP samples show that biological molecules are loaded onto the surface of the NPs, which is consistent with prior findings. 3.1.5 Fourier Transform Infrared Spectroscopy (FTIR) Fig. 5 presents the FTIR spectra spanning the 500-4000 cm −1 range, used to identify functional groups on the NPs. The peak observed at 862 cm −1 indicated the presence of C–C stretching vibrations, while the peak at 1037 cm −1 represented the C–N stretching vibrations originating from the PVP polymer backbone. Furthermore, the peak observed at 1083 cm −1 was associated with the C–O stretching vibrations of the PVP molecule. 
The peaks at 1236 cm −1 and 1384 cm −1 corresponded to the C–O and C–N stretching vibrations, respectively, arising from the amide groups within the PVP structure. Soni et al. suggest that a strong, sharp peak at 1642 cm −1 can be attributed to C O stretching vibration, which exhibits shift when compared to pure PVP [ 39 ]. This wavenumber shift in the C O bond could be caused by bond weakening induced by the partial donation of lone pair electrons from oxygen in PVP to the vacant orbital of Ag. The presence of an intense peak at 2373 cm −1 indicated a C C triple bond stretching vibration. Additionally, the peaks at 2934 cm −1 and 3426 cm −1 were attributed to the asymmetric and symmetric stretching vibrations of the O–H groups present in the PVP polymer. The peak centered at nearly 1384 cm −1 can be assigned to C–H bending from CH 3 hydrocarbon groups which are likely to be present in all samples. In the spectra obtained after drug loading, a distinct and broadband was observed at approximately 3445–3459 cm −1 , indicative of the stretching vibration associated with hydroxyl (-OH) groups. This characteristic band confirms the presence of hydroxyl groups, which are essential constituents of diverse phenolic phytochemical compounds, including flavonoids, phenolic acids, and polyphenols. The detection of this band further confirms the presence of these bioactive compounds in the drug-loaded samples. The weak peak at 1074 cm −1 appearing in drug-loaded spectra corresponded to the C– O –C stretching of alkyl-substituted functional units. The prominent peak ( Fig. 5 ) region around 650-750 cm −1 belongs to C–H out of a plane from mononuclear aromatic benzene, which suggests the presence of a Flavonoid derivative with a Quercetin-like structure [ 40 ]. Moreover, the overall intensity shift and additional absorption peaks in the FTIR spectra further confirmed the presence of the drug on the PVP-Ag NPs surface. 
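The Stokes-Einstein relation used for the DLS sizing in Section 3.1.4 can be sketched as a short calculation. The diffusion coefficient and water viscosity below are illustrative assumptions (not values from the paper), chosen so the result lands near the reported ~23 nm hydrodynamic diameter of the bare AgNPs:

```python
from math import pi

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def hydrodynamic_diameter(diff_coeff, temp_k=298.15, viscosity=0.89e-3):
    """Equation No. 3: d_H = k_B * T / (3 * pi * eta * D).
    diff_coeff in m^2/s, viscosity in Pa*s; returns d_H in metres."""
    return K_B * temp_k / (3 * pi * viscosity * diff_coeff)

# Illustrative diffusion coefficient for a ~23 nm particle in water at 25 C
d_h = hydrodynamic_diameter(2.13e-11)
print(round(d_h * 1e9, 1))  # hydrodynamic diameter in nm, approx. 23.0
```

Because the instrument measures D and reports d_H, anything that slows diffusion (the PVP shell, the adsorbed extract) inflates the DLS size relative to the TEM size, which is the gap the text describes.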
3.2 Cell culture studies 3.2.1 Anti-proliferative study To estimate the cytotoxic or anticancer effects of NPs on A549 cells, an MTT assay was performed. A549 cells were treated with different concentrations of NPs for 24 h. The IC 50 values of DS@AgPVP NPs, AC@AgPVP NPs, and AgPVP NPs against A549 cells were found to be 14.25 ± 1.85, 21.75 ± 0.49, and 41.60 ± 2.35 μg/ml. The IC 50 values of DS@AgPVP NPs, AC@AgPVP NPs, and AgPVP NPs against WI-38 cells were 420.69 ± 2.87, 408.20 ± 3.41, and 391.80 ± 1.55 μg/ml, respectively. The standard drug methotrexate (positive control) showed an IC 50 value of 10.20 ± 1.82 μg/ml on A549 cells and 26.21 ± 1.14 μg/ml on WI-38 cells. NP treatments significantly reduced the viability of A549 cells, as shown in Fig. 6 A (A-D), compared with untreated cells. The well-known chemotherapy drug methotrexate has some serious negative side effects, so low dosages are ideal for the patient's therapy. The NPs employed in this investigation demonstrated cytotoxicity exclusively against lung cancer cells, not toward healthy lung cells. This is attributable to the embedded natural plant extract; natural plant extracts have a reputation for being non-toxic to the body's normal cells. The toxicity of methotrexate against Human Embryonic Kidney cells (HEK 293T) was also documented by Patel et al. in 2011 [ 41 ], indicating that the drug may also harm healthy human cells. The formation of nanoparticles increased the efficacy, as evidenced by a reduction in IC 50 values. DS and AC hydromethanolic crude extracts showed IC 50 values of 90.56 ± 2.32 μg/ml and 92.83 ± 1.98 μg/ml, respectively [ 24 , 25 ]. The lower IC 50 values of the nanoparticles relative to crude extracts of the same materials imply that the AgPVP NPs have a stronger cytotoxic effect. Akter et al. [ 42 ] also reported a link between particle size and toxicity, demonstrating that smaller particle sizes induce greater toxicity. 
It is crucial to note that the capping material can alter the bioactivity of coated AgNPs, since it helps maintain AgNP surface chemistry by stabilizing the particles, defining their shape, and reducing Ag + [ 43 , 44 ]. This section considers the possible effects of AgNP coatings on toxicological phenomena. The kind of coating material used can influence the cytotoxicity of AgNPs. Typically, the processes that lead to the induction of toxicity include ROS generation, the depletion of antioxidant defense systems, and the loss of mitochondrial membrane potential. The surface coating of AgNPs can influence their aggregation, dissolution ratio, and shape; the types of coatings utilized and their characteristics are therefore critical in determining the cytotoxicity of AgNPs. Fig. 6 B depicts the concentration-response curve of the cytotoxicity experiment in A549 cells treated with NPs, methotrexate (positive control), and vehicle (hydromethanol). The vehicle has no discernible influence on the viability of A549 cells: in the control and vehicle groups, the percentage of cell proliferation remains the same. The increased efficacy of NPs is corroborated by contemporary reports. Venugopal et al. [ 45 ] reported that AgNPs have good cytotoxic activity against MCF-7 and A549 cells, with IC 50 values of 60 and 50 μg/ml. Tian and his team [ 14 ] found that AgNPs are also cytotoxic, with an IC 50 value of 50 μg/ml against A549 cells. Vivek et al. [ 46 ] reported that AgNPs have good cytotoxic activity against MCF-7, with IC 50 values of 50 μg/ml for 24 h and 30 μg/ml for 48 h of treatment. Hublikar et al. [ 47–49 ] reported that green-synthesized AgNPs from different extracts have good activity against A549 cells (IC 50 values of 85.47 and 49.52 μg/ml) and also have good antibacterial properties toward the bacterium E. coli . 
3.2.2 Effect of nanoparticles on the morphology of A549 cells Morphological changes were observed in NP-treated A549 cells compared with control cells ( Fig. 6 B (A)). The loss of membrane integrity, reduced cell development, and cytoplasmic condensation transformed the polygonal or bigonal-shaped lung cancer cells into round-shaped cells, which was the most notable morphological alteration of NP-treated cells seen in this work ( Fig. 6 B (B-E)). 3.2.3 Detection of intracellular ROS An important line of study focuses on ROS formation in cancer cells caused by drug-induced oxidative stress reactions [ 50 ]. When A549 cancer cells were treated with the IC 50 concentrations of the various NPs in our investigation, higher ROS levels were observed in the cancer cells than in normal cells. The ability of the NPs to stimulate ROS generation prevents further progression of the cell cycle, including cellular proliferation, attachment, and maturation, and elevates the expression of apoptosis and necrosis genes [ 51 ]. Through this oxidation process, the NPs produced ROS in the lung cancer cells after the appropriate time interval, and these ROS accumulated on the DNA granules. The DNA then lost its capacity for transfer, preventing the synthesis of the polymerase enzyme [ 52 ]. Immature colonies were seen outside the cells, where the surrounding membrane was destroyed. The oxidation potential was detected using the fluorescent DCFH-DA dye: when DNA function was destroyed, condensed immature DNA was released and bound to the DCFH-DA fluorescent dye, resulting in a sparse cell shape. Compared with untreated cells, treated cells displayed a severely condensed shape and necrotic structure ( Fig. 7 B-D), while the clumped colonies of untreated cells retained their smooth shape ( Fig. 7 A). This confirmed that the effects of the AgNPs on A549 cells induced intracellular ROS production [ 53 ]. 
Furthermore, the activator genes were silenced, and the cells' continuous production of ROS entered a decline phase, resulting in cell death. As a result, the current findings show that NPs have a stronger ability to suppress cancer cell proliferation due to oxidative stress-mediated ROS formation. According to experimental research, AgNP-generated ROS triggers the intrinsic apoptotic mechanism of cancer cells. The chemical conversion of Ag° to Ag+, Ag–O–, and Ag–S– increases the generation of ROS in cells. Overproduction of ROS within cells causes lipid peroxidation, protein oxidation, and DNA damage, all of which can trigger the cell's intrinsic apoptotic pathway. When cellular damage is high, the intrinsic apoptotic pathway is triggered by up-regulating pro-apoptotic Bcl-2 family members and down-regulating anti-apoptotic proteins [ 54 , 55 ]. According to Naveen Kumar [ 56 ], ROS is a crucial target for cancer cell suppression, and AgNPs significantly boosted oxidative stress responses in A549 cells. Bhakya [ 52 ] obtained a similar result against A549 cells by using a higher concentration of AgNPs. Furthermore, the oxidative stress response triggered the activation of apoptosis-related genes and resulted in programmed cell death due to intracellular breaches in the mitochondrial membrane. Padmini et al. [ 57 ] reported that Allium sativum silver nanoparticles with an IC50 value of 22 μg/ml induced ROS-mediated apoptosis in A549 cells. The results showed conclusively that the increased levels of ROS production are directly related to the enhanced apoptotic effectiveness of the nanoparticles. Recent research has likewise linked the production of reactive oxygen species (ROS) within cells to the cytotoxic effect of AgNPs [ 58 , 59 ]. 3.2.4 Detection of MMP After 24 h of incubation, the fluorescent dye JC-10 was absorbed, permitting visualization of cytochrome c leakage through the mitochondrial membrane under a fluorescence microscope ( Fig. 8 ).
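JC-10 (like its relative JC-1) is commonly quantified as a red/green fluorescence ratio, with a falling ratio indicating mitochondrial depolarization. A minimal sketch under that assumption; the intensities and the threshold of 1.0 are illustrative, not values measured in this study:

```python
def mmp_status(red, green, threshold=1.0):
    """Classify mitochondrial membrane potential from JC-10 fluorescence.
    Polarized mitochondria accumulate red aggregates; on depolarization the
    dye remains as green monomers, so the red/green ratio drops."""
    ratio = red / green
    return ratio, ("polarized" if ratio >= threshold else "depolarized")

print(mmp_status(850, 400))  # control-like cells: ratio > 1 -> polarized
print(mmp_status(220, 610))  # NP-treated cells: ratio < 1 -> depolarized
```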
The dye JC-10 is quite useful for determining the state of a damaged cancer cell membrane, as it reports on the interruption of the intrinsic route of the cancer cell cycle. Because the dye associates with injured cells, it can cling to torn mitochondrial membranes. When cells were harmed, apoptotic and necrotic cells were continually created, and they appeared in a variety of hues ranging from orange to green. It is critical to monitor the depolarized mitochondrial membrane caused by the dysfunction of responsive genes. When the mitochondrial membrane is disturbed, caspases are activated and the expression of the suppressive gene Bcl-2 decreases. Following the passage of JC-10 through the mitochondrial membrane, the internal leakage materials were lost and the color changed from red to green, suggesting apoptosis [ 60 ]. Our results were consistent with this pattern: the NPs inhibited A549 cells at their IC50 concentrations. They triggered a cascade of apoptotic cell receptors in A549 cells, resulting in repeated life-cycle arrests due to increased responsive gene activation, which is the first step in initiating apoptosis. This approach was effective against cancer cells in that mitochondrial membrane damage, induced by instability and functional change, promotes cell death. Our findings agree with Du et al. (2017) [ 61 ], who stated that mitochondria are essential for cell differentiation, death, and cell cycle growth control. 3.2.5 Effect on the integrity of nuclei by DAPI staining Nuclear fragmentation caused by the NPs was detected with DAPI, a prominent nuclear counter-stain. Untreated cells had normal nuclei (smooth nuclear outline), but A549 cells treated with NPs had apoptotic nuclei (condensed or fragmented chromatin), as illustrated in Fig. 9 (A-D). In the A549 cells, nuclear morphology studies revealed typical apoptotic alterations such as chromatin condensation, nucleus fragmentation, and the production of apoptotic bodies.
Surprisingly, recent research has found that AgNPs can cause DNA damage and death in cancer cells [ 62 ]. The number of apoptotic cells rose as the NP concentration increased, suggesting that the nanoparticles trigger cell death ( Fig. 9 B-D). The cells' characteristics, which included spikes, shrinkage, and other signs of DNA fragmentation, were comparable to those previously reported as indicators of apoptosis [ 63–67 ]. In agreement with these results, cancer cells have been shown to undergo DNA damage and apoptosis when exposed to AgNPs [ 21 ]. 3.2.6 Detection of apoptosis by AO/EB staining The induction of apoptosis after treatment with IC50 concentrations of NPs was assessed by fluorescence microscopy after staining with AO/EB. AO penetrates the cell membrane, so normal cells show green fluorescence, whereas in apoptotic cells nuclear shrinkage and blebbing form apoptotic bodies that are observed as orange-colored bodies. Necrotic cells were observed as red fluorescence, owing to their loss of membrane integrity, when viewed under the TRITC filter in an inverted fluorescence microscope ( Fig. 10 (A-H)) [ 68 ]. Venugopal et al. [ 45 ] reported that AgNPs synthesized from Syzygium aromaticum produced both live and dead MCF-7 cells, as confirmed by the AO/EB staining method. 3.2.7 Detection of morphological changes in cells by Giemsa staining The early morphological markers of apoptosis were identified using Giemsa staining. Cell shrinkage, loss of membrane asymmetry and attachment, and plasma membrane blebbing are the key features associated with these morphological alterations. Cultures treated with NPs contained the most morphologically altered cells compared with untreated cultures ( Fig. 11 A). Under a phase-contrast microscope, Giemsa staining allows normal and treated cells to be distinguished, indicating that the nanoparticles were more detrimental to A549 cells than to normal cells ( Fig. 11 A-D).
3.2.8 Colony formation assay During metastasis, a cell detaches from the original tumor site and spreads to different locations in the body. A colony formation assay is utilized to measure cell adhesion. Colony formation was significantly reduced in cells treated with NPs: the number of colonies varied greatly between the NP-treated and untreated groups, although their size did not change ( Fig. 12 A). The reduced colony number in the treatment group demonstrated the extract's anti-proliferative efficacy. According to dye quantification, the proportion of dye uptake in cells treated with the extract was significantly lower ( Fig. 12 B). NPs significantly reduced colony numbers when compared to untreated cells (p < 0.0001). Bendale et al. [ 30 ] reported that platinum nanoparticles with an IC50 value of 200 μg/ml reduced colony formation in A549 cells. 3.2.9 Investigation of the mechanism of apoptosis by gene expression studies To investigate the mechanism of apoptosis induction during the 24 h incubation of A549 cells treated with NPs, qRT-PCR was used to evaluate the expression of several apoptosis genes, including Bcl-2, Bax, Cas-3, Cas-9, Cas-8, Fas, TNF-α, DR4 , and DR5 . Treatment of the A549 cells with IC50 concentrations of NPs significantly decreased the mRNA level of Bcl-2 ( Fig. 13 A (A-I), 13B (A-I), 13C (A-I)). In addition, the mRNA levels of Bax, Cas-3, Cas-9, Cas-8, Fas, TNF-α, DR4, and DR5 were significantly increased after 24 h (p < 0.05); the NPs elevated the levels of Cas-3, Cas-9 , and Cas-8 in A549 cells by around 2.5-fold in comparison to the control ( Fig. 13 ). This finding suggests that the NPs cause apoptosis by disrupting the mitochondrial membrane potential; this decrease in mitochondrial membrane potential may have triggered the apoptotic cascade in A549 cells exposed to NPs. Activation of the DR4, DR5, Fas , and TNF-α receptors is lethal to cancer cells [ 69 , 70 ].
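Fold changes like the ~2.5-fold difference reported above are typically derived from qRT-PCR Ct values via the standard Livak 2^(-ΔΔCt) method. A sketch with hypothetical Ct values, assuming GAPDH as the reference gene (the reference gene is not named in the text):

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the Livak 2^-ddCt method:
    dCt = Ct(target) - Ct(reference gene); ddCt = dCt(treated) - dCt(control)."""
    ddct = (ct_target_treated - ct_ref_treated) - (ct_target_control - ct_ref_control)
    return 2 ** (-ddct)

# Hypothetical Ct values (GAPDH assumed as reference gene)
# Bcl-2 down-regulated: higher Ct in treated cells -> fold change < 1
print(fold_change(26.0, 18.0, 24.0, 18.0))  # 0.25
# Bax up-regulated: lower Ct in treated cells -> fold change > 1
print(fold_change(22.0, 18.0, 24.0, 18.0))  # 4.0
```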
According to various researchers, AgNPs interact with cell membrane proteins, activate signaling pathways, and produce ROS, which damage proteins and nucleic acids and ultimately lead to apoptosis and the inhibition of cell proliferation. The action of AgNPs on mitochondrial membrane permeability causes the release of ROS, resulting in oxidative stress, interruption of ATP production, and DNA damage, which may drive JNK-mediated, caspase-dependent apoptosis in human cell lines. JNK belongs to the MAPK family and contributes to apoptosis by phosphorylating Bcl-2 , resulting in Bcl-2 inactivation. When cytochrome c enters the cytosol, it initiates a cascade involving Apaf-1 and caspase 9, culminating in the activation of caspase 3. The whole picture suggests that AgNPs induce apoptosis in cancer cells via ROS-mediated activation of the intrinsic pathway [ 71–75 ]. The activation of TRAIL receptors activates the caspase 8 channel (the extrinsic route), while the mitochondrial cascade signals the activation of caspase 9 (the intrinsic pathway). Both caspases activate the common executioner ( Caspase 3 ) in A549 cells, causing apoptosis [ 76–78 ]. Throughout this process, the anti-apoptotic gene Bcl-2 is downregulated, which greatly raises the Bax/Bcl-2 ratio and initiates apoptosis in A549 cells [ 79 , 80 ]. As a result, in this study we proposed a mechanism of action of the NPs on A549 cells ( Fig. 14 ). In their study, Bethu et al. (2018) [ 81 ] observed that the administration of RS-AgNPs induced apoptosis in cancer cells. This phenomenon was attributed to the upregulation of several proapoptotic proteins, including caspase-3, caspase-8, caspase-9, p53, and Bax , together with the downregulation of Bcl-2 , suggesting the activation of both the intrinsic and extrinsic pathways of apoptosis.
According to earlier research, AgNPs induce apoptosis in A549 cells via the intrinsic pathway, whereas DS and AC, both crude extracts, induce apoptosis in A549 cells via both the intrinsic and extrinsic pathways. According to the gene expression analysis, only the AgNPs and the DS and AC crude-extract nanoparticles activate apoptotic pathways, intrinsically or extrinsically, in human lung adenocarcinoma (A549) cells. 3.3 Brine shrimp lethality assay The brine shrimp cytotoxicity of AgNPs is an indicator of their pharmacological properties [ 82 ]. In this study, ten concentrations (10–100 μg/ml) of the synthesized nanoparticles were used to determine their cytotoxicity in a brine shrimp lethality assay. The LD50 value of DS@AgPVP NPs was found to be 40 μg/ml, which is lower than those of AC@AgPVP NPs (70 μg/ml) and AgPVP NPs (80 μg/ml) ( Table 2 ). The lower LD50 values indicate that DS@AgPVP NPs and AC@AgPVP NPs are more cytotoxic than AgPVP NPs, owing to the crude extracts capping the surface of the AgPVP NPs. The enhanced cytotoxicity of the nanoparticles toward brine shrimp (LD50 of 40 μg/ml) revealed the presence of toxic constituents. The cytotoxic effects of nanoparticles on shrimp larvae can be linked with anticancer activity, and nanoparticles could be an alternative source of anticancer drugs [ 82 ]. These days, much attention is being given to metallic nanoparticles and their anticancer activity, although the toxicity of these AgNPs and their mechanism of action have not yet been explained in detail [ 82 ]. 4 Conclusion The use of plant extracts from D. sissoo (DS) and A. calamus L. (AC) leaves in the synthesis of silver nanoparticles (AgNPs) resulted in the formation of well-characterized, face-centered cubic structured nanoparticles.
The crystalline nature, shape, and effective inclusion of polyvinylpyrrolidone (PVP) as a capping agent were validated by a range of techniques, including X-ray diffraction (XRD), Field Emission Gun Transmission Electron Microscopy (FEG-TEM), UV–Vis spectroscopy, and Fourier Transform Infrared Spectroscopy (FTIR). MTT assay findings revealed considerably lower IC50 values on human adenocarcinoma lung cancer (A549) cells than on normal lung cells (WI-38), indicating the anti-cancer potential of the nanoparticles (AgPVP NPs, DS@AgPVP NPs, and AC@AgPVP NPs). The nanoparticles induced the production of reactive oxygen species (ROS), changed the mitochondrial membrane potential (MMP), and displayed apoptosis-inducing capabilities along with the prevention of cancer cell invasion. Notably, this study provides a comprehensive understanding of the synthesis process, characterization, and therapeutic potential of silver nanoparticles functionalized with plant extracts, opening avenues for further research and development in nano-medicine and cancer treatment. Data availability statement Data will be made available on request. CRediT authorship contribution statement Anjali B. Thakkar: Writing – original draft, Validation, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. R.B. Subramanian: Writing – review & editing, Supervision. Vasudev R. Thakkar: Writing – review & editing, Supervision. Sandip V. Bhatt: Writing – review & editing, Validation, Supervision, Methodology, Conceptualization. Sunil Chaki: Writing – review & editing, Supervision. Yati H. Vaidya: Formal analysis. Vikas Patel: Formal analysis. Parth Thakor: Writing – review & editing, Writing – original draft, Validation, Supervision, Methodology, Formal analysis, Conceptualization. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements The authors are thankful to the P. G. Department of Biosciences and P. G. Department of Applied and Interdisciplinary Sciences (IICISST), DST-PURSE Programme, Sardar Patel University for the Instrumentation and infrastructure facilities.
|
[
"ANNA",
"VIJAYAKUMAR",
"BANO",
"IRAM",
"ALSHEDDI",
"ABDELNABY",
"HE",
"HARSHINY",
"STENSBERG",
"CHALOUPKA",
"KOHL",
"TIAN",
"DOSSANTOS",
"ONG",
"LOCATELLI",
"WEI",
"GENGAN",
"JEYARAJ",
"SANPUI",
"JEYARAJ",
"SRIRAM",
"THAKKAR",
"THAKKAR",
"THAKOR",
"GAJERA",
"DEGLI",
"AGABEIGI",
"BENDALE",
"MORAIS",
"GIRI",
"RAJPUT",
"ANANDALAKSHMI",
"KAUR",
"SILVA",
"SHARMA",
"MEHTA",
"SONI",
"BIJAULIYA",
"PATEL",
"AKTER",
"RETOUT",
"FAHMY",
"VENUGOPAL",
"VIVEK",
"HUBLIKAR",
"HUBLIKAR",
"HUBLIKAR",
"HU",
"SELVI",
"BHAKYA",
"RAJIVGANDHI",
"TOSUN",
"KAPLAN",
"NAVEENKUMAR",
"PADMINI",
"FRANCOMOLINA",
"KOVACS",
"MOMTAZIBOROJENI",
"DU",
"MAY",
"MONKS",
"VIJAYARATHNA",
"GRBOVIC",
"GRAIDIST",
"LUKHELE",
"KUMAR",
"RICCI",
"SHIN",
"KANIPANDIAN",
"ULLAH",
"HEMLATA",
"ZHEN",
"DAEI",
"KHOSRAVIFAR",
"NDEBELE",
"CRETNEY",
"TSUJIMOTO",
"BORNER",
"BETHU",
"PHULL"
] |
b3dc85372d1a4fc9a270db9e229a45b9_Revisión del fallo ventricular derecho agudo_10.1016_S1134-0096(07)70262-0.xml
|
Revisión del fallo ventricular derecho agudo
|
[
"Yankah, Charles A."
] | null | null |
[] |
a58b2732e91a44c8b51e159d0f906a6b_Primary hyperparathyroidism in a child The musculoskeletal manifestations of a late presenting rare _10.1016_j.ejrnm.2016.09.002.xml
|
Primary hyperparathyroidism in a child: The musculoskeletal manifestations of a late presenting rare endocrinopathy
|
[
"EL-Sobky, Tamer Ahmed",
"Ahmad, Khaled A.",
"Samir, Shady",
"EL Mikkawy, Dalia M.E."
] |
Primary hyperparathyroidism (PHPT) is rare in children and adolescents, but has greater morbidity in this age group. Most of these patients show predominantly skeletal pathology and to a lesser extent renal involvement. Osteopenia, osteoporosis and subperiosteal resorption are frequently encountered radiographic skeletal signs. This study describes the orthopedic manifestations of PHPT in a child. PHPT in this child exhibited a late presentation with significant clinical morbidity and extensive radiographic manifestations. The characteristic radiographic pattern of PHPT in childhood is an important contributor to the diagnosis. The radioclinical and biochemical correlations augment diagnostic accuracy and delineate extent of skeletal pathology.
|
1 Introduction Primary hyperparathyroidism (PHPT) in children is a rare entity. In contrast with the clinical profile of PHPT in adults, PHPT is much less common in infants and children, with an incidence estimated at only 2–5 in 100,000 [1] . Primary hyperparathyroidism also appears to be a more aggressive disorder in children than in adults. In most cases, PHPT results from a single benign parathyroid adenoma [1–3] . Childhood and adolescent PHPT often presents with vague symptoms, usually including bone pain and abdominal pain. Most of these patients show predominantly skeletal pathology and, to a lesser extent, renal involvement. Interestingly, PHPT is clinically symptomatic in most younger patients; few, if any, young patients with PHPT are discovered incidentally to have asymptomatic hypercalcemia [2–5] . Osteopenia, osteoporosis, and subperiosteal resorption are the most frequently encountered radiographic skeletal signs. Primary hyperparathyroidism affects compact bone more than trabecular bone, with particular sensitivity in the cortices of long bones [1–5] . We assume that describing consistent radiographic features of PHPT may aid diagnostic accuracy and help delineate the extent of skeletal involvement. The purpose of this study was to describe the orthopedic manifestations of PHPT in a child. The clinical orthopedic profile was correlated with the skeletal radiographic characteristics of the patient, with a literature update. 2 Case report 2.1 Clinical presentation A 13-year-old girl presented to our pediatric orthopedic outpatient clinic. The presenting symptom was generalized bone aches, especially of the spine and lower extremities. Repeated fractures arising from trivial trauma were reported. The patient’s symptoms had deteriorated over the previous 6 years. Prior to presentation to the authors, the patient had received repeated simple cast immobilization and analgesics. She had presumably been treated as a case of traumatic fractures or osteogenesis imperfecta.
Eventually, the patient had become wheelchair-bound for the year prior to presentation. No history of underlying disease or surgical interventions was reported. No family history suggestive of multiple endocrine neoplasia was encountered. General examination revealed normal parameters. The patient exhibited an average build and had already achieved menarche at the time of presentation. Local examination revealed diffuse musculoskeletal tenderness, especially of the lower limbs and spine. There was painful restriction of the range of motion of the affected joints. Informed written consent for inclusion in the study was obtained from the patient and her parents. The authors declare that no conflict of interest exists. No financing was received for this study. 2.2 Imaging findings The patient received orthogonal plain radiographic examination of the pelvis, whole spine, and legs to characterize and evaluate the extent of the disease. The images depicted a wide array of manifestations attributed to generalized skeletal demineralization, bone formation, and pathologic fractures ( Figs. 1–3 ). 2.3 Laboratory findings Laboratory investigations revealed hypercalcemia of 11.8 mg/dl (reference range 9.5–10.4)/2.92 mmol/L (range 2.38–2.60), elevation of alkaline phosphatase to 786 IU/L (range 153–362)/13.13 μkat/L (range 2.56–6.05), serum phosphorus of 2.8 mg/dl (range 3.5–4.9)/0.90 mmol/L (range 1.13–1.58), and an elevated serum parathyroid hormone of 891 ng/L (range 15–65). Serum creatinine and urine analysis revealed normal findings. The diagnosis of primary hyperparathyroidism was established by correlating the clinical, radiologic, and laboratory findings. 3 Discussion Plain radiographs may yield the most specific findings consistent with PHPT, and radiography is the preferred examination when the clinical findings suggest primary hyperparathyroidism. Furthermore, radiography may be useful in defining the extent of damage [3,5] .
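The paired conventional/SI laboratory values reported above follow standard conversion factors. A quick arithmetic check; note that the reported calcium of 2.92 mmol/L differs slightly from the factor-based 2.94, presumably due to rounding in the source:

```python
# Conventional SI conversion factors for the laboratory values above
CA_MGDL_TO_MMOLL = 0.2495   # calcium: mg/dl -> mmol/L
P_MGDL_TO_MMOLL = 0.3229    # inorganic phosphorus: mg/dl -> mmol/L
IUL_TO_UKATL = 0.0167       # enzyme activity: IU/L -> ukat/L

print(round(11.8 * CA_MGDL_TO_MMOLL, 2))  # ~2.94 (reported as 2.92 mmol/L)
print(round(2.8 * P_MGDL_TO_MMOLL, 2))    # 0.9 mmol/L
print(round(786 * IUL_TO_UKATL, 2))       # 13.13 ukat/L
```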
Hyperparathyroidism is a disease of increased bone resorption and bone formation. Subsequently, plain radiographic findings may include resorption and sclerosis at numerous sites in the skeletal system. Primary hyperparathyroidism affects compact bone more than trabecular bone, with particular sensitivity in the cortices of long bones, leading to subperiosteal bone resorption (seen as periosteal elevation on plain radiography). In advanced PHPT, the entire skeleton can be involved [1,4,5] . The pattern of skeletal demineralization depicted on the radiographs of the current case conforms to the observations of previous authors. Primary hyperparathyroidism in childhood and adolescence is usually diagnosed late and presents with clinically significant morbidity [1–4] ; advanced bone changes tend to be common. It has been emphasized that the presenting symptoms of PHPT in childhood and adolescence are predominantly musculoskeletal [2–4] . Osteopenia, osteoporosis, and subperiosteal resorption are the most frequently described radiographic skeletal signs [1–5] . The clinical profile of the current case is in line with that of previous reports, especially the delayed presentation and the predominantly skeletal complications. The current study demonstrates a clear correlation between the extensive bone resorption, cortical thinning, and pathologic fractures found on the plain radiographs and the patient’s severe bone pain, tenderness, and inability to bear weight. The longstanding patient immobilization may in part have contributed to the generalized bone rarefaction seen on plain radiographs. Premature fusion of the major growth plates of the lower limbs and the other sclerotic manifestations may be attributed to the hypercalcemia and the bone formation phase, especially in the early stage of the patient’s longstanding disease course.
Brown tumors of the long bones and a salt-and-pepper appearance of the skull occur in less than 5% of United States patients with primary disease [6] . Brown tumors were not observed in the presented case. Another way to monitor the severity of bony involvement is with bone densitometry, determined by dual-energy X-ray absorptiometry (DEXA). Bone densitometry is the preferred diagnostic modality for the evaluation of osteoporosis, which is one of the most common findings in patients with PHPT. Bone density in the hip and lumbar spine, for which pediatric reference range values are often integrated into the computer software of the machine, is expected to be low compared with age-related reference values [6] . However, osteoporosis may be associated with other diagnoses; therefore, its specificity for PHPT may be limited. Because of the obvious radiographic skeletal manifestations found in the presented case, in both extent and severity, it was assumed that the added value of DEXA would be minimal. As described, children and adolescents with PHPT are usually diagnosed late, and few, if any, young patients with PHPT are discovered incidentally to have asymptomatic hypercalcemia. We suggest that DEXA may be a diagnostic indicator in these incidentally discovered patients prior to the establishment of manifest radiographic findings. 3.1 Conclusion Primary hyperparathyroidism in children and adolescents seems to be a disease with significant skeletal morbidity. The current pediatric case study showed a clear correlation between the clinical and radiographic skeletal manifestations of PHPT. Skeletal radiographs were helpful to visualize the extent and multiplicity of the lesions associated with PHPT. In addition to diffuse osteopenia, the skeletal lesions presented in this case were bilateral, symmetric, and multifocal, exhibiting different types of bone resorption.
Furthermore, the coexistence of resorption and sclerosis was found at numerous sites in the skeletal system. In addition to the clinical and biochemical profile, the characteristic pattern of involvement on plain radiographs augmented the diagnostic accuracy. Early diagnosis of PHPT in children and adolescents is fundamental to avoid disease related complications and initiate timely and appropriate treatment. Conflict of interest The authors of the current study declare that no conflict of interest exists. No financing was received for research on which our study is based.
|
[
"MALLET",
"LI",
"GEORGE",
"BHADADA",
"HSU",
"SILVERBERG"
] |
0c1e371586e94d6f9f23ade73a00c413_Design of integrated passenger-freight transport A multi-stakeholder perspective_10.1016_j.jpubtr.2023.100069.xml
|
Design of integrated passenger-freight transport: A multi-stakeholder perspective
|
[
"Cavallaro, Federico",
"Eboli, Laura",
"Mazzulla, Gabriella",
"Nocera, Silvio"
] |
Integrated passenger-freight transport (IPFT) is a tactical solution that can potentially reduce travel demand and the costs of first- and last-mile services. Although scientific interest in this topic has increased in the last decade, IPFT contributions are still mostly related to the definition of a general framework. Conversely, the design of the service attributes and the evaluation of the operational performances have received less attention. To address this research gap, this paper presents the results of a Delphi survey of international stakeholders. The aim is to verify the minimum requirements for the introduction of IPFT in both urban and rural contexts, including fare reductions for users, which are necessary to compensate for the differences as compared with passenger-only and freight-only services. The survey results indicate the necessity of an efficient service in terms of information, environmental performances, space division, and security (the last aspect referring more to passenger than to freight transport). Other attributes, such as cleanliness on board, are more debated. Policymakers and practitioners can use these findings as a benchmark for the definition of performance requirements and boundaries for designing the service. The suitability of the IPFT scheme in real-case contexts is then verified. New suburban and urban IPFT services are designed by modifying the characteristics of two existing urban and suburban bus lines that operate in the Italian provinces of Forlì-Cesena and Rimini, according to both the results of the Delphi survey and the territorial specificities. The scheme’s suitability to the existing schedules is finally determined.
As next steps, the new design has to be assessed through a) a stated preference survey submitted to potential users of the service; b) a supply model that verifies the matching of supply and demand under the new configuration, and c) an economic evaluation of the service, which considers the perspective of single actors.
|
1 Introduction The term "first-last mile" transport (FLM) is used in the literature to indicate the first and the last legs of each trip and may refer to both passenger and freight transport. It represents the set of links and services between an existing main transport service and its potential final users ( Nocera et al., 2021 ). From the perspective of transport operators, FLM is associated with high operational costs, which can account for 25–40% of the entire trip chain ( Macharis and Bontekoning, 2004 ) and, in some cases, can reach 50% ( Goh et al., 2011 ) or even 75% ( Boyer et al., 2004 ). According to Digiesi et al. (2017) , FLM is a significant issue, especially at the urban scale, because it is often fragmented and uncoordinated; consequently, it leads to low utilisation of vehicles, excessive movement, high environmental externalities, impacts on the community, and increased system costs. Among the numerous solutions proposed to improve the performance of the FLM, the integration of passenger and freight transport into a unique operational scheme is a potentially valid alternative ( EC, 2007 ). The integration of passenger and freight transport (IPFT) is a potential solution to increase transport efficiency by merging public transport with the distribution of goods. Its theoretical application may be quite extensive: if framed within the mobility as a service (MaaS) concept, it may contribute to improving the capacity use in public transport and reducing freight movements in cities ( Le Pira et al., 2021 ). Furthermore, its suitability has been tested in rural areas, by merging a demand-responsive transport system with the distribution of parcels in lockers located at main stops ( Cavallaro and Nocera, 2023a ). Operatively, Trentini and Malhene (2010) identified three potential forms of integration: infrastructural (i.e. shared infrastructure for freight and passenger vehicles), vehicular (i.e.
goods and passengers transported on the same vehicle), and nodal (i.e. selected node of the network that combines passenger and freight functionalities). Vehicular integration has been frequently studied and discussed, whereas infrastructural and nodal components have been less investigated. The nature of these contributions has been assessed methodologically by classifying existing studies into interviews, case studies, concepts, models and simulations, and reviews. In rough terms, two main groups can be identified, which are represented equally. The first group is characterised by an attempt to conceptualise IPFT from different perspectives, including social orientation ( Horcas et al., 2020 ), links with transport externalities ( Wosiyana, 2005 ), and urban infrastructure ( Spickermann et al., 2014 ). The second group includes studies that address operational issues, propose a model to solve them, and test them experimentally ( Bakker, 2015; van Duin et al., 2019 ). The IPFT scheme has already been proposed for use in air, ferry, and long-distance rail transport, with a mixed use of vehicles for passenger and freight transport ( Ghilas et al., 2013 ). However, FLMs are less common in urban, rural, and peripheral contexts. The performance evaluation of the IPFT for the FLM is essential to understand the potential applicability of this solution in real-life conditions. In recent decades, the evaluation of service performance has become crucial for practitioners, managers, and researchers, who have mainly focused on the passengers’ perspective. The focus on the needs of public transport (PT) users, either current or potential, is based on the assumption of a central and consolidated role with European Standard EN 13816 ( CEN, 2002 ). Passengers’ opinions have generally been derived from customer satisfaction surveys (CSS), which allows the collection of data in terms of rates of specific service attributes or agreement/disagreement levels about certain statements. 
In addition, passengers’ opinions have been collected by means of stated preference (SP) experiments, in terms of ranking, rating, and choice among hypothetical scenarios. Several studies have analysed different PT modes, from bus systems to railway, airport, and airline services ( Transportation Research Board, 1998; Friman and Fellesson, 2009; Redman et al., 2013; de Oña and de Oña, 2015; Hansson et al., 2019 ). As for traditional PT services, understanding users’ perceptions and comprehending how well an IPFT transit agency performs during the FLM are essential to verify its attractiveness and, hence, its potential design. To the best of our knowledge, studies addressing this issue have not yet been reported. Only Cochrane et al. (2017) conducted a Delphi survey on this topic, including 34 experts. However, the aim of their research was to explore the challenges and opportunities of freight on PT and to conceptualise potential operations in Toronto; the study did not focus on the evaluation of the technical performance that makes the service appealing to users. Gatta et al. (2018) modelled the willingness to pay for a Business2Client service in Rome, but their analysis was limited to crowdshipping, which is a specific application of IPFT. The present study is inserted into this research line by identifying the performances of an integrated passenger-freight design according to the perspective of international stakeholders. Indeed, as the issue of IPFT involves a multitude of actors, the identification of a possible solution needs to be explored by considering their different instances. To this aim, an international Delphi survey evaluates the minimum service requirements for the provision of an IPFT in urban and suburban/rural contexts.
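Delphi surveys like the one described here typically judge expert consensus from the spread of panel ratings between rounds. A minimal sketch using a median/interquartile-range rule; the threshold and ratings are illustrative, not the survey's actual criteria:

```python
from statistics import median

def delphi_consensus(ratings, iqr_max=1.0):
    """Report the panel median and whether the interquartile range of
    the expert ratings is tight enough to call consensus."""
    s = sorted(ratings)
    mid = len(s) // 2
    lower, upper = s[:mid], s[mid + (len(s) % 2):]  # halves around the median
    iqr = median(upper) - median(lower)
    return median(s), iqr, iqr <= iqr_max

# Hypothetical 1-5 ratings of one service attribute by a 9-expert panel
print(delphi_consensus([4, 4, 5, 4, 3, 4, 5, 4, 4]))  # (4, 0.5, True)
```

Attributes that fail the consensus test would be fed back to the panel with the group statistics for re-rating in the next round.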
The expected impacts deriving from the introduction of such schemes on selected service attributes, and the maximum acceptable variation from traditional passenger-only and freight-only services, are presented and discussed. With such an outcome, policy makers and practitioners can understand the changes that may be introduced to the existing operational schemes and, hence, adapt them accordingly. Other complementary aspects, such as the economic evaluation or the optimisation of this integrated scheme, are not directly addressed and are left open for future research. The remainder of the paper is structured as follows. Section 2 presents a literature review of the main contributions related to IPFT and the evaluation of passengers' perceptions and transit agency performance measures. Section 3 describes the method adopted to address this research question, which is based on an international online Delphi survey. Section 4 presents the results of this survey, which are then discussed and used in Section 5 to define the key parameters of the service in urban and suburban contexts. These key parameters are then investigated for two lines located in the Italian provinces of Forlì-Cesena and Rimini, as described in Section 6. Finally, the study concludes with next steps and implications for policymakers and practitioners.

2 Evaluation of IPFT: theoretical framework

2.1 IPFT: main characteristics of the service

Transport solutions that can guarantee IPFT may potentially include all transport modes (air, sea, and land). Studies on long-haul journeys include different transport means, such as high-speed trains (e.g., Bollapragada et al., 2018; Huang et al., 2019), airplanes (e.g., Sourek and Seidlova, 2018; Vajdová et al., 2019), and extra-urban buses (Terzi and Ockels, 2009). In terms of territorial contexts, there are few studies on peripheral and rural areas.
Wosiyana (2005) analysed the use of light delivery vehicles in South Africa and its implications in terms of road accidents and related social costs. Namgung et al. (2010) assessed IPFT in rural areas of Japan and discussed the potential factors that affect user choice. Bruzzone et al. (2023) investigated the financial and operational conditions under which an integrated system can be used instead of conventional independent passenger transport and freight deliveries, to achieve the goal of reducing freight vehicle-km and, consequently, the associated environmental impact. For metropolitan and urban studies, solutions typical of high-density contexts have been considered, such as MaaS (Le Pira et al., 2021), urban trains (e.g., de Langhe, 2017; Fumasoli et al., 2015; Zhou and Zhang, 2019), subways (e.g., Cochrane et al., 2017; Visser, 2018), trams (e.g., Arvidsson and Browne, 2013; Strale, 2014), urban rapid buses (e.g., Fatnassi et al., 2015), and buses (e.g., Masson et al., 2017). A more detailed description of the contributions related to IPFT, along with the main methodological, geographical, and content-related aspects, may be found in Cavallaro and Nocera (2022). IPFT has the potential to be an effective service. The potential benefits have been demonstrated through a few real cases and simulated solutions (Spoor, 2015; Ghilas et al., 2016; van Duin et al., 2019). In urban and metropolitan areas, PT operators may achieve economic advantages by making their spare transport capacity available for the transportation of parcels and/or small goods. Public authorities may also benefit financially, because transit operations require lower subsidies. A potential alternative to such savings is the reinvestment of excess funds to increase service frequency. Such additional, and otherwise uneconomic, services increase the appeal of the service for users. Indirect effects can also include the better use of urban spaces.
In rural and peripheral areas, a reduction in overall operational costs could allow additional and more frequent transit or delivery services. This may relieve the isolation that is typical of rural areas and, in the long run, may contribute to a modal shift from private vehicles to PT. In both urban and rural contexts, a reduction in vehicle-kilometres also has a positive impact on fuel consumption and on transport externalities, such as air and noise pollution, accidents, and congestion (Larrodé and Muerza, 2019; Cavallaro and Nocera, 2023b). However, some difficulties in the development of a sound business model related to IPFT complicate its adoption (van Duin et al., 2019). In addition, the main trade-off derived from the introduction of such services (mostly in urban contexts, where stops and stations are closer) is the increased travel time needed to perform both passenger and freight operations (Ghilas et al., 2016). This aspect can lower the appeal of PT for users and needs to be counterbalanced by improving other characteristics of the service (e.g., increased frequency, reduced PT fares, and increased service quality). IPFT requires numerous initial decisions and adaptations to the existing framework before it becomes operational. First, a significant change in the regulatory and legislative systems currently available at all territorial levels (from continental to local) is necessary. Passenger and freight transport are regulated as two separate entities, and adaptations to the existing legislative framework are required for the correct implementation of the service (Jansen, 2014). Both public and private investments are necessary to adapt the infrastructure and vehicular equipment. For infrastructure, a consolidation facility has to be integrated with the PT terminal, and pick-up and delivery locations must be made available at selected transit hubs.
Regarding vehicular equipment, the purchase of new and low-impact vehicles to cover the freight component of the FLM is highly recommended, although not essential. Alternatively, the existing fleet should be adapted to allow easy, quick (especially in loading/unloading operations), reliable, and safe freight transport. At least in the preliminary phase, funding by public authorities and the implementation of pilot initiatives are essential for a correct development strategy. Small-scale applications in a restricted sector of an urban agglomeration or in a limited-size rural area may be useful for testing the efficiency of the proposed models. The conceptual model behind this initiative does not imply a radical change in the entire transport system. Rather, it includes tactical and short-term solutions that may facilitate future changes in a broader context. Ultimately, all the aspects mentioned above affect the perceived quality of the service for users, who can determine its success or failure. In this sense, IPFT also implies a variation in transport performance compared with passenger-only and freight-only services. Hence, the evaluation of such variations is of utmost importance in determining the potential of the service.

2.2 Evaluation of PT and IPFT services

Evaluating the performance of an IPFT service is as relevant as evaluating a PT service. In both cases, the service is appreciated by users, retains them, and attracts new ones only if its performance is satisfactory. Yet, the scientific literature on IPFT service quality evaluation is not consolidated. This is due to several reasons, such as the novelty of the IPFT research branch, the limited number of real cases of IPFT, and the variety of factors to be evaluated (Hu et al., 2020b). Conversely, PT services are characterised by well-consolidated attributes and requirements.
In this case, there is an extensive literature on the factors characterising the service, the values that these factors should assume, and the methods for evaluating performance. More specifically, service performance can refer to two main perspectives (CEN, 2002): customers (i.e., potential users of PT) and service providers (operators and transit agencies). European Standard EN 13816 introduced the evaluation of service performance on the basis of a quality loop consisting of four distinct components, namely, expected and perceived service quality from the customers' perspective, and targeted and delivered service quality from the point of view of service providers. Expected service quality refers to the level of quality that is required by the customer, whereas perceived quality is the level of quality actually experienced. In turn, targeted service quality is the level of quality that the service provider aims to provide to customers, based on the level of quality achieved daily (i.e., delivered service quality). A widely adopted measurement of PT services is based on the customers' perspective. This perspective has been studied more extensively in the transport literature than the service providers' perspective, although both are crucial to evaluate the performance of a transit service (Eboli and Mazzulla, 2012; de Oña et al., 2016). Some techniques were developed in marketing research, such as SERVQUAL (Parasuraman et al., 1985), the customer satisfaction index (Hill, 2003), and importance-performance analysis (Martilla and James, 1977). Other techniques that may be adopted are statistical methods, such as factor analysis and regression models, or more advanced forms, such as structural equation modelling (e.g., de Oña et al., 2013; Allen et al., 2019) or the classification and regression tree approach (e.g., de Oña et al., 2015; Bellizzi et al., 2022).
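Among the techniques listed above, importance-performance analysis (Martilla and James, 1977) is the easiest to sketch: each attribute is placed in one of four quadrants by comparing its mean importance and mean performance ratings against the grand means. A minimal illustration follows; the ratings are invented for demonstration and are not data from this study.

```python
def ipa_quadrants(attrs):
    """attrs: {name: (importance, performance)} on a common rating scale.
    Classify each attribute relative to the grand mean importance and
    performance, following the classic four-quadrant IPA scheme."""
    imp_mean = sum(i for i, _ in attrs.values()) / len(attrs)
    perf_mean = sum(p for _, p in attrs.values()) / len(attrs)
    labels = {}
    for name, (imp, perf) in attrs.items():
        if imp >= imp_mean and perf < perf_mean:
            labels[name] = "concentrate here"       # important but underperforming
        elif imp >= imp_mean and perf >= perf_mean:
            labels[name] = "keep up the good work"  # important and performing well
        elif imp < imp_mean and perf < perf_mean:
            labels[name] = "low priority"           # unimportant, underperforming
        else:
            labels[name] = "possible overkill"      # unimportant but overperforming
    return labels

# Invented example ratings on a 1-10 scale: (importance, performance)
ratings = {
    "punctuality": (9, 5),
    "cleanliness": (6, 8),
    "fare": (8, 7),
    "information": (5, 4),
}
print(ipa_quadrants(ratings))
```

With these invented numbers, punctuality falls in the "concentrate here" quadrant (high importance, below-average performance), which is exactly the kind of attribute a transit agency would prioritise.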
Finally, some models adopt data collected from experiments based on SP techniques (e.g., Hensher and Prioni, 2002; dell'Olio et al., 2018; Eboli and Mazzulla, 2008, 2010; Bellizzi et al., 2020). The analysis of the performance guaranteed by IPFT remains open to debate. A number of authors have addressed this topic, referring to urban (Strale, 2014; Leijenhorst, 2014; Sampaio et al., 2018) and rural case studies (Jansen, 2014; Bakker, 2015; van Duin et al., 2019; Cavallaro and Nocera, 2023a). A limited number of studies have adopted a qualitative approach to IPFT (e.g., through Delphi and SP techniques; see Cochrane et al., 2017 and Serafini et al., 2018), often supported by conceptual modelling (Kiba-Janiak et al., 2021). Concerning IPFT services, the relevant issue is not to evaluate single service alternatives but rather to understand the minimum requirements or acceptable ranges of outcomes. This multi-stakeholder, multi-dimensional design problem requires the definition of a solution space that guarantees acceptability, or at least a range of acceptable options. Many authors have stressed the importance of stakeholder involvement in design to enable solutions for the FLM (e.g., Macharis, 2004; Quak and De Koster, 2009; Anand et al., 2014; van Duin et al., 2019; Mangano et al., 2021). This work contributes to solving the FLM issue by defining the requirements for the characteristics of the newly proposed IPFT service. Indeed, to the best of our knowledge, there are no studies focusing on the multi-stakeholder design requirements of this service.

3 Method: international Delphi survey on IPFT

To investigate the opinion of international experts concerning the development of integrated passenger-freight transport as a solution to the first-last mile problem in urban and suburban/rural areas, we designed and conducted a survey (presented in Appendix 1). We adopted the computer-assisted web interviewing (CAWI) method for data collection.
The CAWI method has numerous advantages, such as the ability to contact a large number of people, the possibility of saving time in collecting data, low costs, the possibility of centralised control, and continuous monitoring of the investigation phase. In addition, interviewees' inhibitions in answering some questions are lower than in face-to-face surveys, reducing distortions in the collected data. Conversely, the CAWI method has several disadvantages linked to the risk of focusing on web page design rather than the actual survey, a high non-response rate, and the accuracy of the data collected, which can be unsatisfactory if the questionnaire includes complex questions (Loosveldt and Sonck, 2008; Eboli and Mazzulla, 2011). We tried to mitigate these problems by proposing a questionnaire designed to be as simple as possible (see Section 3.1).

3.1 Content of the survey

For the purposes of our research, two types of information are needed: the maximum variation that IPFT can register to be considered a valid alternative to independent freight distribution and PT services, and the agreement or disagreement with specific statements regarding a possible variation derived from the adoption of IPFT. To this end, reference urban and suburban/rural services were introduced and used as the basis for the evaluation (see Table 1). At the urban level, the main characteristics of the line and vehicles are provided according to a typical urban bus service for a medium-sized European city (250,000 inhabitants). The line is 15 km long and includes 45 stops (including the two termini), with an average distance of 400 m between them. The travel time to cover the entire distance is 50 min (commercial speed is 18 km/h). The daily service runs from 06:00 to 22:00, and its frequency varies according to the time of day. In peak hours, six rides per hour are considered, which drops to four in off-peak hours. The fare for this service is 1.50 €.
The vehicle is a 12-m-long bus, powered by compressed natural gas, that can transport up to 90 passengers (26 seated and 64 standing). In addition to the passenger component, IPFT implies the inclusion of selected goods. Considering the distinction proposed by WEF (2020), based on the size and type of goods, only 'parcel size and deferred deliveries' and 'parcel size and time-definite deliveries' are considered suitable for the service. At the suburban/rural level, the main characteristics of the ideal line and vehicle are based on a bus service connecting a medium-sized European city (see above) with a small surrounding municipality (50,000 inhabitants). The line is 30 km long and includes 20 bus stops (including termini), with an average distance between bus stops equal to 1500 m. The travel time to cover this distance is 60 min (commercial speed is 30 km/h). The daily service runs from 06:00 to 20:00, and its frequency is hourly for the entire service duration. The fare is 3.50 €. The vehicle is a 12-m-long suburban bus powered by diesel (Euro 6) that can transport up to 85 passengers (45 seated and 40 standing). Concerning the freight component, the same typologies as those in the urban service are considered suitable: 'parcel size and deferred deliveries' as well as 'parcel size and time-definite deliveries'. The object of evaluation is a series of service attributes that characterise the quality of the proposed IPFT. Starting from the literature presented in Section 2.2, 11 attributes were selected: the number of bus stops, distance between bus stops, service frequency for the passenger and freight components, information about time, time in vehicle, punctuality, staff availability, fare, comfort in vehicle, cleanliness, security for the passenger and freight components, and environment.
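As a quick consistency check, the commercial speeds stated for the two reference services follow directly from line length and end-to-end travel time. The sketch below encodes the reference lines of Table 1 in a small container; the class and field names are our own, not part of the survey.

```python
from dataclasses import dataclass

@dataclass
class ReferenceLine:
    """Hypothetical container for the reference services described in Table 1."""
    name: str
    length_km: float        # total line length
    stops: int              # number of stops, including the two termini
    travel_time_min: float  # end-to-end travel time
    fare_eur: float

    @property
    def commercial_speed_kmh(self) -> float:
        # commercial speed = line length / end-to-end travel time
        return self.length_km / (self.travel_time_min / 60.0)

urban = ReferenceLine("urban", 15.0, 45, 50.0, 1.50)
suburban = ReferenceLine("suburban/rural", 30.0, 20, 60.0, 3.50)

print(urban.commercial_speed_kmh)     # 18.0 km/h, as stated in the text
print(suburban.commercial_speed_kmh)  # 30.0 km/h, as stated in the text
```

A container of this kind is also a convenient baseline against which the expert-elicited variations of Section 4 can later be expressed.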
For each attribute, we wanted to understand the expectations of respondents about the IPFT in terms of variation, maximum increase/decrease, or the level of agreement/disagreement with an expected change. In the following tables, the questions are grouped according to the type of information requested for the attribute. Both territorial contexts (urban and suburban/rural) were objects of evaluation. The first two attributes (Table 2) refer to bus stops and the frequency of runs. In the former case, the expected variation must be evaluated in terms of the maximum increase or decrease in the number of stops and in the distance between bus stops. In the latter case (frequency), the variations in the attributes offered to bus passengers and to freight distribution are considered separately. This differentiation was chosen because we expected different frequency variations for the two types of services. Service frequency for bus passengers is affected by the need to deliver goods; at the same time, the number of deliveries per day could be increased owing to the bus stops for the access/egress of passengers. An additional differentiation was made for passenger service frequency, distinguishing between the frequency during peak hours and that during off-peak hours. Table 3 presents the four attributes evaluated in terms of the maximum variation that can be expected while ensuring that the new IPFT service remains competitive. In contrast to the attributes reported in Table 2, in this case the variation compared with the current situation can be defined unambiguously as either an increase or a decrease. The four attributes refer to the maximum increase in on-board travel time and delay (as a measure of punctuality), and to the maximum decrease in ticket prices and in the surface available to passengers in the vehicle (as a measure of comfort).
Finally, for the five attributes related to the quality of the service (information, staff availability, cleanliness, and security) and to environmental benefits, we presented an expected situation to the respondents and asked for their agreement or disagreement with a specific statement (Table 4). Indeed, IPFT implies the coexistence of passengers and freight in the same vehicle, and this could cause some changes in the service attributes. For instance, IPFT is expected to provide more reliable information to passengers (e.g., real-time information), owing to the presence of an information system for freight traceability. In addition, the new transportation system requires personnel to manage freight delivery; thus, passengers could benefit from the permanent presence of personnel on board. However, the coexistence of passengers and freight in the same vehicle could have negative effects on the cleanliness of the bus. Concerning security, two aspects need to be considered: security for passengers and security for freight. Passengers could benefit in terms of personal security from the presence of personnel on board managing the freight delivery (loading/unloading, and assignment to parcel lockers at bus stops for client pick-up). However, the presence of passengers (even in separated areas) could increase the risk of theft or damage to freight. Finally, IPFT is expected to have a positive effect on transport externalities (including congestion, accidents, local and global air pollution, and noise pollution). For each service attribute, respondents could provide open comments, especially for questions belonging to the last group (Table 4) in case of disagreement with the initial statement. The last question allowed for general comments on the entire questionnaire.

3.2 Selection of experts and administration of the questionnaire

The questionnaire was designed in January 2022.
Subsequently, it was submitted for preliminary validation to three scholars (who were not included in the panel of respondents), asking for their contribution both on the content and on the clarity of the requested information. Based on their suggestions and recommendations, the questionnaire was slightly modified and finalised (see Appendix 1). After the online preparation of the questionnaire, potential respondents were contacted via e-mail between February and March 2022, presenting the aim of the research and asking for their contribution. Respondents were prompted several times to answer the questionnaire, as permitted by the CAWI method. As the topic is still little covered in the scientific literature as well as in the real world (see Section 2), the panel of respondents was identified according to the direct knowledge of the authors. In such cases, the panel size is not essential. According to Mullen (2003), it may be as small as three members or as large as 80, whereas Turoff (2002) recommends panels of between 10 and 50. The most important task is to select people who are knowledgeable in the field of study (Grisham, 2009). The selection criterion was the participation or direct involvement of the respondents in previous research, case studies, and pilot actions related to IPFT for the FLM. For practical purposes, we clustered the potential respondents (58) into seven categories (with the number of respondents listed in parentheses): PT operators (13), freight operators (5), public administration (4), consultants (9), scholars, including both universities and research centres (20), firms that have tested the service (5), and journalists (2) (see Table 5). PT users and clients of the freight delivery service are not the target of this investigation. Their opinion is more useful in a subsequent phase of this experiment, when an SP survey based on the results of the experts could be administered to them.
Overall, 17 of the 58 invited contacts answered (29.3%). More than 60% of the answers were provided by scholars, who primarily worked in Europe (France, Germany, Ukraine, the Netherlands, Spain, and Italy). Only one scholar was not from Europe (China). The transport agencies, freight operators, and public administrations that answered the survey (5) operated in the same European countries mentioned above, as well as in Slovenia. If we compare the response rate with that of other general web surveys, the value is quite good (see, for example, Braunsberger et al., 2007; Bayart and Bonnel, 2008; Heerwegh and Loosveldt, 2008). In addition, compared with web surveys addressed to a panel of experts, we obtained a good response rate, as the range reported in the literature is from 20% to 25% (Karakikes and Nathanail, 2020).

4 Results of the Delphi survey

In this section, the results of the Delphi survey are presented. As the Delphi method traditionally involves multiple rounds of a survey to allow participants to receive feedback on the panel's responses, after completing the first round and summarising the results, we launched a second round, in which participants could see the opinions expressed by the other experts in the first round. To simplify the process, the second round was proposed only to those respondents who had expressed incongruent values compared with the majority of the experts. More specifically, we contacted only two participants, showing them the opinions expressed by the other respondents during the first round. This procedure avoided the inclusion of out-of-range values, as the respondents contacted in the second round decided to modify their values and align them with those of the other respondents. The process was then stopped. In the following, only the results of the last round are reported. Following the subdivision of questions presented in Section 3.1, we divided the results into three groups.
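The survey does not formalise how "incongruent" first-round answers were identified. One plausible formalisation (our assumption, not the authors' actual procedure) is a robust outlier test based on the median absolute deviation (MAD), which flags answers far from the panel consensus:

```python
import statistics

def flag_incongruent(values, k=3.0):
    """Flag answers far from the panel consensus using a robust
    modified z-score test: |x - median| > k * MAD.
    This is only one possible formalisation of 'incongruent values'."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread in the panel, nothing to flag
    return [v for v in values if abs(v - med) > k * mad]

# First-round maximum travel-time increases in minutes (urban service);
# illustrative numbers only, except the 55-min answer mentioned in Section 4.
answers = [5, 10, 15, 20, 25, 55]
print(flag_incongruent(answers))  # [55] -> this respondent is re-contacted
```

With these numbers, only the 55-min answer is flagged and its respondent would be invited to the second round; the remaining answers define the retained 5-25 min range.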
The results for the service attributes described in Table 2, for which a general variation was requested, are summarised in Table 6. For these questions, we expected discordant values owing to the heterogeneity of the sample, the knowledge of the respondents, and the interests they represent (see Table 5). For instance, scholars and consultants are expected to have a more comprehensive vision, which considers the integrated system as a whole (PT and freight service, customers and users). Conversely, PT and freight operators represent a sectorial perspective, which aims at maximising the efficiency of a specific component of the service. The public administration, in turn, aims at guaranteeing a minimum level of service that minimises service costs and the related negative externalities (air and noise pollution, congestion, and accidents). The results confirm our initial expectations. More specifically, for 6 out of 16 experts (one expert did not answer), the delivery of freight by bus would not modify the total number of stops or their distance, while two experts stated that a reduction in stops and, consequently, an increase in the distance between them can be expected. According to some of the respondents, the urban service as presented in Table 1 is already dense enough: parcel lockers distributed at identified stops guarantee good territorial coverage for freight clients, and the system has to be optimised in terms of travel times for PT users. On the other hand, the rural service is likely to require some extra stops, given the higher territorial dispersion. The remaining nine experts provided discordant opinions in terms of the number of stops: five of them expressed a positive value (hypothesising an increase in stops) from a minimum of five to a maximum of 15, and three experts assigned a negative value, from −15 to −5. For suburban services, we registered similar (but not identical) values to those of urban services.
The variation in the number of stops was a consequence of the statements regarding the average distance between stops. The results concerning service frequency are also presented in Table 6. For the passenger component, half of the experts believed that there would be no difference in service frequency (in both peak and off-peak hours, and for both urban and suburban/rural services). Only one expert believed that IPFT would decrease the urban service frequency by 2–3 rides/hour, depending on whether the peak or off-peak period is considered. The remaining half of the experts stated an increase in frequency from a minimum of 1 ride/hour to a maximum of 4 for urban services and 2 for suburban services. The results are quite similar for service frequency in terms of deliveries/day (thus referring to the freight component), even if the number of experts hypothesising no effects is slightly greater. The results for the service attributes described in Table 3, for which the maximum expected value was requested, are summarised in Table 7. For the attribute related to on-board travel time, a maximum increase in minutes was requested. Two experts did not provide any value: for one of them, there would be no increase in travel time, meaning that the service could be performed with the same scheduling as a passenger-only service. Considering the rest of the panel, one expert expressed, in the first round of the Delphi survey, an out-of-range value equal to 55 min for urban services and 60 min for suburban services; these values were modified in the second round. Ultimately, the maximum acceptable increase in travel time varies from 5 min to 25 min for the urban service and up to 30 min for the suburban service. Concerning the attribute linked to punctuality, a maximum acceptable delay in minutes was requested.
In this case, five experts did not provide a value or expressed a value equal to zero, mainly because they believed that, if delivery times are known in advance, the schedule can integrate this information; thus, the time required to perform both the passenger and freight components does not cause delays. This aspect is discussed in the next section. Considering the rest of the panel, values from a minimum of 2 min to a maximum of 20 min for urban services and 30 min for suburban services were registered. As for fares, a maximum decrease in € was requested. In this case, as many as eight experts either did not provide a value or expressed a value equal to zero. Considering the rest of the panel, for the urban service, values from a minimum of 0.1 € to a maximum of 0.7 € were registered, which is almost half of those for the suburban service, ranging from a minimum of 0.3 € to a maximum of 1.5 €. Finally, comfort in the vehicle was analysed in terms of the maximum decrease in the area available to passengers (%) when the freight component is integrated on board the vehicles. In this case, only four experts provided no value. Considering the rest of the panel, values from a minimum of 5% to a maximum of 30% were registered for urban services, and from a minimum of 5% to a maximum of 50% for suburban services. Table 8 shows the responses of the group to questions requesting agreement or disagreement with a specific statement concerning a certain service attribute. In general terms, the results are quite similar for the two types of services (urban vs. suburban/rural). The experts agreed with the hypothesised effects of the new IPFT on the analysed service aspects, except for the aspect linked to cleanliness. In this case, the experts generally disagreed with our initial hypothesis of negative effects on bus cleanliness owing to the presence of freight. For more than three-quarters of the panel, buses would not be less clean.
Some of the respondents felt that passengers would continue to be the main cause of dirt, as they "leave behind more rubbish and contamination than the carriage of goods"; hence, no substantial modification to the previous layout may be hypothesised. On the other hand, some said that the workers storing the parcels may keep surfaces cleaner, thus contributing to an improvement in cleanliness. Concerning the other aspects, respondents generally agreed with our hypotheses, especially regarding information about time, staff availability, and the environment. A high percentage of the panel (from 76% to 82%, for both urban and suburban services) believed that bus passengers could benefit from more reliable information, owing to the introduction of traceability systems for the goods, and from the permanent presence of personnel on board. Moreover, the integrated system could represent a benefit in terms of the reduction in externalities for the entire community. Finally, for security, even with a minor incidence, a major part of the panel (from 60% to 70%) agreed with the hypothesised statements, believing that passengers would benefit from a permanent presence of personnel on board, and that goods are subject to risks of theft and damage because of the presence of passengers on board. However, for some respondents, the presence of staff on board was not essential: a video surveillance system would be sufficient to allow the bus driver to see the interior from behind the wheel.

5 Discussion of results and definition of service attributes

The results presented in Section 4 are not univocal: in some cases they confirm our initial hypotheses, while in others they prevent us from drawing final conclusions. For instance, the definition of stops under the new IPFT scheme is worth mentioning.
The sample was divided among experts who thought that there would be no changes in the number of stops, those maintaining that there would be an increase, and those considering a possible decrease. This suggests that the number of stops and their average distance should be determined by examining the specific characteristics of different cases. The adoption of optimisation algorithms for routing and the identification of stop locations is a potential solution to this problem (Medina et al., 2013). Yet, the values found in our survey could be used as an initial constraint for designing the optimal solution. A similar approach should be adopted for service frequency, for which either an increase or no variation may be foreseen. To evaluate this aspect, both a variation in the attribute based on the average value and the no-variation case should be considered, given that some experts stated that there would be no changes after the introduction of the new system. The other attributes were less contradictory. According to the results reported in Table 8, the new system would be more reliable in terms of information (with the permanent availability of personnel on board), environmentally friendly, and more secure for passengers but less secure for freight. Therefore, these conditions must be considered. Finally, the hypothesis regarding the cleanliness of vehicles was not confirmed by the sample of experts; hence, we did not consider this attribute. These results may also be adopted for the design of hypothetical IPFT lines in both urban and suburban contexts. Only the attributes for which we obtained more reliable results should be considered, leaving other aspects to be locally engineered and planned. For instance, the variation in cleanliness compared with the existing condition should be left to discussion with local stakeholders, for the aforementioned reasons.
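The way panel answers are turned into design levels in the next paragraphs can be summarised programmatically. The two rules below (a full min/max span, or a cautionary zero-to-mean span) mirror the two aggregation choices described in Section 5; the function names and the sample data are ours, not the actual survey data.

```python
def minmax_range(values):
    """Design range spanning the full panel response,
    used e.g. for fare and comfort variations."""
    vals = [v for v in values if v is not None and v != 0]
    return (min(vals), max(vals))

def zero_to_mean_range(values):
    """Cautionary range from zero to the panel mean, used when the
    registered maxima are considered too high (e.g. time in vehicle)."""
    vals = [v for v in values if v is not None]
    return (0.0, sum(vals) / len(vals))

# Illustrative panel answers (not the actual survey data)
fare_decreases_eur = [0.1, 0.3, 0.5, 0.7]
travel_time_increases_min = [5, 10, 20, 25]

print(minmax_range(fare_decreases_eur))            # (0.1, 0.7)
print(zero_to_mean_range(travel_time_increases_min))  # (0.0, 15.0)
```

The choice between the two rules is a design judgement: the min/max rule preserves the whole solution space elicited from the panel, while the zero-to-mean rule deliberately discards the upper tail when the maxima would make the service unattractive.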
We considered only the distance between stops, excluding the number of stops, since these two indicators seem to express the same aspect; however, the former is less dependent on the length of the line and hence is preferred. Concerning service frequency, we selected the frequency for the passengers only (the main target of the service). The provision of freight delivery can be conceived as an initial constraint that must be satisfied to make the service operational. With regard to the time window, peak hours could be considered instead of off-peak hours; in this way, the evaluation is performed on the service in its most challenging configuration. We could also omit the environmental attribute, which in the first phase could be considered less relevant for the users as a service choice, owing to the difficulties in perceiving it correctly. Finally, we could disregard the attributes concerning security, for which we obtained less univocal results. As a result, the design of IPFT lines can be based on the following eight service attributes: average distance between consecutive bus stops, service frequency, information about time, time in vehicle, punctuality, staff availability, fare, and comfort in the vehicle. Table 9 presents the characteristics of the urban and suburban IPFT services. In both cases, two levels of variation were considered; we named them S1U and S2U for urban services and S1S and S2S for suburban services. Regarding the variation in the distance between stops, we considered two levels, one positive and one negative, since the opinions of the experts were discordant. The minimum and maximum values were selected (−200 m and +600 m for the urban service and −500 m and +500 m for the suburban service). Regarding service frequency, fare, and comfort in vehicles, we considered a variation from the minimum to the maximum value. With respect to time in the vehicle and punctuality, the maximum values registered were quite high. 
For this reason, we opted for a range from zero to the mean values (18.8 min and 11.5 min for urban and suburban/rural services, respectively), which seems a more cautious approach. In particular, referring to punctuality, some respondents found that the integration of passenger and freight transport would not cause any delay to the scheduling of the line if the service is designed properly. This condition would be optimal, but it could conflict with the minimization of in-vehicle travel times and does not consider the differences related to the mix of freight and passenger operations at stops. Hence, we considered a minimum delay of 2 min as the lowest value. At the same time, this suggests that the question was not fully understood by respondents and could have been expressed in clearer terms. Finally, the attributes related to information about time and staff availability could simply be considered to vary on two levels, expressing a condition similar to the passenger-only service in S1U and S1S, and a condition improved by IPFT in S2U and S2S. The selected attributes shown in Table 9 were those that the surveyed experts agreed upon the most. Therefore, they could be considered as the attributes characterising the quality level of the new IPFT scheme and the key points to start a process of IPFT design. The omitted attributes can be considered more dependent on the specific case study and should be evaluated in the specific local context adopted for the analysis. The economic aspects related to the implementation of the service are not considered here; they need ad-hoc evaluations to verify their effectiveness. 6 IPFT in practice: urban and suburban lines in Emilia Romagna region The service attributes for the evaluation of IPFT performance obtained through our survey can be applied to concrete cases that operate in real-life conditions. 
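Before turning to the case studies, the two-level attribute scheme of Table 9 can be illustrated programmatically. The following Python sketch is not the authors' design: the stop-distance and travel-time levels are the urban values reported above, while the fare levels and the full-factorial pairing are purely hypothetical.

```python
from itertools import product

# Two-level variations for three urban IPFT attributes.
# The first two come from the survey results reported above;
# fare_decrease_eur is a hypothetical placeholder level.
attributes = {
    "stop_distance_variation_m": (-200, 600),  # urban extremes from the survey
    "travel_time_increase_min": (0.0, 18.8),   # zero to the urban mean value
    "fare_decrease_eur": (0.0, 0.5),           # illustrative only
}

# Every combination of levels yields one candidate service profile.
profiles = [dict(zip(attributes, levels)) for levels in product(*attributes.values())]
print(len(profiles))  # → 8 candidate profiles (2^3)
```

In practice, the paper pairs the levels into only two scenarios per context (S1U/S2U and S1S/S2S) rather than enumerating a full factorial; the sketch merely shows how the candidate space grows with the number of two-level attributes.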
In this section, we present two bus lines (one operating at the urban level and one at the suburban level) with characteristics similar to those used for the Delphi survey ( Table 1 ). These lines were revised according to the attributes derived from the survey results to verify their adaptability to the IPFT scheme. Both lines operate in the area managed by the Agenzia Mobilità Romagnola (Romagna Mobility Agency, AMR), which is the agency that develops and coordinates the PT service in the Provinces of Ravenna, Forlì-Cesena, and Rimini in the Emilia Romagna region (Italy). 6.1 Urban and suburban PT The urban line "FO04" is operational in the city of Forlì from the terminus of Ronco to Cava and vice versa. Here, we analysed the Ronco-Cava direction. The line is approximately 11.5 km long. It consists of 38 stops, including the termini (19 of them are equipped with a shelter), with an average distance of 300 m between stops. Suburban line "160" goes from the railway station of Rimini to the municipality of Novafeltria and vice versa (in this case, we analysed only the Rimini-Novafeltria direction). The total length of the line is approximately 34.5 km, with 50 stops including the termini and an average distance of 700 m between two consecutive stops. In both cases, the service is guaranteed daily from 06:00–21:00, with a frequency of six rides/h in peak hours for the urban service and one ride/h for the suburban service. The scheduled travel times of the two lines were 33 and 61 min, respectively. A delay of more than 3 min was registered in approximately 22% (urban) and 23% (suburban) of the runs. Buses are equipped with an automatic vehicle monitoring (AVM) system that tracks the real-time status of vehicles in terms of instantaneous position, minutes to reach a given stop, and occupancy of the vehicles. 
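As a quick consistency check, the reported average stop spacings follow directly from line length and the number of inter-stop segments; a minimal sketch using the figures above:

```python
def avg_stop_spacing(line_length_m: float, n_stops: int) -> float:
    """Average distance between consecutive stops (termini included)."""
    return line_length_m / (n_stops - 1)

# Urban line FO04: 11.5 km, 38 stops -> matches the reported ~300 m
print(round(avg_stop_spacing(11_500, 38)))  # → 311
# Suburban line 160: 34.5 km, 50 stops -> matches the reported ~700 m
print(round(avg_stop_spacing(34_500, 50)))  # → 704
```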
The service is available to customers at https://www.startromagna.it/orari-in-tempo-reale/ and is customised for web pages (but not for mobile phones), allowing one to search only by the name of the stops. It is also used for operational purposes by the service provider, including the real-time regulation of the service by control rooms and the automatic localisation of vehicles in case of an emergency. Regarding cleanliness, an external company operates in accordance with three standards. Ordinary internal cleaning and sanitation, which includes walking surfaces, upholstery, windows and panels, hat boxes, and other compartments, is performed daily. Thorough cleaning of both city and suburban buses is performed monthly. Thorough cleaning and disinfestation of intercity buses is performed every two months. Finally, concerning safety and security, no vandalism was reported in vehicles, which can thus be perceived as safe by passengers. The characteristics of the two lines, shown in Figs. 1 and 2 , are summarised in Table 10 . 6.2 Urban and suburban IPFT According to the characteristics of the service presented in Table 9 and considering the layout and technical characteristics of the vehicles (in terms of passenger and goods capacity), the PT service can be reconceived in terms of IPFT by varying the attributes within the ranges indicated by the experts in the survey. Theoretically, several alternative solutions may be provided by combining the results for the single attributes, as indicated by the stakeholders. In this case, we present one of the possible alternative outcomes for illustration purposes. We use the extreme values indicated by the stakeholders as the maximum reference to redefine the operation of the lines, adapting them according to the initial configuration of the service and the morphological characteristics of the area. 
The service has been modified in terms of the average distance between stops (and consequently the number of stops), number of rides per hour, travel times, expected maximum delay (due to loading/unloading activities), decrease in fare, decrease in the area for passengers when parcels are transported, and the information service for users. No change in cleanliness is foreseen, whereas additional staff are available only for rides that transport both parcels and passengers. Table 11 presents the characteristics of the proposed scheme for both urban and suburban services. For the urban passenger service, the number of stops for passengers decreases from 38 to 32 ( Fig. 3 ). To redefine the schedule, we maintain the constraint that the distance between two consecutive stops must not exceed 600 m. The average distance increases to approximately 350 m, and the scheduled travel time increases to 38 min (+5 min compared to the existing service) to ensure the delivery of the parcels into the parcel lockers. An example of the adjustment of a single run (morning peak hour) is presented on the left side of Fig. 3 . The frequency increases from 6 to 8 rides/h in peak hours, according to the minimum increase proposed by the experts ( Table 9 ). No change in the service span (06:00–21:00) is foreseen. In terms of vehicular changes, buses that operate along this line must be flexible enough to adapt to different layouts. The passenger-only configuration allows the transportation of 95 people. With the IPFT scheme, hybrid passenger-parcel configurations are also possible. The maximum room for parcels is approximately 12 m³, with a corresponding maximum of 66 passengers. Parcels are stored at the back of the bus and can be loaded/unloaded using the back door. A separator can be utilised to avoid interactions between passengers and goods. Passengers can use the central and frontal doors to exit and enter the bus. Regarding the suburban passenger service ( Fig. 
4 ), the distances between stops fluctuate from 400 m (in more urban areas) to 1.5 km (higher than 2 km in one case) and depend on local morphological specificities. On average, the distance between stops is approximately 700 m. Under the IPFT scheme, the average distance and the number of stops remain unvaried. We plan an increase of 7 min in travel time (total travel time: 68 min), but we expect a higher variability in terms of delay (benchmark value: 60% of rides below 3 min) according to the number of parcels to deliver. An example of the adjustment of a single run (morning peak hour) is presented on the left side of Fig. 4 . The number of rides/h ranges from 1 to 2 in peak hours, whereas it does not vary in off-peak hours. Changes in vehicles are comparable to those in urban services, with the back door used for loading and unloading parcels. The only difference lies in the number of standing and seated users when the bus transports the maximum amount of goods, which decreases from 45 standing and 40 seated when no goods are transported to 32 and 26, respectively, at full parcel capacity. Regarding the freight components of both urban and suburban services, IPFT requires infrastructural adjustments to the existing layout. These adjustments include the provision of a warehouse (where parcels are collected, stored, and loaded onto buses) and parcel lockers for stops that have been identified as suitable for the delivery of goods (see Figs. 3 and 4 ). In the former case, the warehouse may coincide with the headquarters of the transport provider, which guarantees adequate space for loading/unloading activities and the possibility of serving all the buses. In the latter case, bus stops that host parcel lockers must also be equipped with a shelter to guarantee safe delivery in adverse weather conditions. 
They must be located and sized according to the potential catchment area and the quantity of goods expected to be ordered by the users of that stop. This last aspect is probably the most critical to address. However, thus far, knowledge of it among practitioners is limited, which could affect the sizing of the lockers, as well as the number of rides that operate under the IPFT scheme with both passengers and freight aboard. Ad-hoc surveys can provide a more detailed view of the phenomenon, with an approach similar to the surveys adopted to design and size the PT network. Regarding the location of the parcel lockers, the criterion of minimising the increase in travel time of a single ride was adopted. In the urban and suburban services, we assume that 26 and 34 lockers are available along the line (13 and 17 per direction, see Figs. 3 and 4 ), respectively. In this way, each parcel locker is less than 500 m (urban line) and 1000 m (suburban line) away from the consecutive one, distances that correspond to 5 min and 10 min on foot, respectively. In this case, the distribution for the urban service is more regular than that for the suburban service. In terms of personnel, additional staff for the rides that transport freight must also be planned to guarantee the proper delivery of parcels into lockers. As the cost of such additional human resources is not negligible, their tasks must be planned efficiently, considering integrative responsibilities (e.g. ticket sellers and inspectors). Furthermore, the scheduling of the lines that propose an integrated IPFT service must be addressed. As mentioned above, only a limited number of rides transport both passengers and goods, and correct planning guarantees the coverage of more lines with the same number of employees and an increase in service efficiency. The last two aspects related to the service include cleanliness and quality of information. 
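Returning to the parcel-locker spacing above, the quoted walking times follow once a walking speed of about 100 m/min (6 km/h) is assumed; the speed is our assumption here, not a figure given in the text.

```python
def walk_time_min(spacing_m: float, speed_m_per_min: float = 100.0) -> float:
    """Worst-case walk between two consecutive parcel lockers, in minutes."""
    return spacing_m / speed_m_per_min

print(walk_time_min(500))   # urban line: → 5.0 min
print(walk_time_min(1000))  # suburban line: → 10.0 min
```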
Following the experts’ recommendation, cleanliness remains unchanged under the new operational scheme, with daily cleaning at the end of the service. In contrast, the information for passengers and customers of the freight service must be revised. The real-time service currently available to PT passengers must be improved by making it more user-friendly and tailored to smartphones (which are the primary target, as this service is mostly useful for passengers who wait at the bus stop or are approaching it). At the same time, information for the customers of the freight service must be guaranteed, regarding the status of their parcels, the expected delivery, and the codes to unlock the lockers where the parcels are stored. 7 Conclusions The integration of passenger and freight transport into a unique service is a new and promising field of research; however, stakeholders and policymakers struggle to find adequate information about its design. It is well known that service quality depends on key service parameters. Nevertheless, the limits to which these parameters can be extended while still offering a quality integrated service have not yet been determined. In this paper, we provided a preliminary answer to this research question by administering a survey to relevant stakeholders with previous integration experience about the minimum attributes that the service (both at urban and suburban levels) should guarantee. The survey, described in Section 3 , considers a set of attributes that span from operational characteristics to the quality of the service, including environmental and economic aspects. The characteristics were derived from the combination of performance criteria provided by the scientific literature and referred to freight-only and passenger-only transport, merging the two transport modes into a unique and integrated evaluation scheme. 
The results obtained from the answers of the panel ( Section 4 ) are useful for understanding which attributes are expected to impact service operation. To this end, eight attributes were used as the basis for the definition of an experimental design of the service, which compares a traditional urban PT service with IPFT ( Section 5 ). These attributes were then applied to an urban bus line and a suburban bus line already operational in Emilia Romagna (Italy) to verify their operational suitability. With some adaptations in terms of stops, average travel time, staff, and infrastructural equipment, the service can be implemented as an alternative to the passenger-only solution currently in service. The main positive implication deriving from the adoption of IPFT is the cross-subsidization that the freight service can guarantee to PT. Indeed, part of the budget currently used for the freight service (and no longer needed) can be reallocated, resulting in a potential increase of the service frequency (or, alternatively, a decrease of the fare). This opens a variety of new scenarios, with impacts also on private mobility. These aspects should be managed by policymakers according to the context in which the service is introduced, considering the positions of the involved stakeholders. Indeed, besides the different interests among the actors who are supposed to collaborate for the success of this scheme ( Table 5 ), IPFT has several freight and delivery competitors. Apart from understanding client perspectives and preferences (in terms of timing and reliability), the relationship between them must be assessed and managed accurately to avoid conflicts among operators. Indeed, IPFT, as a part of the PT service, is likely to be subsidised by public authorities (at least in the first launching phase), leading to unbalanced competition with traditional freight distributors. 
The fact that only a limited portion of goods can be transported by IPFT (those belonging to the classes "parcel size and deferred deliveries" and "parcel size and time-definite deliveries") can be helpful in avoiding conflicts. Even if these aspects are not directly perceived by users, they should be considered during the design phase, as they directly affect the service. Hence, consensus on this solution should be built with direct stakeholder involvement in the decision-making process, by adopting appropriate models such as the combination of discrete choice models (DCM) with agent-based models ( Le Pira et al., 2017 ) or the multi-actor multi-criteria analysis ( Macharis et al., 2009 ). General remarks derived from the evaluations of the experts emphasize the importance of the financial aspects related to this type of service. According to one of the respondents, "the system is positive from any of the aspects that are analysed (economic, social, environmental), but the most critical thing is to get it right with the operational way of performing the service, which is economically and operationally viable." Financial aspects should be assessed in more detail to evaluate the effectiveness of the system from the perspectives of the PT and freight operators, as well as the public authority. Another economic aspect concerns the variation in fares. In our survey, we asked for the potential maximum decrease that could realistically be predicted, but tariffs are not set for single lines. The effect on the whole bus line system will be much lower and should be judged in the context of the entire transport system. Finally, safety and security issues should be analysed with a specific focus, which includes the design of the vehicles and the adaptations to the existing layout. The interactions between passenger and freight flows, as well as the operational aspects related to loading/unloading activities, also merit further examination. 
Ultimately, all such aspects are able to influence the modal split of freight transport, making distribution at the urban and regional levels more rational ( Libardo and Nocera, 2008 ). In the next phase of this research, a stated preference (SP) experiment may be administered to PT users to understand their opinions about the service, as well as their willingness to adopt it. In this sense, the information about the variation in ticket price could be considered as a form of compensation for the expected increase in travel times. Such an SP experiment could consist of a choice experiment between an alternative represented by the traditional bus system carrying only passengers and IPFT services characterised by different levels of variation of the service attributes presented in Section 4 . To evaluate this correctly, some of the suggestions highlighted by the experts in the open remarks at the end of the survey need to be considered. For instance, the bus service design should include more specific information, with a more detailed definition of freight demand (to understand the impact in terms of occupancy on vehicles), bus routes (also with visual aspects to understand the relationship with the context), and service provision. In particular, the management of the staff and the delivery process must be better defined by clarifying whether the staff on board these buses are always available, available only along specific routes, or available only on selected rides with both parcels and passengers. CRediT authorship contribution statement Federico Cavallaro, Laura Eboli, Gabriella Mazzulla, Silvio Nocera : Conceptualization, Methodology, Formal analysis, Investigation, Writing − original draft preparation, Revision. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. 
Acknowledgements We gratefully acknowledge the efforts of our survey respondents, who took valuable time to participate in this study. We also thank Mr. Marco Mazzocchi and Ms. Arianna Bichicchi of Agenzia Mobilità Romagnola for providing the data of the urban and suburban bus lines of Forlì and Rimini. Appendix 1 Survey on IPFT 1 Number of bus stops/distance between bus stops The need to deliver freight by public transport (PT) can modify the total number of stops and their locations. Consequently, their distances can also change. Please indicate the maximum increase and/or decrease in the number of stops that you expect from the introduction of such a service, as well as the maximum variation in the distance between stops. Indicate the increase with "+ ", the decrease with "-". Table URBAN SERVICE SUBURBAN/RURAL SERVICE Variation in the n° of stops Variation in the n° of stops Variation in the distance between stops Variation in the distance between stops Comments to this answer: 2a Service frequency for the passenger component The need to deliver freight by PT could influence service frequency either by increasing or reducing it. Please indicate the expected maximum frequency variation, expressed as the number of increased or decreased rides/h for passengers. Table URBAN SERVICE SUBURBAN/RURAL SERVICE Variation in the service frequency (peak hour) Variation in the service frequency (peak hour) Variation in the service frequency (non-peak hour) Variation in the service frequency (non-peak hour) Comments to this answer: 2b Service frequency for the freight component The delivery of freight by PT influences the frequency of freight services. Assuming that two deliveries/day for urban areas and one delivery/day in suburban/rural areas are planned in the "business as usual" scenario, please indicate the expected maximum variation, expressed as the number of increased deliveries/day for freight. Please note that not all vehicles are expected to transport freight. 
Table URBAN SERVICE SUBURBAN/RURAL SERVICE Variation in the freight service (potential deliveries/day) Variation in the freight service (potential deliveries/day) Comments to this answer: 3 Information about time An integrated transportation system is expected to provide more reliable information to passengers (e.g. real-time information) owing to the presence of an information system for freight traceability. Please express your agreement or disagreement with this statement. URBAN SERVICE . 1. Agree 2. Disagree SUBURBAN/RURAL SERVICE 3. Agree 4. Disagree If disagree, please elaborate on the answer: 4 Time in vehicle The need to deliver freight to appropriate parcel lockers can decrease commercial speed and increase travel time. Please indicate the expected maximum travel time increase. Table URBAN SERVICE SUBURBAN/RURAL SERVICE Maximum travel time increase (min) Maximum travel time increase (min) Comments to this answer: 5 Punctuality In addition to the previous answer, delivering freight could influence the punctuality of the runs, increasing the waiting time at bus stops and delaying arrival at the destination. Please indicate the maximum minutes of delay that you expect. Table URBAN SERVICE SUBURBAN/RURAL SERVICE Maximum delay at destination (min) Maximum delay at destination (min) Comments to this answer: 6 Staff availability The integrated transportation system requires the presence of personnel managing freight delivery; thus, passengers can benefit from the permanent presence of personnel on board. Please express your agreement or disagreement with this statement. URBAN SERVICE . 1. Agree 2. Disagree SUBURBAN/RURAL SERVICE 3. Agree 4. Disagree If disagree, please comment on the answer: 7 Fare The integrated transportation system could guarantee a decrease in fares owing to an increase in the efficiency of the service. Please indicate the maximum decrease that can be realistically predicted. 
Table URBAN SERVICE SUBURBAN/RURAL SERVICE Maximum decrease in fare (€) Maximum decrease in fare (€) Comments to this answer: 8 Comfort in vehicle The need to reserve a part of the bus for freight has an influence on the availability of space (seats) for passengers. Please indicate the maximum decrease in the surface area available to passengers (as %) that you expect. Table URBAN SERVICE SUBURBAN/RURAL SERVICE Maximum decrease in area for passengers (%) Maximum decrease in area for passengers (%) Comments to this answer: 9 Cleanliness The integrated transportation system implies the coexistence of both passengers and freight in the same vehicle. In turn, this has a negative effect on cleanliness. Please express your agreement or disagreement with this statement. URBAN SERVICE . 1. Agree 2. Disagree SUBURBAN/RURAL SERVICE 3. Agree 4. Disagree If disagree, please elaborate on the answer: 10a Security for the passengers’ component The integrated transportation system requires the presence of personnel managing freight delivery (loading/unloading, assignment to parcel lockers at bus stops for the pick-up by clients). Passengers can benefit from the permanent presence of personnel on board. Please express your agreement or disagreement with this statement. URBAN SERVICE . 1. Agree 2. Disagree SUBURBAN/RURAL SERVICE 3. Agree 4. Disagree If disagree, please elaborate on the answer: 10b Security for the freight component The integrated transportation system requires the presence of personnel managing freight delivery (loading/unloading, assignment to parcel lockers at bus stops for the pick-up by clients). The presence of passengers on board (even if in separated areas) could increase the risks of theft, damage, etc., for freight service users. Please express your agreement or disagreement with this statement. URBAN SERVICE . 1. Agree 2. Disagree SUBURBAN/RURAL SERVICE 3. Agree 4. 
Disagree If disagree, please elaborate on the answer: 11 Environment The integrated transportation system is expected to produce a positive variation in the generated transport externalities (including congestion, accidents, local and global air pollution, and noise pollution). Please express your agreement or disagreement with this statement. URBAN SERVICE . 1. Agree 2. Disagree SUBURBAN/RURAL SERVICE 3. Agree 4. Disagree If disagree, please elaborate on the answer: 12. Please provide any comments about the whole questionnaire.
|
[
"ALLEN",
"ANAND",
"ARVIDSSON",
"BAKKER",
"BAYART",
"BELLIZZI",
"BELLIZZI",
"BOLLAPRAGADA",
"BOYER",
"BRAUNSBERGER",
"BRUZZONE",
"CAVALLARO",
"CAVALLARO",
"CAVALLARO",
"COCHRANE",
"DELANGHE",
"DEONA",
"DEONA",
"DEONA",
"DELLOLIO",
"DIGIESI",
"EBOLI",
"EBOLI",
"EBOLI",
"EBOLI",
"FATNASSI",
"FRIMAN",
"FUMASOLI",
"GATTA",
"GHILAS",
"GHILAS",
"GRISHAM",
"HANSSON",
"HEERWEGH",
"HENSHER",
"HILL",
"HORCAS",
"HU",
"HUANG",
"JANSEN",
"KARAKIKES",
"KIBAJANIAK",
"LARRODE",
"LEPIRA",
"LEPIRA",
"LIBARDO",
"LOOSVELDT",
"MACHARIS",
"MACHARIS",
"MACHARIS",
"MANGANO",
"MARTILLA",
"MASSON",
"MEDINA",
"MULLEN",
"NAMGUNG",
"NOCERA",
"PARASURAMAN",
"QUAK",
"REDMAN",
"SAMPAIO",
"SERAFINI",
"SOUREK",
"SPICKERMANN",
"SPOOR",
"STRALE",
"TERZI",
"TRANSPORTATIONRESEARCHBOARD",
"TRENTINI",
"VAJDOVA",
"VANDUIN",
"VISSER",
"WOSIYANA",
"ZHOU"
] |
51b24bdf5e0948b0b2a213c564393dbf_Dimensionless process windows in laser-based powder bed fusion of AISI 316L using ring-shaped beam p_10.1016_j.addlet.2025.100284.xml
|
Dimensionless process windows in laser-based powder bed fusion of AISI 316L using ring-shaped beam profiles
|
[
"Grünewald, Jonas",
"Wudy, Katrin"
] |
The research trend to investigate the influence of alternative beam profiles on the process and component properties in laser-based powder bed fusion raises the question of how to compare the processes and process results generated with various beam profiles in different sizes. The current state of research mainly examines the process simplified on a single-track basis or addresses isolated aspects, such as the change in beam profile and size with constant absolute process parameters, which neglects the cross-effects of these parameters. Therefore, this paper presents a new approach to consider varied process parameters and their cross effects. The approach is based on a simple heat conduction model and allows the creation of beam shape and size-independent process maps. These dimensionless process maps are created by replacing the common dimensioned process parameters (laser power and scan speed) with combined dimensionless parameters (dimensionless enthalpy and Péclet number, each extended by a dimensionless hatch distance). This way, the parameters consider material and beam properties. Within the process maps, the process boundaries are predicted by simple geometric conditions of the calculated melt pools using the introduced heat conduction model. The model is experimentally validated by conducting a comprehensive parameter study using a multidimensional design of experiments with seven different beam profiles in various sizes and varying laser power, scanning speed, and hatch distance processing AISI 316L. The relative density and surface roughness are evaluated in the experiments. The predicted and experimentally determined process limits are in excellent agreement.
|
Nomenclature: A, absorption; c_p, specific heat capacity; d*, melt track depth; E_v, volume energy density; g, dimensionless temperature field; h, hatch distance; h*, dimensionless hatch distance; h_s, enthalpy at melting point; L_f, latent heat of fusion; Pe, Péclet number; P_laser, laser power; t, time (integration variable); T_m, melting temperature; T_0, ambient temperature; V̇, build rate; v_scan, scanning speed; w*, melt track width; w_0, spot radius; x̅, scan vector length; x̅*, dimensionless scan vector length; x, y, z, spatial coordinates; x*, y*, z*, dimensionless spatial coordinates; α, thermal diffusivity; δ_diff, thermal diffusion depth; δ_layer, layer thickness; ΔH/h_s, dimensionless enthalpy; ρ, density. 1 Introduction Laser-based powder bed fusion of metals (PBF-LB/M) is an additive manufacturing process consisting of the layer-wise melting of metallic powder in a powder bed using a laser beam. The laser exposes the desired areas in the powder bed by scanning defined exposure patterns consisting of individual hatched scan tracks. State-of-the-art PBF-LB/M uses single-mode lasers with a Gaussian intensity distribution [ 1 ]. These typically provide spot sizes between 30 and 115 µm [ 1 ]. The resulting high intensities ensure the complete melting of the powder feedstock, resulting in dense components (relative densities greater than 99.5 %) [ 2 ]. In PBF-LB/M, the process parameters are typically optimized to produce highly dense samples with low surface roughness. The ranges of parameter combinations leading to the desired result span the process window. Outside the process window, various physical effects lead to undesirable process and component properties. When high laser energy densities are applied, material flows are created, material vaporizes, and vapor depressions form [ 3 ]. This can lead to unstable keyholing and keyhole porosity in the component [ 4 ]. When very low energy densities are used, the powder material is not melted sufficiently, leading to a lack of fusion [ 5 ]. 
If excessively high process speeds are selected, balling effects and bulging top surfaces can occur [ 6 ]. Above a critical scanning speed, the longitudinally elongated melt pool breaks up and forms melt accumulations [ 7 ]. Additionally, high lateral velocities can increase roughness and bulging surfaces, especially when using larger laser spots [ 6 ]. The bulging top surfaces of a layer can result in a process abortion due to collisions with the recoater [ 8 ]. The absolute boundaries of the process window regarding the process parameters depend on the system technology used, the processed material, the atmosphere, and a multitude of additional factors. In some cases, the Gaussian shape of the single-mode lasers used has been identified as a contributing factor to these process limits [ 9 , 10 ]. Therefore, a current research trend is to investigate the influence of alternative beam shapes on process and component properties in PBF-LB/M. Single-line experiments [ 11 ] and multi-layer experiments [ 12 , 13 ] have shown that the shape of the melt pool changes when applying non-Gaussian beam profiles. Melt tracks produced with ring-shaped instead of Gaussian beam profiles are generally wider [ 9 , 14 ]. However, alternative beam profiles often reduce the melt track depth due to a typically lower intensity [ 9 ]. This shifts the process window for three-dimensional processes towards higher laser powers [ 13 , 15 ]. Nevertheless, Wischeropp et al. [ 10 ] demonstrated that using ring-shaped beam profiles helps to avoid keyhole porosity when processing AlSi10Mg. Moreover, it has been shown that the number and velocity of spatters can be reduced using non-Gaussian beam profiles, indicating reduced melt pool dynamics [ 16 , 17 ]. The cited studies show the potential of alternative beam profiles for enlarging the process window.
However, most studies attempt to create similar process conditions using the same absolute process parameters and then compare the process results with common dimensioned process variables, such as the volume energy density, even if the beam sizes are changed. Galbusera et al. [ 13 ] stated that using the volume energy density exclusively for evaluating the process result may fail when using alternative beam profiles. Additionally, Bakhtari et al. [ 18 ] emphasize that using the volume energy density is inappropriate when using alternative beam shapes because it does not deliver reproducible results. To overcome the challenge of comparing process behavior and results when using laser spots of various sizes and shapes, this paper presents a new approach to process map representation as an alternative to the traditional P laser - v scan process maps, together with the deduction of dimensionless process windows within those maps. The approach is based on a simple heat conduction model and considers the spot size and thermophysical material properties in the characteristic values. This way, the dimensionless process maps enable mesoscopic process results to be compared across different beam profiles and spot sizes. The process window boundaries for specimens resulting in high relative density and low top surface roughness can be predicted using simple geometric criteria of the modeled melt pools.

2 Material and methods

This section first presents the established absolute characteristic parameters for PBF-LB/M, followed by the dimensionless parameters, which are combined to form the novel dimensionless process map representation and to predict the process limits. This is followed by the experimental conditions and process parameters investigated. Finally, the evaluation methods for the roughness measurements and density determination are described.
2.1 Characteristic process parameters in laser-based powder bed fusion of metals

In order to combine the process parameters into one energy parameter, the volume energy density E v is typically used for PBF-LB/M. This characteristic value relates the laser power P laser to the scanning speed v scan , the hatch distance h, and the layer thickness δ layer according to

(1) E v = P laser / (v scan · h · δ layer ).

Process speed is evaluated by the build rate V̇. The build rate quantifies the volume added to the workpiece per unit time during exposure according to

(2) V̇ = v scan · h · δ layer .

The volume energy density E v is often used because of its simplicity. However, it neglects many influencing variables that enormously impact the process dynamics and results, such as the spot size and material-specific properties. Therefore, its suitability as a characteristic single process parameter is limited [ 19 ]. The dimensionless enthalpy, according to Hann et al. [ 20 ], includes thermophysical material properties and the spot size of the laser beam in addition to the standard process parameters of laser power and scanning speed. According to [ 21 ], it is calculated as

(3) ΔH/h s = A · P laser / (ρ · (c p · (T m − T 0 ) + L f ) · √(π · α · v scan · (2 w 0 )³))

with absorption A, density ρ, specific heat capacity c p , the difference between melting and ambient temperature T m − T 0 , latent heat of fusion L f , thermal diffusivity α, and laser spot radius w 0 . As such, the dimensionless enthalpy is a measure of the amount of energy introduced into a certain mass relative to the energy required to melt this mass. In order to consider the speed aspects in laser material processes, Ion et al. [ 22 ] introduced the dimensionless scanning speed as the Peclét number, which is calculated according to

(4) Pe = v scan · 2 w 0 / α.

Rubenchik et al.
[ 23 ] showed that a dimensionless temperature field can be calculated with the Peclét number and the dimensionless coordinates x *, y *, and z * based on the Eagar-Tsai model [ 24 ]. Thereby, the x and y coordinates are normalized to the beam radius. The z coordinate is normalized to the thermal diffusion depth. The normalization can be summarized according to

(5) x * = x/w 0 , y * = y/w 0 , z * = z/δ diff with δ diff = √(α · 2 w 0 / v scan ).

The dimensionless temperature field is represented by the function g, which is calculated using

(6) g = ∫₀^∞ exp(−z *²/(4t) − (y *² + (x * − t)²)/(16t/Pe + 1)) / ((16t/Pe + 1) · √t) dt.

The melt pool boundaries occur at points where the dimensionless temperature field g reaches the dimensionless melting temperature. This can be expressed using the dimensionless enthalpy. According to Rubenchik et al. [ 23 ], the melt pool boundaries result at points fulfilling the condition

(7) g = π / (2^(3/4) · ΔH/h s ).

Using this model, the dimensionless enthalpy and Peclét number are suitable for predicting the dimensionless melt track depth d * and melt track width w * for a heat conduction-driven process regime in a known material [ 23 ], even if ring-shaped beam profiles are used [ 25 ]. However, the informative value of the dimensionless enthalpy and Peclét number is limited to single melt tracks. In a two- or three-dimensional additive manufacturing process, the hatch distance, the layer thickness, and the laser return time influence the process results. Since the layer thickness is kept constant in this study, only the hatch distance and the laser return time are considered. In addition to the number of tracks required per exposed area, the hatch distance determines the remelted area of neighboring scan tracks. The corresponding proportion significantly influences the accumulated heat. Accordingly, the hatch distance must be scaled with the expected melt pool width to obtain comparable processes.
Since the coordinates x and y, and thus the melt pool length and width, are normalized to the spot radius in the model, the hatch distance is also normalized to the spot radius. This dimensionless hatch distance was already introduced by Thomas et al. [ 26 ] as

(8) h * = h/w 0 ,

whereby small values mean a larger overlap of the individual track cross-sections. An h * of 1.4 corresponds to Meiners' [ 27 ] original recommendation for a hatch distance that results in dense components. Besides the hatch distance, the laser return time influences the accumulated heat in the process zone. In the presented approach, this part is considered via the mean dimensionless scan vector length x̅ *, which results from the normalization of the mean scan vector length x̅ to the spot radius w 0 plus the dimensionless hatch distance h * according to

(9) x̅ * = x̅/w 0 + h *.

Consequently, Eq. (9) assumes that hatch sections are scanned in a meandering pattern and that only negligible delay times occur between two neighboring scan tracks. The build rate is independent of the energy input, as it only includes the scanning speed and the geometric parameters of the PBF-LB/M process. However, this does not provide direct transferability to different materials. In this publication, a dimensionless build rate is introduced. As the experiments are carried out with a constant layer thickness, it is neglected in calculating the characteristic value. As a result, the dimensionless build rate is the product of the dimensionless scanning speed, i.e., the Peclét number, and the dimensionless hatch distance according to

(10) Pe · h * = 2 · v scan · h / α,

which means that the dimensionless build rate is proportional to the build rate for a constant layer thickness and material. Dimensionless process maps can be drawn from the parameters defined in Eqs. (3) , (8) , and (10) .
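To make Eqs. (3), (4), and (8)–(10) concrete, the following Python sketch computes the coordinates of the dimensionless process map from absolute machine parameters. The AISI 316L property values below are illustrative round numbers, not the values of Table 2; only the absorption of 0.6 is taken from the text.

```python
from math import pi, sqrt

# Illustrative AISI 316L properties (round numbers for demonstration only):
A = 0.6          # absorption, as assumed in the paper following King et al.
RHO = 7950.0     # density, kg/m^3
CP = 500.0       # specific heat capacity, J/(kg K)
TM, T0 = 1673.0, 293.0   # melting / ambient temperature, K
LF = 2.6e5       # latent heat of fusion, J/kg
ALPHA = 4.0e-6   # thermal diffusivity, m^2/s

def dimensionless_enthalpy(p_laser, v_scan, w0):
    """Eq. (3): normalized enthalpy DH/h_s for laser power, speed, spot radius."""
    h_s = RHO * (CP * (TM - T0) + LF)  # enthalpy at melting point per volume
    return A * p_laser / (h_s * sqrt(pi * ALPHA * v_scan * (2.0 * w0) ** 3))

def peclet(v_scan, w0):
    """Eq. (4): Pe = v_scan * 2 w0 / alpha."""
    return v_scan * 2.0 * w0 / ALPHA

def mean_scan_vector_length(x_bar, w0, h):
    """Eq. (9): x_bar* = x_bar / w0 + h*."""
    return x_bar / w0 + h / w0

def map_coordinates(p_laser, v_scan, w0, h):
    """Ordinate and abscissa of the dimensionless process map:
    (DH/h_s) / h*  and  Pe * h*  (Eqs. (3), (8), (10))."""
    h_star = h / w0                          # Eq. (8)
    energy = dimensionless_enthalpy(p_laser, v_scan, w0) / h_star
    build = peclet(v_scan, w0) * h_star      # equals 2 * v_scan * h / alpha
    return energy, build
```

As a plausibility check, a 7 mm scan vector with a 100 µm spot radius and h* = 1.7 reproduces the mean dimensionless scan vector length of 71.7 used later in the text.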
The ordinate of these maps is formed by the ratio of the dimensionless enthalpy to the dimensionless hatch distance and is, therefore, a measure of the energy input. The dimensionless build rate is plotted on the abscissa. This way, processes with different absolute parameters can be compared within one map, which distinguishes the approach from previously published approaches by, e.g., Thomas et al. [ 26 ] or Patel and Vlasea [ 28 ]. Thomas et al. [ 26 ] used one energy dimension for their process map representation, calculated from the dimensionless energy and dimensionless velocity introduced by Ion et al. [ 22 ] together with their own dimensionless layer thickness and hatch distance. The second dimension is the reciprocal dimensionless hatch distance. Thus, their map representation is particularly suitable for showing the influence of the dimensionless hatch distance on the process results. Patel and Vlasea [ 28 ] also extended the approach of Ion et al. [ 22 ] and included the dimensionless layer thickness of Thomas et al. [ 26 ] in their model. As the hatch distance was not included, the representation of Patel and Vlasea [ 28 ] is particularly suitable for observing the melting mode of individual scan tracks. Beyond the pure clustering of process result categories, process maps provide additional value if the process limits can be predicted. The basis for this prediction is outlined in the following section.

2.2 Prediction of process window boundaries

From the heat conduction model in Eq. (6) , melt pool characteristics such as length, width, and depth can be extracted using Eq. (7) . In a previous study [ 25 ], it was shown that the model can also be used to calculate the melt pool cross-sections when using ring-shaped beam profiles, under the condition that the spot size is determined using the 2nd moment method and that the process is still in a conduction-driven process regime.
However, in the existing form, the model neglects the influence of neighboring scan tracks on the melt pool shape and size. In this publication, the dimensionless temperature field is therefore extended by two neighboring scan tracks, which are scanned bidirectionally. The temperature field during the exposure of the center of the third scan track results from

(11) g = ∫₀^∞ [exp(−z *²/(4t) − (y *² + (x * − t)²)/(16t/Pe + 1)) + exp(−z *²/(4t) − ((y * + h *)² + (−x * + x̅ * − t)²)/(16t/Pe + 1)) + exp(−z *²/(4t) − ((y * + 2h *)² + (x * + 2x̅ * − t)²)/(16t/Pe + 1))] / ((16t/Pe + 1) · √t) dt.

The resulting melt pool of the heat source of the first term is analyzed to determine the melt pool dimensions. The second and third terms are implemented to consider the preheating of the analyzed process zone by heat accumulation of neighboring scan tracks. The spatial offset of the neighboring scan tracks in the y * direction is included via the dimensionless hatch distance h *. The laser return time is implemented via a local offset of the heat sources in the x * direction with the dimensionless mean scan vector length x̅ *. Thus, the described method calculates the dimensionless temperature field resulting exclusively from heat conduction based on the process parameters and geometric boundary conditions. The process is assumed to be in a steady state while exposing the center of a third scan vector. This procedure simplifies the complex PBF-LB/M process to a high degree and does not fully represent the underlying physics. However, if the process is conduction-driven, the simplified assumptions are sufficient to predict the melt pool sizes and the process limits associated with the melt pool dimensions. Fig. 1 shows the x *- y * plane of an exemplary calculated temperature field, including the artifacts of the fictitious melt pools of the previous scan tracks.
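The extended temperature field of Eq. (11) can be evaluated numerically. The sketch below is a minimal pure-Python implementation of the integral as reconstructed here (the denominator 16t/Pe + 1 is part of that reconstruction); the substitution t = u² removes the 1/√t endpoint singularity, and the trapezoidal rule with ad hoc integration settings is used, which are not choices made in the paper.

```python
from math import exp

def g_field(xs, ys, zs, pe, h_star, xbar_star, u_max=12.0, n=4000):
    """Dimensionless temperature field of Eq. (11): the current scan track
    plus two bidirectionally scanned neighbour tracks as preheating terms.
    Integration via t = u^2 and the trapezoidal rule (sketch accuracy only)."""
    def integrand(u):
        t = u * u
        if t == 0.0:
            return 0.0  # endpoint contributes negligibly with trapezoid weight
        d = 16.0 * t / pe + 1.0        # reconstructed denominator (4^2 t/Pe + 1)
        za = zs * zs / (4.0 * t)
        s = exp(-za - (ys ** 2 + (xs - t) ** 2) / d)                 # current track
        s += exp(-za - ((ys + h_star) ** 2
                        + (-xs + xbar_star - t) ** 2) / d)           # 1st neighbour
        s += exp(-za - ((ys + 2.0 * h_star) ** 2
                        + (xs + 2.0 * xbar_star - t) ** 2) / d)      # 2nd neighbour
        return 2.0 * s / d             # dt / sqrt(t) = 2 du after substitution
    du = u_max / n
    total = 0.5 * (integrand(0.0) + integrand(u_max))
    total += sum(integrand(i * du) for i in range(1, n))
    return total * du
```

Evaluating g on a grid of x*, y*, z* points and thresholding it with the melt condition yields melt pool contours comparable to Fig. 1.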
A dimensionless mean scan vector length of 71.7 was used in the figure, which would result from a 7 mm track length scanned with a 200 µm spot diameter and a dimensionless hatch distance of 1.7. As can be seen, the directly adjacent scan track (right in Fig. 1 ), in particular, influences the temperature field. The preheating effect of the next adjacent scan track (left in Fig. 1 ) is only slightly visible. The boundaries of the process windows are predicted using geometric characteristics of the melt pool under consideration. For lack of fusion defects, it is assumed that there must be a minimum fusion depth. Since the melt track depths produced between two melt tracks differ significantly with different hatch distances (see Fig. 2 ), a calculated melt pool depth of at least 5 µm at the y *-coordinate of h */2 (d* h in Fig. 2 ) is targeted. For verification of the criterion, the melt pool cross-section is assumed to be elliptical, resulting in the ellipse equation

(12) y *²/(w */2)² + z *²/d *² = 1.

The geometric relationships are sketched in Fig. 3 . In order to obtain absolute values for the depth coordinate, the dimensionless melt track depth d * must be extended by the factor w 0 /√Pe, as the z-direction in the coordinate system used is normalized to the thermal diffusion depth. With this extension, the limit for lack of fusion results at the position y * = h */2 from the condition

(13) d * · (w 0 /√Pe) · √(1 − h *²/w *²) ≥ 5 µm.

The criterion assumed by Kruth et al. [ 29 ] for balling is used for the process limit in the direction of bulging surfaces. Kruth et al. [ 29 ] describe the melt pool as a half-cylinder whose surface tension begins to break the melt pool into spherical accumulations when the length-to-width ratio exceeds 2.1. Therefore, the criterion for a process without bulging surfaces results from the ratio of the dimensionless melt pool length l * to width w * as

(14) l */w * ≤ 2.1.
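The two geometric process-limit criteria reduce to a few lines of code. The helper functions below are a hypothetical sketch of Eqs. (12)–(14) as reconstructed above; argument names are chosen to mirror the symbols in the text, and the elliptical cross-section assumption follows Eq. (12).

```python
from math import sqrt

def lack_of_fusion_ok(d_star, w_star, h_star, pe, w0, min_depth=5e-6):
    """Eq. (13): assuming an elliptical melt pool cross-section (Eq. (12)),
    the re-dimensioned melt depth midway between two tracks (y* = h*/2)
    must reach at least 5 um for a process without lack of fusion."""
    if h_star >= w_star:
        return False  # neighbouring tracks do not even overlap
    depth_mid = d_star * (w0 / sqrt(pe)) * sqrt(1.0 - h_star ** 2 / w_star ** 2)
    return depth_mid >= min_depth

def balling_ok(l_star, w_star):
    """Eq. (14): Kruth's criterion -- the melt pool length-to-width ratio
    must not exceed 2.1, otherwise balling/bulging surfaces are expected."""
    return l_star / w_star <= 2.1
```

Feeding the melt pool dimensions extracted from the modeled temperature field into these two checks traces the predicted process window boundaries shown later as line graphs in the process maps.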
2.3 Experimental setup

The validation experiments are conducted on an in-house PBF-LB/M test bench. A fiber laser with switchable beam profiles (AFX-1000, nLIGHT, Inc., Vancouver, WA, USA) is used as the beam source. This beam source can distribute the laser power between a central Gaussian core and a surrounding ring in seven steps. In this study, the beam profiles are named according to the relative power distribution ratio "core/ring". A scanning system with four optical axes for the high-power laser beam is used for deflection, focusing, and expansion (AM-MODULE NEXT GEN, RAYLASE GmbH, Wessling, Germany). The PBF-LB/M process takes place in a self-built process chamber (see Fig. 4 ) under an argon atmosphere with a residual oxygen content of < 0.15 %. The shielding gas flow is directed over the process zone. In-house programmed software, implemented based on the Machine Control Framework (Autodesk, Inc., San Rafael, CA, USA), controls the hardware components. This enables the experiments to be carried out automatically using a previously defined 3MF file.

2.4 Design of experiments

All seven beam profiles available with the system technology are investigated. The approximately Gaussian beam profile 95/5 is used as a reference for the state of the art. The alternative beam profiles can serve different objectives. Beam profiles with lower intensities in the ring (75/25, 65/35, and 50/50) can be used for local pre- and post-heating of the process zone, while beam profiles with higher intensities in the ring (35/65, 20/80, and 10/90) can possibly influence melt flows. Within the design of experiments, laser power, scanning speed, and hatch distance are varied in three stages. The approximately Gaussian beam profile 95/5 is examined in two sizes to separate the influence of the spot size from the influence of the beam profile. Table 1 summarizes the process parameters used.
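The full-factorial design described above can be sketched as follows. The level labels are placeholders for the actual values listed in Table 1; only the beam configurations and the level counts are taken from the text.

```python
from itertools import product

# Seven core/ring power splits, with the near-Gaussian 95/5 profile examined
# in a second spot size: eight beam configurations in total. Each is crossed
# with three levels of laser power, scanning speed, and hatch distance.
beam_configs = ["95/5", "95/5 (large)", "75/25", "65/35",
                "50/50", "35/65", "20/80", "10/90"]
power_levels = ["P1", "P2", "P3"]   # placeholder labels, see Table 1
speed_levels = ["v1", "v2", "v3"]
hatch_levels = ["h1", "h2", "h3"]

parameter_sets = list(product(beam_configs, power_levels,
                              speed_levels, hatch_levels))
# 8 beam configurations * 3 * 3 * 3 levels = 216 parameter sets
```

This reproduces the 216 parameter sets of the study, each of which is manufactured with three repetitions on the build platform.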
Beam diameters were determined by caustic measurements with a camera-based focus beam profiler (FBP-2KF, CINOGY Technologies GmbH, Duderstadt, Germany) using the 2nd moment method. The parameters are chosen based on preliminary single-track experiments published in [ 25 ] so that the process results cover the process window and its boundaries. The dimensionless enthalpy, dimensionless hatch distance, and Peclét number were kept in a similar range across the parameter sets. This design leads to 216 parameter sets, each examined with three repetitions at different positions of the build platform; the mean values are shown in the process maps. AISI 316L (LPW-316–1, LPW Technology Ltd., Runcorn, Cheshire, United Kingdom) is used as the feedstock material. For the calculation of the dimensionless parameters, the thermophysical properties of AISI 316L are taken from [ 30 ] (see Table 2 ). Following King et al. [ 31 ], the absorption A is assumed to be 0.6.

2.5 Specimen design and measurement methods

Square cuboids with an edge length of 7 mm and a height of 10 mm are produced as test geometries. The specimens are exposed with a bidirectional pattern. The scan vectors are rotated by a 67° increment from layer to layer. The build job design is shown in Fig. 5 . Parameter sets that result in protruding parts colliding with the recoater within the first 20 layers are excluded from the build job. The roughness of all top surfaces (including those of the canceled specimens) is measured using fringe projection (VR-3100, Keyence Corporation, Ōsaka, Japan). Subsequently, the samples are separated from the build platform by wire electrical discharge machining and embedded in an epoxy resin parallel to the build direction. To create cross-sections of the test specimens, the samples are ground with silicon carbide paper in four stages (#180, #320, #800, #1200) and polished with lubricant and polishing suspension in three steps (3 μm, 1 μm, and OPS).
The grinding plane is at least 2 mm inside the test specimen to exclude edge layer effects. The generated cross-sections are imaged using a digital microscope (VHX-7000, Keyence Corporation, Ōsaka, Japan). From the generated micrographs, the optical density is determined.

3 Results and discussion

The results and discussion section starts by presenting the specimen densities and top surface roughness as a function of the dimensionless enthalpy per dimensionless hatch distance. This is followed by process maps of the part density and top surface roughness in dimensioned and dimensionless representations to benchmark the approach introduced in the methodology section against the current state of the art. Within the introduced dimensionless process maps, the process window boundaries for components with high relative density and low top surface roughness are predicted for different dimensionless hatch distances using the conditions from the methodology section. Since a multidimensional experimental design with various beam shapes, spot sizes, and process parameters is investigated in this study, using common P laser - v scan process maps is not feasible. Therefore, the volume energy density E v is used as the energy input and the build rate V̇ as the process speed in the dimensioned process maps. The dimensionless process map uses the dimensionless enthalpy per dimensionless hatch distance and the dimensionless build rate as axes. Fig. 6 presents the measured data as a function of the dimensionless enthalpy per dimensionless hatch distance. Fig. 6 a shows the relative component densities, while Fig. 6 b shows the top surface roughness values. Both the density and the roughness values tend to increase with increasing dimensionless enthalpy per dimensionless hatch distance.
However, the scatter of the values when considering only the energy input is so high that values with very low densities (< 99 %) and high roughnesses ( Sa > 90 µm) as well as high relative densities (> 99.8 %) and low roughnesses ( Sa < 40 µm) are present over an extensive range of the dimensionless enthalpy per dimensionless hatch distance. The standard deviation of the measured values increases with greater deviation from the desired values (high density and low roughness). Consequently, low mean values of density and high mean values of top surface roughness tend to have higher standard deviations, which is expected since the process is not reliable in these ranges. The representation in Fig. 6 shows that the energy-related dimension alone does not sufficiently characterize the process with respect to the resulting density and top surface roughness. This clearly emphasizes the importance of suitable process maps. Fig. 7 shows the relative density process map of the manufactured specimens with the two representation methods. Fig. 7 a shows the dimensioned representation, while Fig. 7 b shows the dimensionless process map. At low build rates (≤ 3 mm 3 /s), beam profile 95/5 frequently results in keyhole porosity at the slowest scanning speed levels investigated. Build rates of approximately 3 mm 3 /s are common for state-of-the-art processes [ 32 ]. Beam profiles 75/25, 65/35, and 50/50 exhibit keyhole porosity at volume energy densities above 250 J/mm 3 . No keyhole porosities are detected for the beam profiles 35/65, 20/80, and 10/90. Lack of fusion porosities occur above a build rate of approx. 6.5 mm 3 /s for the beam profiles 95/5, 35/65, 20/80, and 10/90 at volume energy densities below 85 J/mm 3 . For the beam profiles 75/25, 65/35, and 50/50, no lack of fusion is detected in the examined parameter range.
Additionally, below a build rate of 7 mm 3 /s, the samples of beam profile 95/5 exhibit a lack of fusion at the lowest power levels and the largest hatch levels. With the combination of high build rates and high volume energy densities, the process often results in protruding specimens. The build job for the corresponding specimens is aborted. Consequently, no density values could be determined (gray symbols in Fig. 7 a). The process boundary towards lack of fusion is mainly due to insufficient energy for the complete fusion of the powder particles or a hatch distance that is too large, resulting in the single tracks not being sufficiently connected. The boundaries of the density process window for keyhole porosity and for parts with protruding top surfaces are strongly blurred. For example, with the beam profiles 95/5, 20/80, and 10/90, it is still possible to generate highly dense components with a volume energy density of 120 J/mm 3 and a build rate of 4.4 mm 3 /s. However, the processes with the beam profiles 75/25, 65/35, and 50/50 had to be aborted in this parameter range. This is mainly because the characteristic values used do not represent the intensity distribution and the spot size. Galbusera et al. [ 13 ] attempted to create comparability in a similar parameter study on an Al alloy using the volume energy density. However, that study concluded that using the volume energy density to evaluate the process results fails when alternative beam profiles are used. Moreover, Bakhtari et al. [ 18 ] state that using the volume energy density is inappropriate for alternative beam shapes because non-comparable values can be obtained for differing beam shapes. To address this issue, the new approach of a dimensionless process map introduced in the methodology section is shown in Fig. 7 b. The dimensionless process map in Fig. 7 b is qualitatively similar to the dimensioned process map in Fig. 7 a.
Keyhole porosity occurs mainly for the nearly Gaussian beam profile 95/5 below a Pe · h * of 9. For the beam profiles 75/25, 65/35, and 50/50, keyhole porosity occurs above a dimensionless enthalpy per dimensionless hatch distance of 6. In the range of a dimensionless build rate below 9 and a dimensionless enthalpy per dimensionless hatch distance below 6, components with a very high density are produced with the beam profiles 75/25, 65/35, and 50/50. This means that, in terms of keyhole porosity, the boundary of the process window is strongly dependent on the beam profile. The almost Gaussian beam profile 95/5 has the highest tendency to result in keyhole porosity. If keyholes are present during processing with non-Gaussian beam profiles in the same parameter range, distributing the laser power in the ring may stabilize the existing keyholes and thus prevent collapses that lead to porosities, as demonstrated by Yuan et al. [ 33 ] using x-ray imaging. The dependence of the keyhole porosity on the beam profile, even with similar dimensionless characteristic values, demonstrates the advantages of alternative beam profiles. At the same time, it shows that the heat conduction model is not suitable for reliably predicting keyhole porosity. This limit of the model was expected, as the model neglects convection effects caused, for example, by evaporation or melt flows. The transition from a process that results in dense components to a lack of fusion process lies at a dimensionless enthalpy per dimensionless hatch distance of around 1.5. This transition is more evident, independently of the beam profile, when using the dimensionless parameters than when using the dimensioned parameters. The criterion in Eq. (13) can predict this lack of fusion boundary.
The improved overlap of the dimensionless process boundaries for lack of fusion and protruding specimens when using alternative beam profiles is mainly because the dimensionless approach moves the coordinate points for very small beam profiles to higher values on the ordinate. The reason is the heat conduction model behind the dimensionless enthalpy, which accounts for the thermal diffusion depth of the introduced heat over the interaction time between the beam profile and a considered point in the specimen. With smaller beam profiles, the considered volume (and thus the mass) is smaller, which leads to higher enthalpies introduced into the considered volume. This introduced enthalpy is not dependent on the beam shape and correlates very well with the phenomenon of lack of fusion. Analogous to the dimensioned process map, the data points of the bulging samples (gray dots in Fig. 7 b) are located in an area with a high energy input (dimensionless enthalpy per dimensionless hatch distance) and high process speeds (dimensionless build rate). The boundary to this area of protruding parts is more pronounced when using the dimensionless parameters. The regions of built and aborted specimens overlap significantly less in Fig. 7 b. With increasing dimensionless build rate, lower dimensionless enthalpies per dimensionless hatch distance suffice to result in protruding parts. Qualitatively, the condition from Eq. (14) describes the progression of the boundary between the highly dense samples and the samples that had to be aborted during the process. However, some samples still exhibit high density values with parameters above the predicted process window limit. Fig. 8 shows the process map concerning the top surface roughness of the samples. When using the dimensioned values, there is a large overlap of areas with high and low roughness (see Fig. 8 a), similar to the density distribution in Fig. 7 a.
The different roughness areas are separated if the dimensionless process map approach is applied. The samples with increased keyhole porosity (see Fig. 7 ) exhibit high roughness values. Above a dimensionless build rate of 20, only a few specimens exhibit a roughness Sa of < 40 µm (see Fig. 8 b). The corresponding samples exhibit lack-of-fusion porosities (see Fig. 7 b). The protruding specimens (gray symbols in Fig. 7 ), which were aborted during the build process, show very high roughness values ( Sa ≥ 70 µm). Remarkably, the roughness increases abruptly in the range predicted by the condition in Eq. (14) , suggesting that this roughness is primarily caused by balling or another surface tension-driven physical phenomenon for the long and narrow melt pools produced at high Peclét numbers. At the same time, the high roughness does not directly lead to samples with a reduced density but initially results in dense, very rough, or wavy samples. This effect is illustrated in Fig. 7 b and is also known from powder bed fusion using an electron beam [ 34 ]. As already introduced, the dimensionless process maps contain the predicted process limits as line graphs. The graphs show the parameter combinations for which the conditions from Eqs. (13) and (14) are exactly fulfilled for various dimensionless hatch distances. The dimensionless hatch distances shown are examples of the levels evaluated experimentally. The predicted process limits for the different dimensionless hatch distances lie within a small corridor. However, the simulation shows an interesting trend: for constant dimensionless build rates, the limit for small dimensionless hatch distances shifts towards larger dimensionless enthalpies per dimensionless hatch distance. This means that the process window for very small dimensionless hatch distances ( h * < 1.0) would be enlarged towards the limit of bulging or protruding surfaces.
Thus, reducing the dimensionless hatch distance could enable higher build rates, which may sound counterintuitive. Interestingly, this has happened in many publications without being explicitly discussed. In [ 6 ], it was summarized that in studies with larger beam profiles, the hatch distance often remains within the range of the Gaussian state-of-the-art process: about 70 µm [ 12 , 13 ], 90 µm [ 35 ], and 100 µm [ 36 ]. Reducing the hatch distance means that the scanning speed would have to be raised significantly to increase the dimensionless build rate. The increased speed would in turn require higher laser power (raised by the square root of the hatch reduction) to maintain dimensionless enthalpies in a suitable range. To summarize the core findings regarding the process maps: when using dimensionless process maps, four general areas can be separated, as in classic process maps (consisting of scanning speed and laser power for a Gaussian beam profile). The four emerging categories are shown in Fig. 9 with exemplary cross-sections and briefly described below: • Keyholing (see Fig. 9 a): The process shifts into keyholing in the areas with high energy inputs and low process speed. Keyhole porosities lead to a reduced relative density. In most cases, the top surface has an increased roughness. Beam profile 95/5 has the highest tendency to result in keyhole porosity. • Lack of fusion (see Fig. 9 d): In the area with high process speed and low energy inputs, lack of fusion porosities occur due to incomplete melting and fusion of powder particles. The top surface is typically slightly rough in these cases. The boundary is independent of the beam shape and can be predicted by a minimum calculated melt track depth at the center line between two adjacent melt tracks. • Protruding specimens (see Fig. 9 b): Specimens with very high top surface roughness values are produced when applying high process speeds and energy inputs (see Fig. 8 ). The specimens may have a high relative density.
The manufacturing of the corresponding specimens was mostly aborted during the process due to recoater collisions. This boundary is hardly dependent on the beam profile when using dimensionless parameters. The boundary to this process regime can be predicted well with the condition according to Eq. (14) , which states that the melt pool should be shorter than 2.1 times the melt pool width for a stable process. • Stable process resulting in dense and smooth parts (see Fig. 9 c): In the center of the dimensionless process map is an area where highly dense specimens with a low top surface roughness can be manufactured. In conclusion, especially the process window boundary regarding keyhole porosity depends on the beam profile. Alternative beam profiles have a larger process window in the direction of the keyhole porosity boundary than the almost Gaussian beam profile 95/5. The process limits regarding lack of fusion and protruding surfaces can be determined experimentally and predicted by simulation, independently of the beam profile. 4 Conclusion The present study demonstrates the feasibility of a novel approach to process map representation using dimensionless parameters. A simple heat conduction model is used to predict the process boundaries within the dimensionless process maps. To validate the approach, a comprehensive parameter study with seven different beam shapes of various sizes and varied laser power, scanning speed, and hatch distance is conducted and presented. To determine the process limits experimentally, the density and the top surface roughness are measured on cuboidal AISI 316L specimens (7 × 7 × 10 mm³).
The results can be summarized as follows: By using dimensionless process maps with the dimensionless enthalpy per dimensionless hatch distance as the energy parameter and the dimensionless build rate as the velocity parameter, process results obtained with different beam profiles can be compared independently of the spot size: • The experimentally deduced process boundaries of all beam profiles investigated overlap in the dimensionless process maps in the direction of lack of fusion and protruding top surfaces. The boundaries can be predicted using the proposed simple heat conduction model. Consequently, the corresponding process boundaries are hardly dependent on the beam profile. • The boundaries of the process windows regarding keyhole porosity differ between the almost Gaussian and the ring-shaped beam profiles. The samples produced using Gaussian beam profiles most frequently exhibit keyhole porosity. On the one hand, this proves that alternative beam profiles stabilize the process; on the other hand, it shows that a simple analysis of heat conduction is not sufficient to draw conclusions in a process regime driven by convection and evaporation. The results demonstrate that ring-shaped beam profiles increase process reliability by reducing the tendency towards unstable keyholing in PBF-LB/M manufactured parts. The consideration based on the newly introduced visualization with dimensionless parameters includes the size dependency of the process and thus introduces a new holistic approach. Moreover, the simulation results show that it may be possible to increase the build rate, particularly with large beam profiles, by reducing the dimensionless hatch distance and simultaneously increasing the scanning speed more than proportionally. This publication validates the presented model by processing the material AISI 316L with a constant layer thickness of 50 µm.
However, by including the absorption and thermophysical properties of the processed material, the model and the method are transferable to other materials and beam shapes without extensive calibration effort. Furthermore, an extension of the model to thicker layers would be possible in future work, as a variation of the layer thickness has not yet been considered in the approach presented. CRediT authorship contribution statement Jonas Grünewald: Writing – review & editing, Writing – original draft, Visualization, Validation, Supervision, Software, Resources, Project administration, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Katrin Wudy: Writing – review & editing, Validation, Supervision, Resources, Project administration. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
[
"KHORASANI",
"AHMED",
"KHAIRALLAH",
"HUANG",
"TANG",
"WUDY",
"ZOLLER",
"LI",
"GRUNEWALD",
"WISCHEROPP",
"RASCH",
"GALBUSERA",
"GALBUSERA",
"NAHR",
"GRUNEWALD",
"GRUNEWALD",
"ROTHFELDER",
"BAKHTARI",
"SCIPIONIBERTOLI",
"HANN",
"COEN",
"ION",
"RUBENCHIK",
"EAGAR",
"GRUNEWALD",
"THOMAS",
"PATEL",
"KRUTH",
"MILLS",
"KING",
"YAKOUT",
"YUAN",
"BREUNING",
"PEREZRUIZ",
"TUMKUR"
] |
01c5a7717ec147d885397306278f06a5_Genetic structure and genome-wide association study of the traditional Kazakh horses_10.1016_j.animal.2023.100926.xml
|
Genetic structure and genome-wide association study of the traditional Kazakh horses
|
[
"Pozharskiy, Alexandr",
"Abdrakhmanova, Aisha",
"Beishova, Indira",
"Shamshidin, Alzhan",
"Nametov, Askar",
"Ulyanova, Tatyana",
"Bekova, Gulmira",
"Kikebayev, Nabidulla",
"Kovalchuk, Alexandr",
"Ulyanov, Vadim",
"Turabayev, Amangeldy",
"Khusnitdinova, Marina",
"Zhambakin, Kabyl",
"Sapakhova, Zagipa",
"Shamekova, Malika",
"Gritsenko, Dilyara"
] |
Horses are traditionally used in Kazakhstan as a source of food and as working and saddle animals. Here, for the first time, microarray-based medium-density single nucleotide polymorphism (SNP) genotyping of six traditionally defined types and breeds of indigenous Kazakh horses was conducted to reveal their genetic structure and to find markers associated with animal size and weight. The results showed that the predefined separation between breeds and sampled populations was not supported by the molecular data. The lack of genetic variation between breeds and populations was revealed by principal component analysis, ADMIXTURE, and distance-based analyses, as well as by the general population parameters expected and observed heterozygosity (He and Ho) and the between-group fixation index (Fst). The analysis revealed that the studied types and breeds should be considered a single breed, namely the ‘Kazakh horse’. The comparison with previously published data on global horse breed diversity revealed a relatively high level of individual diversity in Kazakh horses in comparison with well-known foreign breeds. The Mongolian and Tuva breeds were identified as the closest horse landraces, demonstrating similar patterns of internal variability. A genome-wide association analysis was performed for animal size and weight as the traits most directly related to the meat productivity of horses. The analysis identified a set of 60 SNPs linked with horse genes involved in the regulation of the development of connective tissues and the bone system, the neural system, immune system regulation, and other processes. The present study is novel and introduces Kazakh horses as a promising genetic source for horse breeding and selection at both the domestic and international levels.
|
Implications The Kazakh horses represent traditional landraces used in Central Asia, understudied from the genetic and genomic points of view. In the present study, we used single nucleotide polymorphisms to evaluate the genetic structure of common breeds and types of Kazakh horses, taking into account available data on horse breed diversity. The results showed a high level of individual variability and a lack of differentiation between the considered breeds and populations. The study introduces the Kazakh horses as an original and poorly studied landrace, which has the potential to be used in international breeding programmes as a genetic pool for potentially valuable traits. Introduction Horses ( Equus caballus L.) have traditionally played a central role in the whole history of Kazakhstan and the Kazakh people. The Eneolithic Botai culture (Northern Kazakhstan) contains arguably the earliest evidence of the use of horses by the local tribes ( Levine, 1999 ); however, it remains disputed whether horses were domesticated or obtained by hunting ( Outram et al., 2009, 2021; Taylor and Barrón-Ortiz, 2021 ). Genomic data revealed that Botai horses were closer to Przewalski’s horses than to modern domestic lineages ( Gaunitz et al., 2018 ); thus, even if Neolithic horse domestication took place at Botai, it occurred independently of the main course of horse domestication ( Kyselý and Peške, 2022 ). Nevertheless, horses became an important part of steppe pastoralism and nomadism in the area of modern Kazakhstan and Central Asia as early as the Bronze Age ( Frachetti and Benecke, 2009; Outram et al., 2012 ).
From the Saka-Skythian tribes to the three Kazakh Hordes, throughout the process of the Kazakh ethnogenesis, horses were not only an essential economic resource for the local nomadic peoples ( Chang, 2015 ) but also became an important part of the cultural legacy inherited by independent Kazakhstan, with rich symbolic connotations and influence on the Kazakh language ( Sarbassova, 2015 ). Traditional Kazakh husbandry practices have not changed significantly for centuries and have been based on grazing and seasonal transhumance (migrations between zhailau and qystau , summer and winter pastures, respectively). The conditions of horse pasturing have been kept close to natural: free grazing under the herdsman’s control; in winter, horses find their fodder from under the snow— tebindeu . Herds of horses were the main measure of one’s wealth; the owners selected the best animals or interchanged them with other breeders to keep and improve the horses’ valuable qualities, such as strength, endurance, and, most importantly for nomads, meat and milk productivity. As a result of such folk selection over hundreds and thousands of years, the Kazakh horse was formed, with some traditional types distinguished based on their qualities and geographical distribution, the best known of which are the Zhabe (variants of transliteration: Jabe, Dzhabe), Adai (variants of transliteration: Aday, Adaev), and Naiman horses. Zhabe and Adai horses are the most common types. The Zhabe type is specialised for meat and milk production, and the Adai type has more pronounced saddle-horse qualities; both these types originated in Western Kazakhstan and are used in all regions of the country ( Dmitriev et al., 1989 ). The third type, Naiman, has been traditionally bred by the inhabitants of Dzhungar Alatau as universal horses for use in mountainous conditions; this type is similar to Mongolian horses and considered the most subtle of Kazakh horses ( Dyussegaliyev, 2022 ).
All these types are well adapted for the traditional Kazakh methods of seasonal pasturing and transhumance ( Barmintsev, 1958 ). Traditional Kazakh horses became progenitors of new breeds by crossing with animals possessing desirable traits. The Kostanay (variant of transliteration: Kustanai) breed had been under development since 1887 by crossing Kazakh mares with stallions of saddle breeds, Don, Astrakhan, Strelets, and Thoroughbred, to combine their best qualities ( Dmitriev et al., 1989 ). The breed was finally registered in 1951 ( Nechayev et al., 2005 ), and work on its improvement has continued into the present. These horses have good saddle and draft characteristics and are suitable for both stable and steppe maintenance ( Dmitriev et al., 1989 ). The Kushum horses were bred between 1931 and 1976 by crossing Kazakh horses with Thoroughbreds, Don breeds, and their half-breeds, initially as military horses and later, after World War II, as steppe herd horses for meat and milk production. The resulting breed is versatile; the horses have high endurance and are capable of high live weight gain ( Dmitriev et al., 1989 ). The Mugalzhar breed was developed based on Kazakh horses between 1969 and 1998 to improve meat and milk productivity ( Iskhan et al., 2019 ). Non-specific crossings and selection within the Zhabe breed allowed a significant increase in live weight (80–100 kg for mares, 100–120 kg for stallions, compared to the original horses), without changing the technology of their maintenance ( Dyussegaliyev, 2022 ). Significant damage to the Kazakh horse population was caused by the Soviet policy of involuntary collectivisation and eradication of private animal ownership, and during the period between 1928 and 1958, the number of Kazakh horses was reduced from 4 640 000 to 300 000 ( Nechayev et al., 2005 ).
According to the data of the Bureau of National statistics of the Agency for Strategic planning and reforms of the Republic of Kazakhstan ( Bureau of National statistics, 2023 ), the total number of horses (including foreign breeds) in the country was 1 666.4 thousand in 1991; after the drastic decrease in the 1990s, the number of horses grew steadily and reached 3 489.8 thousand in 2021, due to the development of husbandry in Kazakhstan. Kazakhstan is the second-largest horse meat producer in the world after China; however, production is mainly limited to the domestic market, as the country is not a significant exporter of horse meat ( Jastrzębska et al., 2019 ). With the growing interest in horse meat as a safe and nutritious alternative to beef, despite a prejudicial attitude existing in many countries ( Stanciu, 2015 ), Kazakhstan has the potential to become an important provider in the global horse meat market. This requires extensive modernisation of horse breeding to comply with internationally accepted standards. An important aspect of such modernisation is the wide implementation of contemporary methods of molecular genetics and genomics in breeding practices, to better understand the genetic structures of horse lines and breeds, improve the classification and management of horse genotypes, assist selection using molecular markers associated with valuable traits, etc. The commercial microarray genotyping panels for animals contain tens or hundreds of thousands of single nucleotide polymorphism ( SNP ) markers selected to reflect the total genetic variability, helping scan genomes for potentially important polymorphisms without expensive whole-genome sequencing. In Kazakhstan, SNP microarray genotyping was previously used to describe the genetic structures of local breeds of sheep, another animal of essential importance for the country ( Pozharskiy et al., 2020; Zhumadillayev et al., 2022 ).
For horses, the EquineSNP50 panel was developed and proven to be suitable for genome-wide association analysis and studies of horse diversity ( McCue et al., 2012; Petersen et al., 2013b; 2013a ). The aim of the present work was to explore the genetic diversity of Kazakh horses using the Equine80k SNP microarray, the upgraded and expanded version of the EquineSNP50 array. The horses of three traditional types (Zhabe, Adai, and Naiman) and three derivative breeds (Kushum, Kostanay, and Mugalzhar) were sampled from herds in different regions of Kazakhstan to reveal their genetic variability between and within breeds. Previously published data on the genetic diversity of horse breeds from all over the world were also used for comparison. In addition, a genome-wide association study ( GWAS ) was performed to find SNPs associated with the size and live weight of the horses. This is the first study to apply a medium-density SNP array to an investigation of traditional Kazakh horses in Kazakhstan. The obtained results will increase understanding of the genetics of Kazakh horses, their place in the global diversity of horse breeds, and the history of horse domestication. The Kazakh horses will be introduced as a promising genetic pool not only in domestic but also in international horse breeding programmes. Material and methods Sample collection and data acquisition Genetic materials of three traditional types and three derivative breeds of Kazakh horses (for convenience, we will hereafter refer to these as six breeds) were collected at 25 horse farms in North-Kazakhstan, West-Kazakhstan, East-Kazakhstan, Mangystau, Akmola, Kostanay, Zhambyl, Almaty, and Aktobe regions of Kazakhstan ( Fig. 1 , Table S1 ). Hairs were sampled from horses’ tails and/or manes and stored at +4°C until further use; the hair follicles were used for DNA extraction.
DNA was isolated using the kit “DNK-Extran2” (Syntol, Russian Federation) following the manufacturer’s protocol and quantified using a Qubit 4 Fluorometer with the Qubit dsDNA Broad Range reagent (Thermo Fisher Scientific, USA) for downstream SNP genotyping. A total of 2 020 horses were sampled and processed for DNA isolation and SNP genotyping, including 632 Zhabe, 585 Mugalzhar, 303 Adai, 226 Kushum, 158 Naiman, and 116 Kostanai horses ( Table S1 ). The measurements of horses were taken prior to hair sampling. The measurements included the height at the withers ( HW ), oblique body length ( OBL ), chest circumference ( CC ), cannon bone circumference ( CBC ), and body weight ( BW ). SNP genotyping data on the genetic diversity of foreign horse breeds ( Petersen et al., 2013a ) were retrieved from the Open Science Foundation repository ( https://osf.io/gx42p/ , accessed 20.07.2022), with the explicit permission from Dr. Jessica Petersen. Single nucleotide polymorphism genotyping and quality control Genotyping was performed using the Equine80k SNP array with the iScan system (Illumina, USA) according to the manufacturer’s protocol. The assignments of genotypes and primary quality control were conducted using the Genotyping module of the GenomeStudio software (Illumina). The data were filtered using the following thresholds (the primary quality control): call rate ≥ 0.9 and median GC score ≥ 0.8 for samples; call frequency ≥ 0.95 and GT score ≥ 0.7 for SNPs. The indel markers present in the array were also excluded. All data satisfying these criteria were exported and transformed to PLINK text input files (.ped +.map) using general data handling utilities of R ( R Core Team, 2019 ). PLINK1.9 ( Purcell et al., 2007; Chang et al., 2015 ) was further used to exclude SNPs with minor allele frequencies ≤0.05 and those deviating from Hardy-Weinberg equilibrium with a P -value threshold of 1·10⁻¹⁰. The missing genotypes were imputed using BEAGLE ( Browning et al., 2018 ).
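The filtering cascade described above can be sketched as follows. This is an illustrative re-implementation of PLINK-style call-rate, minor-allele-frequency, and Hardy-Weinberg filters on a toy genotype matrix (0/1/2 allele dosage, `None` = missing), not the actual GenomeStudio/PLINK pipeline; the thresholds mirror those stated in the text.

```python
import math

def snp_qc(G, sample_cr=0.90, snp_cr=0.95, maf_min=0.05, hwe_p=1e-10):
    """G: list of per-sample genotype lists coded 0/1/2, None = missing.
    Returns (kept samples, indices of SNPs passing all filters)."""
    n_snps = len(G[0])
    # 1) drop samples whose call rate is below the threshold
    keep_s = [s for s in G
              if sum(g is not None for g in s) / n_snps >= sample_cr]
    keep_idx = []
    for j in range(n_snps):
        col = [s[j] for s in keep_s if s[j] is not None]
        if len(col) / len(keep_s) < snp_cr:
            continue                        # low call frequency
        p = sum(col) / (2 * len(col))       # alternative-allele frequency
        if min(p, 1 - p) <= maf_min:
            continue                        # minor allele too rare
        # 2) Hardy-Weinberg chi-square (1 df) on genotype counts;
        # for 1 df, sf(x) = erfc(sqrt(x / 2)) needs only the stdlib
        n = len(col)
        obs = [col.count(0), col.count(1), col.count(2)]
        exp = [n * (1 - p) ** 2, 2 * n * p * (1 - p), n * p ** 2]
        chi2 = sum((o - e) ** 2 / e for o, e in zip(obs, exp) if e > 0)
        if math.erfc(math.sqrt(chi2 / 2)) < hwe_p:
            continue                        # strong HWE deviation
        keep_idx.append(j)
    return keep_s, keep_idx
```

In a real workflow these steps are delegated to PLINK (`--geno`, `--mind`, `--maf`, `--hwe`); the sketch only makes the order and meaning of the filters explicit.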
Genetic structure analysis All samples were tested for internal and between-population structure using ADMIXTURE ( Alexander and Lange, 2011 ) for K from 1 to 10 with 10 cross-validation replicates. Based on the results, the outlier genotypes were identified and taken into account for all further analyses. The general population genetic analysis was performed using PLINK1.9 and summarised using general R functions. It included the evaluation of linkage disequilibrium (LD), expected and observed heterozygosity ( He and Ho ), and the between-population fixation index ( Fst ). The analysis was performed for the whole sample, for breeds, and for populations. Only populations with more than 50 sampled individuals were considered for population-based analysis. The markers in strong LD ( r² > 0.7) were not considered for the population statistics. The comparative analysis of the Kazakh horse genotypes with respect to the well-known foreign horse breeds was performed using data from ( Petersen et al., 2013a ). The dataset was transformed to fit the data format used throughout the present work using general R scripting tools and merged with our data. As the number of individuals representing the foreign breeds was significantly lower than for Kazakh horse breeds, we balanced data volumes by limiting the number of local horses to not exceed 10 per population; these individuals were sampled randomly. The outlier genotypes mentioned above were additionally selected to identify the probable source of the admixture. The merged dataset was filtered to include only SNPs with call frequency ≥0.95. A principal component analysis ( PCA ) was performed using PLINK and visualised using R with the ‘ggplot2’ package ( Wickham, 2016 ). The ADMIXTURE analysis was run for K from 1 to 40, with 10 cross-validation replicates, and visualised using CLUMPAK ( Kopelman et al., 2015 ).
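The population statistics named above can be illustrated with a minimal per-SNP sketch. The heterozygosity formulas are the standard ones; the Fst shown is a didactic Wright/Nei-style estimator, (Ht − mean Hs)/Ht, not the exact method PLINK uses.

```python
def heterozygosity(genos):
    """genos: 0/1/2 allele-dosage codes for one SNP in one population.
    Returns (expected He under Hardy-Weinberg, observed Ho)."""
    n = len(genos)
    p = sum(genos) / (2 * n)             # alternative-allele frequency
    he = 2 * p * (1 - p)                 # expected heterozygosity
    ho = sum(g == 1 for g in genos) / n  # observed heterozygote share
    return he, ho

def fst_two_pops(genos_a, genos_b):
    """Didactic two-population Fst for one SNP: (Ht - mean Hs) / Ht."""
    pa = sum(genos_a) / (2 * len(genos_a))
    pb = sum(genos_b) / (2 * len(genos_b))
    hs = (2 * pa * (1 - pa) + 2 * pb * (1 - pb)) / 2  # mean within-pop He
    pt = (pa + pb) / 2                                # pooled frequency
    ht = 2 * pt * (1 - pt)                            # total He
    return 0.0 if ht == 0 else (ht - hs) / ht
```

Identical allele frequencies in the two groups yield Fst = 0, fixation of opposite alleles yields Fst = 1; the near-zero values reported below thus correspond to nearly identical allele frequencies across breeds.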
The distance matrix was calculated using Manhattan distance in the ‘dartR’ package, and the neighbour-joining tree was constructed using the ‘ape’ package and visualised using the FigTree ( Rambaut, 2018 ) software. Genome-wide association analysis The genome-wide association analysis was performed using PLINK1.9 supplemented with R scripting, including use of the ‘ggplot2’ package to create Manhattan plots. The results of the genetic structure analysis were taken into account for the sample selection for GWAS. The association analysis was conducted for weight and size. Animals below three years of age and the outliers were excluded from the analysis. Pearson’s correlation test was used to ensure the independence of the phenotypic variables from age. The size variable was defined using the measurements of HW, OBL, CC, and CBC. These parameters were normalised by subtracting the mean and dividing by the SD; then, PCA was performed, and the first component was selected as the new size variable. The genome-wide association analysis was performed using a linear regression test with the adaptive Monte-Carlo permutation method of P -value correction for multiple comparisons (PLINK’s ‘--linear perm’ command). SNPs in strong linkage disequilibrium were excluded from the analysis ( r² > 0.7). The markers with the resulting corrected P -value below the threshold of 0.001 were annotated using the Variant Effect Predictor tool (VEP) ( McLaren et al., 2016 ) and the DAVID web server ( Huang et al., 2009; Sherman et al., 2022 ). The horse genome assembly EquCab3.0 (GCA_002863925.1) ( Kalbfleisch et al., 2018 ) was used as a reference for annotation. Results Genotyping and genetic structure analysis A total of 74 116 SNPs for 2 020 samples satisfying the selected primary quality control criteria were obtained. Further filtering based on MAF and HWE left a total of 60 987 markers for further analyses.
The data on size and weight measurements were obtained for 1 876 and 1 883 of the 2 020 samples, respectively. All breeds and populations with 50 or more sampled individuals were analysed using general population statistics, expected and observed heterozygosity, and pairwise Fst between breeds and populations ( Table 1 ). All breeds and populations had very close values of heterozygosity: on average 0.3462 and 0.3432 (SDs of 0.0051 and 0.0043) for expected and observed heterozygosity, respectively. All pairs of breeds and populations demonstrated Fst values not exceeding 0.005, indicating a very low degree of differentiation between the sampled groups. The lowest between-breed value was observed for Kostanay and Adai ( Fst = 0.0003), and the highest value was between Kushum and Adai ( Fst = 0.005). The analysis of the genetic structure within the whole sample of Kazakh horses with the ADMIXTURE algorithm confirmed the lack of differentiation between breeds and populations. Although the true K was not revealed by cross-validation, as the standard cross-validation error did not reach a minimum in the runs for K from 1 to 10, examination of the results allowed us to select K = 2 as the optimal structure ( Fig. 2 , a). The results for K = 2 demonstrated that the sample was generally homogeneous, with the exception of some outlying genotypes. These outliers were taken into account for further analyses ( Table S2 ). The results for K from 3 to 10 highlighted within-population variability and did not add information about genetic structure between populations; however, the separation of the outliers was supported. Data from Petersen et al. (2013a and 2013b) were used to put the Kazakh horse samples into the context of global horse diversity ( Table 2 ). The total merged dataset included 35 419 SNPs for 1 176 animals, including 138 Zhabe, 74 Kostanay, 66 Mugalzhar, 60 Kushum, and 27 Adai horses.
The 10-fold cross-validation test of the ADMIXTURE analysis identified K = 28 as the optimal number of clusters ( Fig. 2 , b). The results demonstrated that Kazakh horses have higher levels of individual variability than the well-established foreign breeds. No clustering patterns to distinguish between Kazakh horse breeds were observed. Across the foreign breeds, the patterns most similar to the Kazakh horses were observed in Tuva and Mongolian horses. The identified outlier specimens showed high similarity to the Thoroughbred breed, indicated by orange across all values of K; however, at the optimal K = 28, the outliers among Kazakh horses displayed a high probability of membership in a new cluster (shown in pale yellow), with only a minor occurrence in Thoroughbred horses. The results of the PCA ( Fig. 3 ) also demonstrated the low level of differentiation between Kazakh breeds. We separately considered the sample with the outlier genotypes ( Fig. 3 b) and without them ( Fig. 3 a). Fig. 3 a demonstrates that all Kazakh horses formed one group; the corresponding ellipses and central points strongly overlapped. In Fig. 3 b, the outlier genotypes were shifted towards Thoroughbred horses (populations from the UK and the USA). The breeds closest to Kazakh horses were Mongolian and Tuva horses, as well as the Caspian breed and the groups of Andalusian (AND, Spain) and South American horses. In the overall structure, Kazakh horses had a central position with respect to the directions of distribution of worldwide breeds. In the neighbour-joining tree, there were also no clear structures corresponding to Kazakh breeds; most samples were combined into a distinct heterogeneous group, including Mongolian and Tuva horses ( Fig. 4 , Fig. S1 ). Unlike in the results of the other methods described above, only a few outlying genotypes diverged from the main group of horses.
Four horses of the Kostanay, Mugalzhar, and Adai breeds were placed close to Thoroughbred horses, and five horses of the Kushum, Mugalzhar, Zhabe, and Naiman breeds were placed between the American breed Morgan and a cluster of South American and Spanish horses. Genome-wide association analysis The genome-wide analysis of the association between SNP markers and horse body size and weight was performed for all animals with available phenotypic data, excluding horses less than three years of age and the outlying genotypes identified by the genetic structure analysis. The total number of selected animals was 1 533. The summary of size and weight measurements for all breeds is shown in Table 3 . Weight was the most variable parameter, with an average value ranging from 377.21 kg for Adai horses to 437.42 kg for the Mugalzhar breed (SDs of 71.32 and 49.12, respectively). The differences between average and median values indicate a slight asymmetry of the data distribution across breeds; however, for the whole sample, the mean and median values are almost the same. To ensure that age did not have an impact as a covariate in the selected sample, all phenotypic variables were tested for Pearson’s correlation against age. The standard body measurements, the height at the withers and the oblique body length, displayed no significant correlation at a P -value threshold of 0.05 ( P -values of 0.8388 and 0.4211, respectively). Only weak correlations were revealed for chest circumference (0.0841, P = 0.000507), cannon bone circumference (0.1011, P = 2.922·10⁻⁵), and weight (0.1121, P = 3.496·10⁻⁶). The four body measurements were combined into a single variable using PCA after normalisation; the first principal component, describing 81% of the total variation, was selected as the new size variable. The association analysis was performed using a linear regression algorithm implemented in the PLINK software, with adaptive correction of P -values based on a Monte-Carlo permutation test.
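The construction of the combined size variable described above (z-score normalisation of the four measurements, then taking the first principal component) can be sketched without external dependencies; the leading eigenvector is obtained here by power iteration, a stand-in for the PCA routine actually used in the study.

```python
import math

def size_variable(rows):
    """rows: list of [HW, OBL, CC, CBC] per animal (hypothetical input layout).
    Returns one PC1 score per animal after z-score normalisation."""
    n, m = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(m)]
    sds = [math.sqrt(sum((r[j] - means[j]) ** 2 for r in rows) / (n - 1))
           for j in range(m)]
    Z = [[(r[j] - means[j]) / sds[j] for j in range(m)] for r in rows]
    # covariance (= correlation, for z-scores) matrix
    C = [[sum(Z[i][a] * Z[i][b] for i in range(n)) / (n - 1)
          for b in range(m)] for a in range(m)]
    # leading eigenvector via power iteration -> first principal axis
    v = [1.0] * m
    for _ in range(200):
        w = [sum(C[a][b] * v[b] for b in range(m)) for a in range(m)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # project each animal's z-scores onto the first axis
    return [sum(z[j] * v[j] for j in range(m)) for z in Z]
```

Because the four measurements are strongly correlated, PC1 loads on all of them with the same sign and behaves as an overall "size" score, which is what makes it usable as a single GWAS phenotype.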
As a result, a total of 81 and 84 SNPs showed significant associations with size and weight, respectively, at the selected significance level of 0.001 ( Fig. 5 ). Of these combined SNP sets, 60 variants were associated with known horse genes using Ensembl’s VEP server and with the corresponding biological processes using DAVID ( Table 4 , Table S3 ). Surprisingly, there was almost no overlap between the two sets of markers associated with the respective traits. Only two variants, BIEC2_117960 and BIEC2-187196, showed significant association with both traits. While the former marker was linked with the gene OR4C269P , which has no available gene ontology annotation, the latter was identified in relation to the gene of ecto-5′-nucleotidase ( NT5E ), which is involved in the metabolism of adenosine phosphates. In general, the identified genes play regulatory or signalling roles in a wide range of processes, from the cellular level to the whole organism. The gene ontology terms for biological processes that we consider the most essential are listed in Table 4 ; the complete annotations produced by DAVID are available in Table S3 . The genes BMP6 , DDR3 , and CREB3L1 are involved in the development and metabolism of connective tissues, including bones. These genes were linked with SNPs significantly associated with weight. The BMP6 gene had three linked SNPs, the highest number across all genes. Several genes, DPF1 , GNAT3 , NEGR1 , etc., have been annotated as involved in the development of the neural system. More specifically, the NEGR1 gene was associated with feeding and locomotory behaviour, and the GNAT3 gene was associated with taste perception. The genes BMP6 , RELA1 , AIM2 , PDE4D , and IGF1R were associated with regulation of the immune system, in addition to other processes. The EIF2AK4 gene was associated with the cellular response to cold stress and protein starvation.
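The per-SNP test behind these results can be illustrated with a minimal sketch: an additive linear regression of phenotype on genotype dosage, with a Monte-Carlo permutation p-value in the spirit of PLINK's `--linear perm` (here with a fixed rather than adaptive number of replicates).

```python
import random

def slope_t(x, y):
    """OLS slope t-statistic for y ~ x (x = 0/1/2 genotype dosage)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    resid = [yi - my - b * (xi - mx) for xi, yi in zip(x, y)]
    se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return b / se if se > 0 else float("inf")

def perm_pvalue(geno, pheno, n_perm=1000, seed=1):
    """Permutation p-value for one SNP: shuffle the phenotype to break the
    genotype-phenotype link and count equally extreme test statistics."""
    rng = random.Random(seed)
    obs = abs(slope_t(geno, pheno))
    y = list(pheno)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y)
        if abs(slope_t(geno, y)) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one rule avoids zero p-values
```

Permutation keeps the genotype column intact, so the null distribution respects the allele frequencies of the SNP being tested; this is the property that makes the correction robust to non-normal phenotypes.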
Discussion This study of Kazakh horses involved three traditional types (Zhabe, Adai, and Naiman) and three relatively recent breeds (Kostanay, Kushum, Mugalzhar), which were derived from the older lines. Except for the Kostanay horses, which are subjected to stable maintenance, control of breeding, and certification of offspring, the other five breeds are reproduced freely during perennial herd pasturing. However, the genetic data did not support this difference in breeding strategies. All six breeds, including Kostanay, form one group without a notable subdivision corresponding to breeds and populations. This observation was supported by the distance-based and ADMIXTURE clustering methods as well as by principal component analysis. Also, pairwise Fst values between breeds and populations were low, indicating almost free gene flow between populations and the effective absence of between-breed boundaries. Hence, we conclude that the studied traditional types and breeds cannot be discriminated from each other on the genetic level, and all Kazakh horses should be considered one ‘breed’. Yet, we question the applicability of the term ‘breed’ to Kazakh horses because of the revealed high level of individual variability. We assume that the Kazakh horses represent a sum of relatively unspecialised genetic lines. In some sense, all Kazakh horses are close to a natural population. Indeed, the traditional Kazakh way of pasturing does not imply strict mating control in herds; gene flow between populations (herds) occurs through trading and exchange of animals by horse owners, and modern transport capabilities reduce the influence of geographical isolation. On the other hand, as the traditional Kazakh economy was self-sufficient with respect to horse production and breeding, the import of foreign horses and its impact on the genetic pool of Kazakh horses were limited.
The close values of expected and observed heterozygosity in breeds and populations of sufficient sizes indicate the absence or low impact of possible selective factors. As a probable consequence, we suppose that the pressure of artificial selection on Kazakh horses has been relatively weak and that no genetic factors strongly dominate their characteristic traits. Also, the absence of strong mating constraints in herds favours sexual selection similar to that in natural populations, rather than selection driven by the breeder. Hence, the genetic structure of Kazakh horses has not been significantly impacted by artificial selection and sex-specific breeding practices. Interestingly, the horses of the Tuva and Mongolian breeds ( Petersen et al., 2013a ) display similar patterns of individual variability and close proximity to Kazakh horses. According to the published results ( Petersen et al., 2013a ), these breeds also had higher levels of expected heterozygosity in comparison to many breeds from around the world. Although the small volume of available data for these breeds limits the possible comparisons, we could speculate that Kazakh, Tuva, and Mongolian horses could be grouped together. Historically, the nomadic peoples of Central Asia, Mongolia, and Siberia had close interconnections and shared some common ways in horse breeding. Thus, we suggest that the Kazakh, Mongolian, and Tuva horses, and possibly some other breeds not considered here, could be parts of a broader defined landrace, which we would designate as “Nomads’ horses”. In the results of the PCA, all these genotypes had a central position with respect to the main directions of the distribution of breeds. In the study of the Chinese populations of Mongolian horses, a similar PCA pattern was previously observed, with the exception of Kazakh horses ( Han et al., 2019 ).
The authors of that study discussed the high homogeneity and lack of genetic structure between the considered populations and hypothesised that the Mongolian horses may represent the closest descendants of the earliest domesticated horse lines. As our results reveal the close proximity of Kazakh horses to the Mongolian and Tuva horses, along with similar genetic patterns, we consider that study to be in line with our assumption about “Nomads’ horses”. The analysis of horse mitochondrial DNA revealed that, to a significant extent, horses domesticated in the Central-Asian region have retained their genetic diversity ( Cieslak et al., 2010 ). We assume that the traditional nomadic ways of horse husbandry played an important role in avoiding a bottleneck effect, as they imply only limited human involvement in selection and reproduction. The analysis of the genetic structure of Kazakh horses revealed the presence of significantly deviating genotypes, mainly in populations of Zhabe, Kushum, and Mugalzhar horses. The ADMIXTURE analysis and PCA helped attribute these genotypes to hybridisation with Thoroughbred horses. Indeed, this breed has been popular among breeders in the country, as well as worldwide, as a reference saddle breed. Thoroughbred stallions have been imported since the beginning of the twentieth century and used in crosses to improve the saddle qualities of Kazakh horses. However, prior to sampling, all animals used in this work were attested as purebred Kazakh horses of the respective breeds and types. This fact raises important questions about the present state of horse breeding control and certification in Kazakhstan. Our study revealed some significant issues. First, the hybridisation of Kazakh horses with other horse breeds, including foreign ones, has been poorly documented, hence the difficulty of tracing back the admixed lineages and identifying them correctly. 
Second, the classification of the types and breeds of Kazakh horses is based on morphological traits and geographical distribution; however, these features do not provide unambiguous boundaries between breeds, and thus their definition relies mostly on the subjective experience of the breeders. The lack of genetic structure across the studied breeds and populations allowed us to combine all horses, except for the hybrid genotypes discussed above, for association analysis. To date, most genome-wide association studies on horses have focused on racing performance ( Littiere et al., 2020; Bailey et al., 2022 ) and health ( Raudsepp et al., 2019 ). Traits related to the food quality of horses have remained out of the focus of horse genomics, as horse products, mainly meat, remain exotic or even marginal in many countries ( Stanciu, 2015 ). Thus, the study of the genetics of such traits is novel, and not only for Kazakhstan. Here, we tested a set of SNP markers for association with the most general parameters related to meat productivity: live weight and body size. Sixty SNPs were found to be associated with either of these two traits and linked with functionally annotated horse genes. The set of identified genes included genes involved in various biological processes as regulatory and signalling factors. Interestingly, almost all significant annotations were related to size or weight independently, despite the obvious correlation between these traits. Among the functionally annotated genes, certain aspects of biological processes potentially related to the traits of interest can be noted. First, the development of connective tissues and the bone system, which are crucial for an animal to support its weight and size. Second, the development of the neural system; the more specific effects of the genes GNAT3 and NEGR1 on the fodder preferences of horses, and so, indirectly, on their growth, could be an interesting topic for future studies. 
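A per-SNP association scan of the kind described above can be sketched as a simple additive linear model. This is a minimal illustration only, not the study's actual pipeline, which would also include covariates and relatedness correction; the simulated data below are hypothetical:

```python
import numpy as np

def snp_association(genotypes, trait):
    """Per-SNP linear-model association scan for a quantitative trait.

    genotypes : (n_individuals, n_snps) array of 0/1/2 allele counts
    trait     : (n_individuals,) phenotype, e.g. live weight

    Returns per-SNP slope and t-statistic from simple linear regression
    under an additive model.
    """
    g = np.asarray(genotypes, float)
    y = np.asarray(trait, float)
    n = len(y)
    gc = g - g.mean(axis=0)                      # centre genotypes
    yc = y - y.mean()                            # centre phenotype
    sxx = (gc ** 2).sum(axis=0)
    beta = gc.T @ yc / sxx                       # per-SNP effect size
    resid = yc[:, None] - gc * beta              # residuals, one column per SNP
    se = np.sqrt((resid ** 2).sum(axis=0) / (n - 2) / sxx)
    return beta, beta / se
```

In a toy simulation where only the first SNP affects the trait, that SNP yields by far the largest t-statistic, while the null SNPs stay near zero.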
Third, the regulation of immune processes, which influence growth by affecting general health. However, it should be kept in mind that gene ontology annotations are based mainly on data from humans and model animals (mouse, rat, etc.); thus, the true physiological roles of the identified genes in horses may vary. Also, the possible associations of variants that remained unannotated require future clarification with updated annotation data for horse genomes. Previous studies ( Signer-Hasler et al., 2012; Makvandi-Nejad et al., 2012; Tozaki et al., 2017 ) have identified the loci LCORL/NCAPG and ZFAT as the main genetic factors affecting horse body size and weight. However, in our study we were not able to reveal significantly associated SNPs linked to these genes. Moreover, the relatively small number of identified markers was distributed over the genome rather than concentrated in regions of strong association. Thus, the revealed markers potentially have only a limited impact on a horse's size and weight. Taking into account the variation in phenotype and the lack of genetic differentiation between the six local breeds, we suggest that the Kazakh horses, or at least the sample considered here, have their size and weight affected more by living conditions than by genetic factors. The reasons for this could be associated with weak selection for these traits, as discussed above. To conclude, the Kazakh horses, with their traditionally defined types and breeds, represent a single landrace, or breed, without a clearly expressed internal genetic structure. The traditional ways of horse breeding and husbandry in Kazakhstan have led to the formation of a relatively unspecialised landrace with genetic properties similar to those of a wild-living population. 
Along with the genetically similar Mongolian and Tuva breeds, the original genetic pool of Kazakh horses can potentially serve as a new source of genetic material for horse breeding, for the development of new breeds, or the improvement of existing lineages, both on the domestic and international levels. Although the initial GWAS for body size and weight has not revealed strongly associated genomic regions, the further in-depth investigations would shed more light on the genetic basis of these and other important traits in Kazakh horses. Supplementary material Supplementary material to this article can be found online at https://doi.org/10.1016/j.animal.2023.100926 . Ethics approval Not applicable; all materials and data were collected without the use of any methods affecting animals’ health. Data and model availability statement None of the data were deposited in an official repository. The data used in this work can be provided upon request. Declaration of Generative AI and AI-assisted technologies in the writing process The authors did not use any artificial intelligence-assisted technologies in the writing process. Author ORCIDs A. Pozharskiy: https://orcid.org/0000-0002-2581-2860 . A. Abdrakhmanova: https://orcid.org/0000-0001-8584-0989 . I. Beishova: https://orcid.org/0000-0001-5293-2190 . A. Shamshidin: https://orcid.org/0000-0001-5457-1720 . A. Nametov: https://orcid.org/0000-0002-8113-1912 . T. Ulyanova: https://orcid.org/0000-0002-4814-2601 . G. Bekova: https://orcid.org/0000-0003-0230-1352 . N. Kikebayev: https://orcid.org/0009-0004-0756-4615 . A. Kovalchuk: https://orcid.org/0000-0002-4106-4954 . V. Ulyanov: https://orcid.org/0000-0002-7500-1601 . A. Turabayev: https://orcid.org/0000-0003-0231-3011 . M. Khusnitdinova: https://orcid.org/0000-0003-0378-9337 . K. Zhambakin: https://orcid.org/0000-0001-5243-145X . Z. Sapakhova: https://orcid.org/0000-0002-8007-5066 . M. Shamekova: https://orcid.org/0000-0002-8746-7484 . D. 
Gritsenko: https://orcid.org/0000-0001-6377-3711 . Author contributions Conceptualisation: D. Gritsenko, M. Shamekova, I. Beishova . Methodology: T. Ulyanova, G. Bekova, N. Kikebayev, A. Kovalchuk . Formal analysis: A. Pozharskiy . Investigation: A. Abdrakhmanova, M. Khusnitdinova, Z. Sapakhova, V. Ulyanov . Resources: M. Shamekova, A. Turabayev, I. Beishova, A. Shamshidin . Data Curation: A. Pozharskiy. Writing – Original Draft: A. Pozharskiy, M. Shamekova . Writing – Review and Editing: D. Gritsenko, M. Shamekova, K. Zhambakin, A. Nametov, A. Shamshidin . Visualisation: A. Pozharskiy . Supervision: D. Gritsenko, I. Beishova, K. Zhambakin, A. Nametov, A. Shamshidin . Project administration: D. Gritsenko, I. Beishova . Declaration of interest None. Acknowledgements The preprint of this article is available in the BioRxiv repository ( https://www.biorxiv.org ). DOI: https://doi.org/10.1101/2023.03.29.534422 . Financial support statement The study was funded within the framework of the research project AP14870614 «Genetic marking of productive traits of the Kazakh horse of the Dzhabe type based on genome-wide coverage SNP genotyping» (the Ministry of Science and Higher Education of the Republic of Kazakhstan ) and the targeted funding program BR10764999 “Development of technologies for effective management of the breeding process and preservation of the gene pool in horse breeding” (the Ministry of Agriculture of the Republic of Kazakhstan ). Appendix A Supplementary material The following are the Supplementary material to this article: Supplementary Fig. S1 Supplementary Table S1 Supplementary Table S2 Supplementary Table S3
|
[
"ALEXANDER",
"BAILEY",
"BARMINTSEV",
"BROWNING",
"CHANG",
"CHANG",
"CIESLAK",
"DYUSSEGALIYEV",
"FRACHETTI",
"GAUNITZ",
"HAN",
"HUANG",
"ISKHAN",
"JASTRZEBSKA",
"KALBFLEISCH",
"KOPELMAN",
"KYSELY",
"LEVINE",
"LITTIERE",
"MAKVANDINEJAD",
"MCCUE",
"MCLAREN",
"NECHAYEV",
"OUTRAM",
"OUTRAM",
"PETERSEN",
"PETERSEN",
"POZHARSKIY",
"PURCELL",
"RCORETEAM",
"RAUDSEPP",
"SARBASSOVA",
"SHERMAN",
"SIGNERHASLER",
"STANCIU",
"TAYLOR",
"TOZAKI",
"WICKHAM",
"ZHUMADILLAYEV"
] |
a207b43c6c2743fdaf5fd29726bbd06b_CARACTERIZAÇÃO GENÔMICA DE LINHAGENS DO SARS-COV-2 CIRCULANTES NA REGIÃO DO RECÔNCAVO DA BAHIA BRASI_10.1016_j.bjid.2023.102902.xml
|
GENOMIC CHARACTERIZATION OF SARS-COV-2 LINEAGES CIRCULATING IN THE RECÔNCAVO DA BAHIA REGION, BRAZIL, IN 2022
|
[
"Reis, Jeiza Botelho Leal",
"Klein, Sibele de Oliveira Tozetto",
"da Silva, Isabella de Matos Mendes",
"Vitória, Rebeca da Luz",
"Vicentini, Fernando",
"Nihei, Jorge Sadao",
"de Souza, Flaviane Santos",
"Filho, Hermes Pedreira da Silva"
] |
Introduction
Alterations in a viral genome, such as that of SARS-CoV-2, can trigger the generation of different viral variants. Such variants may, for example, show altered infectivity and result in different spectra of disease outcomes, ranging from mild to severe, including death.
Objective
To genetically characterize the SARS-CoV-2 lineages circulating in the Recôncavo da Bahia region in 2022.
Methods
Nasopharyngeal samples from people with flu-like symptoms were collected, and the COVID-19 diagnosis was confirmed by RT-qPCR. Thirty-two samples were sequenced. The inclusion criterion for sequencing was positive samples with a cycle threshold (Ct) below 30. Libraries were prepared using the COVIDSeq Test (Illumina, Cat. no. 20043675 and 20043137) with the ARTIC V4 primer set. Paired-end sequencing was performed on an Illumina MiSeq (Illumina, Cat. no. SY-410-1003) with a read length of 150 bp. The FASTQ files were submitted to the pipeline with minor modifications. Assembly was performed with the Burrows-Wheeler Aligner (BWA) v.0.7.17 using NCBI GenBank accession number MN908947.3 as the genome reference.
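The reference-based mapping step described in the methods can be sketched as command construction; a minimal illustration only, with hypothetical file names and thread count (the study's exact pipeline settings are not specified here):

```python
def reference_mapping_commands(sample, ref="MN908947.3.fasta", threads=4):
    """Build a bwa mem | samtools sort command line for one sample.

    Returns the command string only (nothing is executed here).
    `sample` names the paired-end FASTQ files, e.g. S01_R1.fastq.gz.
    """
    r1, r2 = f"{sample}_R1.fastq.gz", f"{sample}_R2.fastq.gz"
    align = f"bwa mem -t {threads} {ref} {r1} {r2}"     # map reads to reference
    sort_ = f"samtools sort -o {sample}.sorted.bam -"   # sort the piped SAM
    return f"{align} | {sort_}"
```

The sorted BAM would then feed downstream consensus calling and lineage assignment.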
Results
All lineages observed were derived from the VOC Omicron GRA (B.1.1.529+BA.*). Of the 32 viral RNA samples sequenced, 21 were from women and 11 from men. In the samples of this study, 17 lineages were observed, with the following distribution: in February, BA.1 (33.3%; 1/3), BA.1.1 (33.3%; 1/3) and BA.1.5 (33.3%; 1/3); in May, BA.2 (100.0%; 2/2); in June, BA.2 (14.3%; 1/7), BA.4 (28.6%; 2/7), BA.4.1 (28.5%; 2/7), BA.5.1 (14.3%; 1/7) and BA.5.2.1 (14.3%; 1/7); in November, XBB.3 (7.7%; 1/13), BQ.1.1 (30.8%; 4/13), BQ.1.1.16 (7.7%; 1/13), BQ.1.1.28 (23.0%; 2/13), BQ.1.1.31 (7.7%; 1/13), BQ.1.2 (7.7%; 1/13), BQ.1.23 (7.7%; 1/13) and BE.10 (7.7%; 1/13); and in December, BQ.1.1 (57.1%; 4/7), BQ.1.23 (28.6%; 2/7) and DL.1 (14.3%; 1/7). The greatest genomic variability was observed in June and November 2022, coinciding with increased movement of people due to the June festivities and the electoral period.
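The per-month lineage percentages reported above can be reproduced with a small tally; a minimal sketch assuming (month, lineage) pairs as input, e.g. from Pangolin-style lineage assignments:

```python
from collections import Counter

def lineage_distribution(records):
    """Per-month lineage distribution as {month: {lineage: (pct, 'n/total')}}.

    records : iterable of (month, lineage) pairs. Percentages are rounded
    to one decimal place, matching the reporting style used here.
    """
    by_month = {}
    for month, lineage in records:
        by_month.setdefault(month, []).append(lineage)
    out = {}
    for month, lineages in sorted(by_month.items()):
        total = len(lineages)
        out[month] = {lin: (round(100 * n / total, 1), f"{n}/{total}")
                      for lin, n in Counter(lineages).items()}
    return out
```

For example, three February samples each carrying a distinct lineage yield (33.3, "1/3") for each.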
Conclusion
This study demonstrates the wide variety of viral lineages circulating in the Recôncavo da Bahia during 2022. It highlights the importance of COVID-19 monitoring and surveillance, since the spread of the virus can trigger the emergence of new variants, which may lead to worsening of the disease.
| null |
[] |
24e0fde59ca84a43bcb1c75b6ffc68a7_Malaria medicines and miles A novel approach to measuring access to treatment from a household persp_10.1016_j.ssmph.2019.100376.xml
|
Malaria, medicines and miles: A novel approach to measuring access to treatment from a household perspective
|
[
"Palafox, Benjamin",
"Goodman, Catherine",
"Hanson, Kara"
] |
Nearly a decade after the adoption of confirmed diagnosis and artemisinin combination therapy (ACT) for the treatment of uncomplicated falciparum malaria, a large treatment gap persists. We describe a novel approach of combining data from households and the universe of treatment sources in their vicinities to produce nationally representative indicators of physical and financial access to malaria care from the household’s perspective in Benin, Nigeria, Uganda and Zambia. We compare differences in access across urban and rural areas, countries, and over time.
In 2009, more urban households had a provider stocking ACT within 5 km than rural households. By 2012, this physical ACT access gap had largely been closed in Uganda, and progress had been made in Benin and Nigeria; but the gap persisted in Zambia. The private sector helped to fill this gap in rural areas. Improvements in Nigeria and Uganda were driven largely by increased ACT availability in licensed drug stores, and in Benin by increased availability in unregulated open-air market stalls. Free or subsidised ACT from public and non-profit facilities continued to be available to many households by 2012, but much less so in rural areas. Where private sector expansion increased physical access to ACT, these additional options were on average more expensive. Also by 2012, the majority of urban households in all four countries had access to a provider nearby offering malaria diagnostic services; however, this access remained low for rural households in Benin, Nigeria and Zambia.
The methods developed in this study could improve how access to healthcare is measured in low- and middle-income country settings, particularly where private for-profit providers are an important source of care, and for conditions that may be treated by informal providers. The method could also lead to better explanations of the performance of complex interventions aiming to improve healthcare access.
|
1 Introduction In 2016, an estimated 216 million cases of malaria worldwide led to more than 445,000 deaths, mostly among children in sub-Saharan Africa ( WHO Global Malaria Programme, 2017 ). Effective management for uncomplicated cases of Plasmodium falciparum malaria, the species causing the majority of fatal infections, requires confirmed diagnosis and, if positive, treatment with artemisinin combination therapy (ACT), ideally prescribed and dispensed from a qualified provider. However, access to appropriate diagnosis and treatment is still inadequate, resulting in a large treatment gap where many cases are managed sub-optimally or even go untreated. To illustrate, although nearly a decade has passed since the World Health Organization (WHO) updated its guidelines for treating uncomplicated falciparum malaria to recommend confirmed diagnosis and ACT, it is estimated that among febrile children in sub-Saharan Africa for whom care was sought, only 30% received a diagnostic test either by microscopy or rapid diagnostic test (RDT) in 2014-16 ( WHO Global Malaria Programme, 2017 ). Moreover, many of those with malaria do not receive ACT. For example, studies in Tanzania found that among cases positive for malaria by reference blood slide, just over half (50.2%) received ACT in government facilities and less than a third of those who sought care from private drug stores ( Briggs et al., 2014; Bruxvoort et al., 2013 ). Many more were not brought for care or received older non-artemisinin therapies (nATs), such as sulphadoxine-pyrimethamine and chloroquine. Widespread parasite resistance has rendered nATs less effective in many endemic regions of sub-Saharan Africa ( Okell, Griffin, & Roper, 2017; Takala-Harrison & Laufer, 2015 ). Poor coverage of effective treatment persists despite considerable investment by Ministries of Health and their partners in a range of strategies to address the issue. 
These interventions typically aim to reduce critical access barriers in various ways. For example, efforts have been made to increase ACT and RDT availability at government facilities, dispense them free of charge from public sector outlets, train community health workers and private retailers (i.e. pharmacies and drug stores) to conduct diagnostic testing and dispense appropriate treatment, curtail the retailing of non-ACT, and lower private sector ACT prices through subsidies ( Global Fund, 2016; Kabaghe et al., 2016; Rao, Schellenberg, & Ghani, 2013; Visser et al., 2017 ). The most notable of these subsidy programmes was the Affordable Medicines Facility–malaria (AMFm), which was piloted in several endemic countries, including Nigeria and Uganda, from 2010–12 ( Tougher et al., 2012 ). The AMFm was a multi-national subsidy programme for ACTs implemented at a national scale in 7 African countries, funded by the Global Fund to Fight HIV, TB and Malaria. It aimed to increase the appropriate use of quality-assured ACTs and decrease the use of other antimalarials, through a combination of ACT subsidies and supporting interventions such as recommended retail prices and communications campaigns. However, despite all these efforts, the size of the treatment gap indicates that much more still needs to be done to improve malaria case management. Within the international public health community, it is now widely acknowledged that access to health care is a multi-dimensional concept based on the interaction or ‘degree of fit’ or ‘alignment’ between health care systems and individual, household, and community needs, which may either empower or hinder an individual’s use of appropriate health care ( Lévesque, Harris, & Russell, 2013; McIntyre, Thiede, & Birch, 2009 ). 
One such definition developed by McIntyre and colleagues categorises the many factors that determine access into three dimensions: availability or physical access, affordability or financial access, and acceptability or cultural access ( McIntyre et al., 2009 ). In addition, it has been argued that efforts to understand health care use must account for the broader range of treatment options that an individual might engage with, beyond official medical sources ( MacKian, Bedri, & Lovel, 2004 ). This is of particular relevance for malaria treatment, which is known to involve a diverse array of providers ranging from public, private and faith-based health care facilities, to retail pharmacies, drug stores, and general retailers ( Littrell et al., 2011 ). Efforts to define health care access have tended to focus on its conceptualisation, without much attention to the application of the concept or its measurement ( McIntyre et al., 2009 ). Indeed, this is evident in many of the common indicators used to measure access. For example, access indicators derived from household data such as Demographic and Health Surveys (DHS), Multiple Indicator Cluster Surveys and Malaria Indicator Surveys ( ICF International, 2015; Roll Back Malaria Partnership, 2013; UNICEF, 2015 ), may provide information on distance or travel time to the chosen health care provider, the type of treatment obtained and its price. However, these indicators do not give information on the broader range of provider options available to that individual or to individuals not seeking treatment, or on the range and price of the alternative treatments that providers offer. In particular, these data do not reveal whether a household has at least one provider in their area with the required health products and services. In the case of malaria, such information is critical to understand the choice not to seek care or to obtain sub-optimal treatment. 
On the other hand, health care facility or provider surveys and administrative datasets may be able to provide comprehensive descriptions of the supply side. For example, the Service Provision Assessment surveys offered by the DHS collect data on the specific health services offered by facility type and whether these facilities have the necessary infrastructure, resources and support systems available ( ICF International, 2015 ). However, such average data on provider readiness cannot reveal what a given household’s access to these services actually is, especially as the better performing facilities may be clustered geographically, and such assessments rarely include all provider types. Such indicators also cannot account for how well accessible health care products and services align with actual need. Therefore, measures of access that combine both household and provider data to characterise all the treatment options available to an index household could substantially improve our understanding of health care access. This paper describes a novel method to develop such measures by combining supply- and demand-side survey data from the ACTwatch project to produce nationally representative indicators of access to care for malaria. We demonstrate the utility of this approach by estimating a select range of physical and financial access indicators that characterise the malaria treatment options available to households with a febrile child in Benin, Nigeria, Uganda and Zambia, and use these to describe how access has changed over time. 2 Methods 2.1 Data and source The ACTwatch project was designed to generate nationally representative information on antimalarial markets through linked cross-sectional surveys of households, treatment sources and private sector distribution chains in selected endemic countries ( Shewchuk et al., 2011 ). 
Participating countries were chosen to represent a diverse range of contexts considering variation in malaria burden, the nature of pharmaceutical regulation (e.g. high vs. low regulatory capacity; francophone vs. anglophone settings), public sector coverage, and domestic antimalarial manufacturing capacity. This study uses household and treatment source data from two survey rounds conducted in Benin, Nigeria, Uganda and Zambia. We selected two countries which had received the AMFm antimalarial subsidy (Nigeria and Uganda) and two which had not (Benin and Zambia). The first round (baseline) was conducted in 2009-10 (pre-AMFm), and the second round (endline) in 2011-12 (during AMFm). In each country and during each round, household and treatment source surveys were conducted contemporaneously using a common multi-staged clustered sampling design. Briefly, national samples of households and treatment sources were drawn from the same primary sampling units (PSUs), within which every treatment source that had recently stocked an antimalarial (i.e. within the preceding three months) was eligible for inclusion. Treatment sources were identified using a census approach and included public and not-for-profit health facilities, private (for-profit) health facilities, retail pharmacies, drug stores (apart from in Benin; also known as Patent Proprietary Medicine Vendors, or PPMVs, in Nigeria), and general retailers, such as grocery stores, kiosks and market stalls ( O’Connell et al., 2011 ). As public health facilities and pharmacies are important, but relatively uncommon sources of antimalarials, these provider types were over-sampled by including such providers in the larger administrative area from which a given PSU was selected. For example, if the PSU was defined as the sub-district, all public health facilities and pharmacies in the whole district within which the sub-district was located were sampled. 
Households containing a recently febrile child were randomly selected from three secondary sampling units drawn within each PSU surveyed. The household surveys collected demand-side information on the treatment choices made for febrile children, and on personal and household characteristics, while the treatment source surveys collected supply-side information on the availability and price of all antimalarials and all malaria diagnostics, and other provider characteristics related to staffing, storage conditions, knowledge, etc. Geographic coordinates were also collected from all surveyed households and treatment sources. Table 1 provides details of the household and treatment source samples in each country (with breakdown by rural/urban strata in Appendix Tables A and B ); sampling and survey procedures are described elsewhere ( Littrell et al., 2011; O’Connell et al., 2011; Shewchuk et al., 2011 ). 2.2 Producing the access dataset and indicators In each country, we merged household and treatment source data for each survey round to create the access datasets. For each surveyed household with a recently febrile child, a treatment choice set was defined by forming pairwise links between each household and all treatment sources surveyed that lay within a 5 km radius, reflecting likely willingness to travel for antimalarial providers, as noted in the literature ( Noor et al., 2006; Noor, Zurovac, Hay, Ochola, & Snow, 2003; Toda et al., 2012 ). Straight-line distances between households and paired treatment sources were calculated using the geodist package in Stata 13 ( StataCorp, 2013 ), and sources located more than 5 km away were removed from that household’s treatment choice set. A 5 km radius was chosen to approximate a one-hour walking distance, which has been used previously to denote reasonable geographic access in developing country contexts ( Skiles, Burgert, Curtis, & Spencer, 2013; Smith, Solanki, & Kimmie, 1999; Tanser, 2006 ). 
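The construction of treatment choice sets described above (straight-line distances with a 5 km cut-off; the study itself used Stata's geodist package) can be sketched in Python. The coordinates and source identifiers in the example are hypothetical:

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle (straight-line over the sphere) distance in km."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def choice_set(household, sources, radius_km=5.0):
    """Treatment sources within `radius_km` of a household.

    household : (lat, lon); sources : list of (source_id, lat, lon).
    Sources farther than the radius are dropped from the choice set.
    """
    return [sid for sid, lat, lon in sources
            if haversine_km(household[0], household[1], lat, lon) <= radius_km]
```

A source 0.04 degrees of longitude from an equatorial household (about 4.4 km) is retained, while one 0.1 degrees of latitude away (about 11 km) is excluded.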
Defining treatment choice sets in this way potentially introduces a measurement bias when surveyed households are located close to the border of the PSU. This would result in underestimation of access because treatment options close to the household, but outside the PSU border, would be excluded from the choice set. Such bias is less for public health facilities and pharmacies as these were oversampled from a larger area surrounding the PSU; however, statistical comparisons between these and other provider types are not appropriate given the difference in sampling approaches. Using the merged household and treatment source data, we characterise health care access from the perspective of households. In this paper, we present the following nationally representative indicators by urban and rural location, and by survey round, for an access area defined by a 5 km radius around households:
- % households with access to any treatment source stocking any antimalarial;
- % households with access to any treatment source stocking ACT;
- % households with access to ACT by type of treatment source;
- % households with access to any treatment source offering malaria diagnostic services;
- % households with access to any treatment source staffed by a qualified health professional (i.e. medical doctor, nurse, midwife, pharmacist);
- % households with access to any treatment source with ACT, offering diagnostic services and staffed by a qualified health professional;
- median number of treatment sources stocking ACT; and
- median price for ACT, nAT and oral artemisinin monotherapy (AMT) tablets.
2.3 Statistical analysis Indicator estimates were adjusted using inverse-probability sampling weights to account for differences in the household probability of being selected in those countries where samples were stratified. Standard error estimation accounted for clustering within primary and secondary sampling units. 
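The inverse-probability weighting described here can be sketched as a weighted proportion with an approximate confidence interval. This is an illustration only: the Kish effective-sample-size approximation below stands in for the survey-design-based, cluster-adjusted standard errors actually used, and the data are hypothetical:

```python
import math

def weighted_proportion(values, weights, z=1.96):
    """Sampling-weight-adjusted proportion with a normal-approximation CI.

    values  : 0/1 household indicators (e.g. 'has an ACT source within 5 km')
    weights : inverse-probability sampling weights
    """
    wsum = sum(weights)
    p = sum(v * w for v, w in zip(values, weights)) / wsum
    # Effective sample size under weighting (Kish approximation).
    n_eff = wsum ** 2 / sum(w * w for w in weights)
    se = math.sqrt(p * (1 - p) / n_eff)
    return p, (max(0.0, p - z * se), min(1.0, p + z * se))
```

With equal weights the estimate reduces to the simple proportion; unequal weights shift the estimate toward the more heavily weighted households.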
Thus, estimates are conservative and reduce the likelihood of incorrectly rejecting the null hypothesis. Percentage-based estimates are presented with 95% confidence intervals (CI), and differences in percentages over time are tested against the null hypothesis of no change. Prices for ACT tablets, the most common dosage form used for treatment, are presented in adult equivalent treatment doses (AETDs), a standardised unit that allows for comparison of products with different treatment regimens. Prices were adjusted to a 2010 base using the World Bank annual consumer price index values, and converted to US dollars (USD) using the average weekly exchange rate in 2010 ( O’Connell et al., 2011; Tougher et al., 2012 ). Price-based indicators are estimated as medians with interquartile range (IQR), and differences over time are examined using the Wilcoxon rank-sum test. Analyses were conducted in Stata 13 and R version 3.0.2 ( StataCorp, 2013; The R Foundation for Statistical Computing, 2013 ). 3 Results 3.1 Physical access to ACT and other antimalarials Differences in physical access to ACTs and other antimalarials were observed across urban and rural areas and over time. In urban areas, ACTs were accessible to the majority of households. In all four countries both at baseline and endline, more than 85% of urban households had access to at least one source of ACTs within a 5 km radius ( Fig. 1 ). Physical access to ACTs in rural areas was lower than in urban areas; however, in Benin, Nigeria and Uganda, household access to ACTs in rural areas improved over time. For example, the percentage of rural households in Benin with access to at least one source of ACTs within 5 km increased from 51% to 76% (p-value: 0.023) between surveys. 
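The price standardisation described in the statistical analysis (price per adult equivalent treatment dose, CPI-adjusted to 2010 and converted to USD) can be illustrated with a small helper; all numbers in the example are hypothetical, not ACTwatch figures:

```python
def price_per_aetd_usd(price_local, tablets_per_package, tablets_per_aetd,
                       cpi_survey_year, cpi_base_year, lcu_per_usd_base):
    """Standardise an antimalarial price to base-year USD per adult
    equivalent treatment dose (AETD).

    price_local        : package price in local currency units (LCU)
    tablets_per_aetd   : tablets making up one adult treatment course
    cpi_*              : consumer price index in survey and base years
    lcu_per_usd_base   : exchange rate in the base year
    """
    price_per_tablet = price_local / tablets_per_package
    price_aetd = price_per_tablet * tablets_per_aetd
    # Deflate/inflate to the base year, then convert to USD.
    price_base_lcu = price_aetd * (cpi_base_year / cpi_survey_year)
    return price_base_lcu / lcu_per_usd_base
```

For example, a 6-tablet package at 600 LCU, with 24 tablets per adult course, a CPI of 110 against a base of 100, and 150 LCU per USD gives roughly 14.5 USD per AETD.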
In Uganda, the urban-rural ACT access gap had largely been eliminated by 2012 where over 95% of households in both urban and rural areas had access to at least one source of ACT within 5 km, whereas in Zambia, household access did not improve in rural areas between baseline and endline. In addition, no notable changes in physical access to any antimalarial were observed over time in any of the study countries. As such, the significant improvements in ACT access noted in rural areas of Benin, Nigeria and Uganda may indicate that over time existing sources previously stocking only non-ACTs have begun to stock ACT as well. Our analysis was based on repeated cross-sectional surveys rather than panel data (i.e. different PSUs were selected in each round). While each round was designed to be nationally representative, it is possible that the selection of outliers in terms of PSU characteristics in one round could have influenced estimates of change over time. Inspection of sample characteristics disaggregated by urban/rural strata ( Appendix Tables A and B ) shows that there were no substantial differences in average outlet numbers per PSU in all cases except Uganda, where at endline the urban PSUs selected had a higher average number of antimalarial outlets than those selected at baseline ( Appendix Tables A and B ). However, this does not affect the patterns described above as the only notable improvements in access in Uganda were seen in rural areas. 3.2 Physical access to ACT by treatment source Examining changes in the composition of treatment sources within the vicinity of households provides further information on factors driving the improvements in physical access to ACTs described above ( Fig. 2 ). During both survey rounds, many urban households had access to a variety of ACT sources within a 5 km radius, including public and not-for-profit facilities, private for-profit facilities and retail pharmacies ( Fig. 2 a). 
Nearly all urban households in Nigeria had access to ACTs via drug stores, which were also common options for households in Uganda and Zambia. In contrast, households in rural areas had fewer options to access ACTs ( Fig. 2 b). Public and not-for-profit health facilities provided access to ACTs to over half of rural households only in Benin and Uganda, but were still the most common option in rural Zambia. Drug stores were the dominant source of ACT accessible to rural households in Nigeria and for a considerable proportion of households in Uganda. Over time, there is some evidence that the proportion of urban households with access to ACTs increased through private for-profit facilities in Uganda (77% to 96%, p-value: 0.011); drug stores in Nigeria (89% to 96%, p-value: 0.058); and unlicensed general retailers in Benin (32% to 83%, p-value: 0.014), Uganda (0% to 16%, p-value: 0.064) and Zambia (0% to 36%, p-value: 0.001). In the four countries, there were no significant increases in household access to ACTs through public or not-for-profit providers observed in urban areas; however, in rural areas of Uganda, the proportion of households with access to ACT through these providers increased from 56% to 85% (p-value: <0.001). There is also evidence that rural household access improved through private health facilities in Uganda (15% to 52%, p-value: <0.001); drug stores in Nigeria (46% to 70%, p-value: 0.020), Uganda (24% to 92%, p-value: <0.001) and Zambia (0% to 12%, p-value: 0.029); and unlicensed general retailers in Benin (11% to 46%, p-value: 0.015) and Nigeria (1% to 8%, p-value: 0.094). In Table 2 , we present figures for the median number of treatment sources stocking ACT within 5 km of households over time. In Uganda, the number of ACT providers that a typical household could choose from increased eightfold between 2009-10 and 2011-12, from a median of 10 to 80 providers for urban households, and from a median of 1 to 8 providers for rural households. 
More modest increases were observed for rural households in Benin and Nigeria, where the median rose from 1 in 2009-10 to 2 by 2011-12. There were no notable changes in rural Zambia. These households had access to a median of 0 treatment sources stocking ACT within 5 km in both survey rounds. 3.3 Physical access to malaria diagnostic services and qualified health professionals Urban households had ready access to health professionals and malaria diagnostic services ( Fig. 3 a). More than 90% of these households in all four study countries had at least one antimalarial source staffed by a qualified health professional within 5 km, and similar proportions of urban households also had at least one antimalarial source that offered malaria diagnostic services within 5 km in all countries except Nigeria. In rural areas, households in Uganda had a comparable level of access to health professionals as in urban areas, but access to health professionals was lower in rural areas in the remaining countries ( Fig. 3 b). Household access to diagnostic services in rural areas was poor at just over 50% of households in Benin and over 40% in Zambia. Rural access to diagnostics in Nigeria increased from 12% to 24% of households between surveys (though this change was not statistically significant); and increased from 52% to 89% of rural households in Uganda (p-value: <0.001). Fig. 3 also illustrates changes in household access within 5 km to any treatment sources stocking both ACT and malaria diagnostics, and staffed by a qualified health professional - the three core elements of malaria case management. More than half of urban households in all four countries had access to at least one of these providers in 2009-10 and 2011-12, and more than 90% of urban households in Uganda and Zambia ( Fig. 3 a). This access was much lower for rural households ( Fig. 3 b). 
Less than a quarter of rural households in Benin and Nigeria, and approximately 40% in rural Zambia had access to any treatment source stocking both ACT and malaria diagnostics, and staffed by a qualified health professional in 2009-10 and 2011-12. Over time, improved access to these providers was observed only in rural Uganda, where the proportion of households with such access increased from 38% in 2009-10 to 86% by 2011-12 (p<0.001). 3.4 Price of antimalarials accessible to households Because public and not-for-profit health facilities typically dispense ACT free of charge or at heavily subsidised prices (as in Benin), low-cost ACT was accessible to a large majority of urban households in the study countries with access via these treatment sources ( Fig. 2 a). In contrast, fewer rural households had ready access to affordable ACTs via public and not-for-profit health facilities ( Fig. 2 b), which could result in delayed treatment or seeking care from more expensive private for-profit providers. Fig. 4 illustrates the median price of ACT (per AETD, 2010 USD) available from private for-profit providers only among those households with access to them by country, urban-rural location and over time. At baseline, the median price of private sector ACT tablets accessible to urban households ranged from 4.94 USD per AETD in Nigeria to 8.33 USD in Zambia, and for rural households from 3.01 USD in Nigeria to 8.44 USD in Benin. In all countries except Benin, prices for ACT tended to be higher in urban than in rural areas. Over time, the median prices of ACT accessible to households decreased in all areas of the study countries. However, these changes were statistically significant only for urban and rural households in Nigeria (urban: from 4.94 to 2.16 USD, p-value: <0.001; rural: from 3.01 to 1.71 USD, p-value: <0.001), rural households in Benin (from 8.44 to 2.55 USD, p-value: <0.001), and urban households in Zambia (from 8.33 to 6.75 USD, p-value: 0.001). 
No significant changes were observed in Uganda. Median prices for private sector AMT and nAT, and additional details for ACT tablets, are presented in Appendix Table C . 4 Discussion 4.1 Summary of findings and policy implications By combining data from households with information on the complete range of treatment sources in their vicinities, we have produced a variety of nationally representative indicators that describe malaria treatment access from the household perspective, and how this has changed over time in dynamic, pluralistic health care markets. We characterise two out of the three dimensions of access (i.e. physical and financial), but do not address the acceptability dimension. Our findings show that by 2011-12, although the urban-rural gap in physical ACT access persisted in Zambia, progress had been made in Benin and Nigeria, and this gap had largely closed in Uganda. The results also demonstrate that the private sector helped reduce this gap in underserved areas. This was seen in Nigeria and Uganda where increased household access was driven largely by increased ACT availability in licensed drug stores, which was in large part due to these countries’ participation in the AMFm ACT subsidy programme that explicitly aimed to increase ACT availability and affordability among private retailers ( Tougher et al., 2012 ). Although Benin did not participate in AMFm, previous research has shown that Benin’s private retail sector is heavily dependent on Nigeria for antimalarial supplies ( Palafox et al., 2014 ), so that the impact of AMFm was indirectly felt in Benin as well. After the official end of AMFm, ACT subsidies were maintained in Nigeria and Uganda under the Private Sector Co-payment Mechanism, which sustained the improvements in access achieved during the AMFm period ( ACTwatch Group, Tougher, Hanson, & Goodman, 2017 ). However, increasing physical access through private for-profit providers carries potential consequences for the affordability of care. 
Although we do not measure ability to pay, our data on price are an important component of financial access. To illustrate, access to ACT for rural households in Uganda in 2009-10 was predominantly via public and not-for-profit health facilities where treatment should be dispensed free of charge. The increases in physical access seen by 2011-12 were driven largely through more private for-profit health facilities and drug stores stocking ACT. Although free treatment from public and not-for-profit facilities was still available to rural households in Uganda in 2011-12, individuals may choose to purchase more expensive ACT from private providers because they may be more convenient in terms of proximity, waiting times and opening hours. However, the observed decreases in private sector ACT prices demonstrate the role that interventions, like the AMFm ACT subsidy programme, continue to play in ensuring wide access to more affordable ACTs ( ACTwatch Group, Tougher, et al., 2017 ). Nonetheless, improving physical access to affordable care in public facilities must remain as a core objective of equitable health system strengthening in these settings as even subsidised ACT will still be out of reach for many. Results from Benin and rural Zambia also illustrate some less desirable effects of broader private sector involvement, where rising ACT availability among general retailers contributed to the observed increase in household access. In Benin, previous studies have shown that unlicensed and unregulated open-air market stalls selling antimalarials dominate this class of provider ( O’Connell et al., 2011 ), among whom it would be difficult to ensure the quality of medicines and case management. On the other hand, it could be argued that it is unrealistic to expect most people to seek care through Benin’s mainly urban private facilities and pharmacies, and therefore, steps should be taken to increase the quality of alternative treatment sources. 
Such measures could include introducing a category of regulated medicine dispenser akin to drug stores in other countries, support for market stall vendors who wish to upgrade, and training on malaria case management for drug store operators ( Goodman et al., 2007; Rao et al., 2013 ). While our findings show that the ACT access gap is closing, they also illustrate that, apart from in Uganda, much more could be done to improve rural access to the other core elements of appropriate malaria case management - diagnostics and qualified health professionals. By 2011-12, among rural households in Benin, Nigeria and Zambia, less than 70% had access to a treatment source within 5 km staffed by a qualified health professional, and less than half had access to malaria diagnostics. In rural areas, the proportion of households with access to an outlet within 5 km offering all three core elements of malaria case management was 40% in Zambia, 27% in Benin and only 16% in Nigeria. Public and private health facilities continue to be the dominant providers of diagnostic services in endemic countries ( ACTwatch Group, Hanson, & Goodman, 2017 ). Given the positive impact that engagement with private providers has had on improving access to ACT, careful consideration needs to be given to the role private retailers should play in ensuring that malaria diagnostics, and rapid diagnostic tests in particular, cover the last mile ( Visser et al., 2017 ). 4.2 Methodological insights into measuring health care access We believe that the methods presented here produce indicators of access that improve upon those previously used, and that they have many useful applications. First, we have demonstrated how examining treatment availability and prices charged from the perspective of households provides more intuitive and meaningful descriptions of access. As described in the introduction, typical access indicators are often limited to demand- or supply-side descriptions, typically averaged over broad strata (e.g. 
urban/rural), while our indicators provide direct measures of the services obtainable within a reasonable distance from households, thus providing evidence on ‘last mile’ coverage ( Lévesque et al., 2013; McIntyre et al., 2009 ). A further advantage of this approach is that access to health services can be aligned with actual need . To do this for our malaria case study, we use information from the ACTwatch household survey on reported fever for a child under the age of 5 years, which other open access resources, such as maps of populations or structures, do not provide. While reported fever in the study countries was widely distributed, this aspect of the approach could be more important for other conditions that are more geographically clustered, such as HIV and tuberculosis. When such fine-grained descriptions of access are examined over time, these indicators also provide much clearer understanding of whether household access has improved, and the changes driving those improvements. When considered alongside the acceptability dimension of access, this can indicate how such changes may impact treatment utilisation and ultimately, health outcomes. In applying these methods to malaria treatment, we have probed such access changes in Benin, Nigeria and Uganda, and the lack thereof in Zambia. These indicators could then be used as explanatory variables in analysing changes in treatment seeking behaviour. Given this explanatory potential, such indicators could be used to better evaluate the impact of interventions designed to improve access to care. These particular strengths of our method rely not only on the ability to combine contemporaneous demand- and supply-side data from the same locations, but also on the comprehensiveness of the supply-side data. A number of previous analyses have linked Service Provision Assessment data with DHS household data in novel and interesting ways ( Akin, Guilkey, Hutchinson, & McIntosh, 1998; Skiles et al., 2013 ). 
However, Service Provision Assessments focus on public and private facilities only and, therefore, do not provide information on the availability of other treatment options, particularly in the retail sector which is such an important source of malaria treatment. By contrast, the ACTwatch treatment source surveys provide a complete picture of the treatment landscape. However, conducting simultaneous household and treatment source surveys is logistically challenging and costly. This is particularly the case when including all treatment sources, as the presence of less qualified providers and retailers is generally not well-documented, meaning that a detailed census must be conducted. Since the end of the ACTwatch project, such total market surveys of malaria treatment sources are not being conducted, so alternative sources of data would need to be found to produce comprehensive assessments of access to malaria treatment. One option could be to expand Service Provision Assessments to include a census of all provider types in PSUs and collect a broader range of supply-side data in countries where DHS household surveys include the malaria module. The methods in this paper could also be applied using data from ACTwatch’s sister project FPwatch, which involves surveys of all family planning providers alongside household surveys ( PSI, 2018 ). Conversely, for cases where concurrent, co-located household data are not available, open source maps of populations or structures could be used to locate households and then be merged with supply-side data from sources such as those described above. However, since these open access resources typically do not include information on actual household need for care, this approach would be better suited to measuring access to care for conditions that are more or less evenly distributed within a population. 
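The household-level linkage underlying these indicators — pairing each surveyed household with every treatment source within a fixed radius — can be sketched as below. The 5 km radius follows the paper's definition; the data layout, field names, and coordinates are illustrative assumptions, not the survey data.

```python
# Sketch of fixed-radius treatment choice sets: for each household, find all
# outlets within radius_km and summarise access. Data shapes are assumed.
from math import radians, sin, cos, asin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius in km


def choice_set(household, outlets, radius_km=5.0):
    """All outlets within radius_km of the household (its treatment choice set)."""
    return [o for o in outlets
            if haversine_km(household["lat"], household["lon"],
                            o["lat"], o["lon"]) <= radius_km]


def share_with_access(households, outlets, stocks="act", radius_km=5.0):
    """Proportion of households with at least one in-radius outlet stocking the product."""
    ok = sum(1 for h in households
             if any(o[stocks] for o in choice_set(h, outlets, radius_km)))
    return ok / len(households)
```

As noted in the text, straight-line distance is a simplification; road-distance or travel-time methods would refine this.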
Our approach of using a 5 km radius around households to define their treatment choice sets represents a fairly simplistic use of geospatial data, and we recognise that more sophisticated methods, such as those that estimate road distance and travel time, could be used to assess physical access ( Al-Taiar, Clark, Longenecker, & Whitty, 2010; Islam & Aktar, 2011 ). As described in methods, defining treatment choice sets in this way also risks underestimating access for households located close to the borders of sampled PSUs, although the effect of this bias could be minimised by reducing the radius length used. Other important biases to consider are those typical when using self-reported data. For example, social desirability bias may have led to under-reporting of antimalarial prices and the availability of undesirable treatments by providers (i.e. oral AMT). While a methodological strength of the approach is the full census of treatment sources in PSUs, some providers with the potential to sell antimalarials may have been missed, leading to underestimates of treatment availability. 5 Conclusion In this paper, we have described an approach to operationalise the physical and financial dimensions of access to health care from the household perspective. Applying these methods to examine access to malaria treatment in four endemic countries, we have also illustrated how this novel approach provides a more useful understanding of access to care in pluralist and varied health care markets. This approach also facilitates an understanding of the drivers of changes in access over time and could lead to better explanations of the performance of complex interventions aiming to improve healthcare access. Acknowledgements and funding statement Data were provided by ACTwatch, a research project led by Population Services International in partnership with the London School of Hygiene & Tropical Medicine, and funded by the Bill & Melinda Gates Foundation and UKAID. 
Funding for this work was provided by the UK Economic and Social Research Council through the Secondary Data Analysis Initiative [grant number ES/K00381X/1 ]. The funding sources had no role in study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. BP, KH and CG are members of the LSHTM Malaria Centre ( http://malaria.lshtm.ac.uk/ ). KH and CG are members of the Centre for Health Economics in London ( https://www.lshtm.ac.uk/research/centres-projects-groups/chil ). Declarations of interest None. Ethical statement This research was reviewed and approved by the Observational/Interventions Research Ethics Committee of the London School of Hygiene & Tropical Medicine (Reference no: 7420). The authors do not have any conflict of interest or any competing financial interests in relation to the work described. ACTwatch was funded by the Bill & Melinda Gates Foundation and UKAID. Funding for this work was provided by the UK Economic and Social Research Council through the Secondary Data Analysis Initiative [grant number ES/K00381X/1]. The funding sources had no role in study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. Appendix See Table A . See Table B . See Table C .
|
[
"HANSON",
"TOUGHER",
"AKIN",
"ALTAIAR",
"BRIGGS",
"BRUXVOORT",
"GLOBALFUND",
"GOODMAN",
"ISLAM",
"KABAGHE",
"LEVESQUE",
"LITTRELL",
"MACKIAN",
"MCINTYRE",
"NOOR",
"NOOR",
"OCONNELL",
"OKELL",
"PALAFOX",
"RAO",
"SHEWCHUK",
"SKILES",
"SMITH",
"STATACORP",
"TAKALAHARRISON",
"TANSER",
"TODA",
"TOUGHER",
"VISSER",
"WHOGLOBALMALARIAPROGRAMME"
] |
02e39ddbc5024a76ae610761ea301cbb_Effect of equilibrium constant for carbon dioxide recombination in hypersonic flow analysis_10.1016_j.csite.2023.102947.xml
|
Effect of equilibrium constant for carbon dioxide recombination in hypersonic flow analysis
|
[
"Yang, Yosheph",
"Petha Sethuraman, Vignesh Ram",
"Kim, Jae Gang"
] |
An equilibrium constant is an important parameter in regard to determining the backward reaction rate constant in chemical kinetics modeling for a hypersonic flow. Three common approaches for the equilibrium constant determination are based on the partition function, Gibbs free energy, and the experimental reaction rate measurement. The present study conducted a computational fluid dynamics (CFD) analysis with different equilibrium constant formulations in a thermochemical nonequilibrium hypersonic flow in order to study the influence of the equilibrium constant in carbon dioxide flow during the Martian entry. The equilibrium constant for the carbon dioxide molecule dissociation differs from one method to another among the reactions that are considered in the carbon dioxide flow. Three different flow conditions, which are based on the experimental data that is provided in the literature, are considered in the detailed comparison analysis using CFD. The variation of the flow properties in terms of pressure, temperature, and mass fraction along the stagnation line is compared for different cases of the equilibrium constant computation. The results that are obtained from the present study confirm that the equilibrium constant influences the numerical computation in the thermochemical nonequilibrium flow especially for the non-catalytic wall boundary condition.
|
1 Introduction The exploration of Mars has become more popular in the past decades. The Mars 2020 mission [1] by NASA and China’s Tianwen-1 [2] show the growing interest in investigating this planet. Unlike the Earth’s atmosphere, the Martian atmosphere largely consists of carbon dioxide gas. The Martian entry vehicles need to be equipped with a thermal protection system (TPS), whose design depends on an accurate prediction of the aerothermal load that is experienced by the vehicle in a carbon dioxide flow environment. The peak heat flux will drive the selection of the TPS material, whereas the total heat load will influence the thickness of the TPS [3] . Thermal analyses of the entry vehicle based on experimental studies are required in order to support the Mars exploration missions. Numerical analyses are also considered as alternatives for studying the hypersonic flow of the Martian entry, owing to the high cost of operating the experimental ground tunnel facilities for Mars flow experiments [4] . A downscaled 70 degree sphere-cone model that is similar to the Mars Science Laboratory (MSL) is commonly used as the model geometry in the experimental analysis. The ground facilities that are commonly applied in the experimental measurement may include a reflected shock tunnel [5–8] , an expansion tube [9] , and an expansion tunnel [10,11] . The experimental studies were mostly conducted in order to investigate the role of surface catalysis [12–14] and the flow transition [15] by measuring the surface heat transfer. The role of turbulent heating is important for the Mars blunt-body vehicles because the vehicles experience the Martian atmospheric entry at a high speed with a high angle-of-attack [16–18] . 
The thermochemical nonequilibrium phenomena [19,20] play an important role in the computational fluid dynamics (CFD) calculation that is used for the numerical analysis. The chemical kinetics for the thermochemical nonequilibrium analysis is usually modeled based on the two-temperature model [21] . The chemical reaction rate parameters that were developed by Park [22] , with the backward reaction rates obtained from the equilibrium constants, are commonly applied in order to model the gas–gas interaction in the carbon dioxide flow. In addition to the gas–gas interaction, the gas–surface interaction modeling is also required in order to achieve an accurate aerothermal load estimation for the Mars missions [23–25] . In regard to the uncertainty analysis in the computational methods, Bose et al. [26] conducted an analysis by varying the input parameters for the numerical calculation. In the chemical kinetics modeling, Bose et al. [26] assumed a similar uncertainty for both the forward reaction rate constant and the backward reaction rate constant. Hollis and Prabhu [27] also conducted uncertainty studies by comparing the computational results with the experimental data. In all of these computational methods and uncertainty analyses for the Martian entry, the backward reaction rates are computed by using the equilibrium constant. To the best of the authors’ knowledge, the influence of the equilibrium constant computation for the chemical reactions that involve carbon dioxide gases has not been extensively studied. This study aims to investigate how the equilibrium constant influences the thermal analysis during the Martian atmospheric entry. The equilibrium constants that are based on Gibbs free energy, the partition function, and the measured backward reaction rates are considered. 
A thermal analysis using an in-house CFD flow solver, which is called SHOCK2D [28,29] , is conducted in order to observe the influence of the equilibrium constant formulations. Three different flow conditions for carbon dioxide gas that are based on the available experimental ground facilities data are considered [7–9] . The calculations, which are based on each equilibrium constant, are compared in terms of the flow variation along the stagnation line and the surface heat transfer. 2 Numerical computation This section briefly describes the kinetic modeling and the equilibrium constant formulation that is applied in the SHOCK2D CFD solver. A detailed explanation that concerns the governing equations in the CFD formulation for the solver can be found in Ref. [28] . 2.1 Thermochemical nonequilibrium CFD formulation The SHOCK2D flow solver implements a cell-centered finite volume approach in order to discretize the governing equations in the hypersonic flow. The Steger–Warming flux vector splitting [30] is applied for the convective flux, whereas the viscous flux is modeled using the variable values at the cell’s center [31] . The time integration is conducted using a line-implicit relaxation algorithm [32] . It is important to include the thermochemical nonequilibrium formulation in order to accurately model the flow phenomena in the hypersonic flow. The two-temperature (2-T) model that was proposed by Park [21] is applied in order to model the nonequilibrium phenomena. The translational and rotational energies of all species are described using a single trans-rotational temperature T_tr. The vibrational and electronic energies of all species are also modeled using a single vibrational-electronic temperature T_ve. The classical Landau–Teller equation is applied in order to model the energy exchange between the vibrational and translational energy. The vibrational relaxation is modeled using the Millikan–White correlation [33] with a correction at a high temperature. 
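The vibrational relaxation time entering the Landau–Teller source term can be sketched with the standard Millikan–White correlation. The correlation coefficients below are the commonly published fit; the species pair and its characteristic data in the example are illustrative assumptions, and the paper's high-temperature correction is not included.

```python
# Sketch of the Millikan-White vibrational relaxation time (p*tau correlation).
# Coefficients are the standard published fit; inputs are illustrative.
from math import exp


def tau_millikan_white(T, p_atm, theta_v, mu_amu):
    """Vibrational relaxation time in seconds.

    T       : temperature [K]
    p_atm   : pressure [atm]
    theta_v : characteristic vibrational temperature of the mode [K]
    mu_amu  : reduced mass of the colliding pair [g/mol]
    """
    A = 1.16e-3 * mu_amu ** 0.5 * theta_v ** (4.0 / 3.0)
    # p * tau = exp[A (T^(-1/3) - 0.015 mu^(1/4)) - 18.42]  (atm*s)
    return exp(A * (T ** (-1.0 / 3.0) - 0.015 * mu_amu ** 0.25) - 18.42) / p_atm
```

The correlation decays rapidly with temperature, which is why a high-temperature correction (limiting the relaxation cross-section) is applied in two-temperature solvers such as the one described here.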
The thermal conductivity and viscosity of the considered species are modeled using Gupta’s mixing rule [34] with the collision integrals defined in Ref. [35] . The chemical kinetics formulations that involve carbon dioxide gas are modeled using four different species (CO2, CO, O2, O). The production of a carbon atom is negligible, so it is not included in the species list for the considered experimental condition [27] . Table 1 shows the chemical kinetics formulation for the considered reaction rate, which is based on the given chemical species. The forward reaction rate is determined based on the controlling temperature T_c, which is shown in Eq. (1). The variables C, η, and Θ are the parameters that are used in order to describe the forward reaction rate in terms of the Arrhenius equation. The controlling temperature T_c is defined as the geometrical mean between the trans-rotational temperature T_tr and the vibrational-electronic temperature T_ve for the dissociation reaction, and is assumed to be equal to the trans-rotational temperature T_tr for the exchange reaction. (1) k_f(T_c) = C T_c^η exp(−Θ/T_c) The species production rate ẇ_{s,k} of species s in the k-th chemical reaction is computed based on the law of mass action, which is shown in Eq. (2) [36] . The variable α_{s,k} is the stoichiometric coefficient of the reactant species s in the k-th chemical reaction, and the variable β_{s,k} is the stoichiometric coefficient of the product species s in the k-th chemical reaction. The variable N_S describes the total number of species in the reaction, the parameter ρ describes the density of the species, and M describes the molar mass of the species. (2) ẇ_{s,k} = (β_{s,k} − α_{s,k}) [ k_{f,k} ∏_{j=1}^{N_S} (ρ_j/M_j)^{α_{j,k}} − k_{b,k} ∏_{j=1}^{N_S} (ρ_j/M_j)^{β_{j,k}} ] The backward reaction rate k_b is obtained from the equilibrium constant, which is shown in Eq. (3). 
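The Arrhenius forward rate with a controlling temperature and the law-of-mass-action production rate of Eqs. (1) and (2) can be sketched as below. The rate parameters and species data are placeholders, not the Table 1 values.

```python
# Sketch of Eq. (1) (Arrhenius forward rate with controlling temperature)
# and Eq. (2) (law-of-mass-action species production rate). Placeholder data.
from math import exp, sqrt


def k_forward(C, eta, theta, T_tr, T_ve, dissociation=True):
    """Eq. (1): k_f = C * T_c**eta * exp(-theta/T_c).

    T_c = sqrt(T_tr * T_ve) for dissociation reactions,
    T_c = T_tr for exchange reactions."""
    T_c = sqrt(T_tr * T_ve) if dissociation else T_tr
    return C * T_c ** eta * exp(-theta / T_c)


def production_rate(s, alpha, beta, kf, kb, rho, M):
    """Eq. (2): production rate of species s in one reaction.

    alpha, beta : per-species reactant/product stoichiometric coefficients
    rho, M      : per-species densities and molar masses."""
    fwd, bwd = kf, kb
    for j in range(len(rho)):
        fwd *= (rho[j] / M[j]) ** alpha[j]
        bwd *= (rho[j] / M[j]) ** beta[j]
    return (beta[s] - alpha[s]) * (fwd - bwd)
```

Summing `production_rate` over all reactions gives the chemical source term for each species in the solver's species continuity equations.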
Unlike the forward reaction rate, the backward reaction rate uses the trans-rotational temperature as the controlling temperature, as shown in Eq. (3). The equilibrium constants are given as a curve-fitted equation, which is shown in Eq. (4) [21]. The parameters A_1, A_2, A_3, A_4, and A_5 are determined following the methods that are described in the next subsection. (3) k_{b,k}(T_tr) = k_{f,k}(T_tr) / K_{eq,k}(T_tr) (4) K_{eq,k}(T) = exp[ A_1 (T/10000) + A_2 + A_3 ln(10000/T) + A_4 (10000/T) + A_5 (10000/T)^2 ] 2.2 Equilibrium constant formulation Three different equilibrium constant computational methods are applied in the present study. The first method applied in the equilibrium constant computation is based on the partition function calculation. For a diatomic molecule A, the equilibrium constant for the dissociation reaction A + M → B + C + M is evaluated as (5) K_e(T) = (Q_t^B Q^B)(Q_t^C Q^C) / [ Q_t^A ∑_{i,v,J} Q^A_{i,v,J} exp(E^A_{D,i,J}/k_B T) ] The parameter Q_t describes the translational partition function, Q^B is the atomic partition function for species B, and Q^C is the atomic partition function for species C. Q^A_{i,v,J} describes the molecular partition function of species A, with the indices i, v, and J describing the electronic, vibrational, and rotational states of species A, respectively, and E^A_{D,i,J} describes the dissociation energy of the molecule species A. For a polyatomic molecule A, the equilibrium constant for the dissociation reaction A + M → B + C + M is evaluated as (6) K_e(T) = [(Q_t^B Q^B)(Q_t^C Q^C) / (Q_t^A Q^A)] exp(−E_R/k_B T) where the term E_R describes the reaction energy per particle in Joules. The partition function is modeled based on the Rigid Rotor Harmonic Oscillator (RRHO) model [37]. In the RRHO approximation, the contributions from rotation, vibration, and electronic excitation are considered separately, as shown in Eq. (7). (7) Q_i = Q_i^rot(T) Q_i^vib(T) Q_i^el(T) The rotational, vibrational, and electronic partition functions are shown in Eq. (8), Eq. (9), and Eq. (10), respectively. The term θ_i^rot describes the characteristic temperature for rotation, whereas θ_{k,i}^v describes the characteristic temperature for each vibrational mode k. The term θ_{k,i}^el = E_{k,i}^el / k_B is the characteristic temperature associated with electronic level k, with energy E_{k,i}^el and degeneracy g_{k,i}^el. The constants σ_i and L_i describe the symmetry and linearity of the molecule. (8) Q_i^rot(T) = (1/σ_i) (T/θ_i^rot)^{L_i/2} (9) Q_i^vib(T) = ∏_k [1 − exp(−θ_{k,i}^v/T)]^{−1} (10) Q_i^el(T) = ∑_k g_{k,i}^el exp(−θ_{k,i}^el/T) For the neutral exchange reaction A + B → C + D, the equilibrium constant K_e is computed as (11) K_e = [(Q_t^C Q^C)(Q_t^D Q^D) / ((Q_t^A Q^A)(Q_t^B Q^B))] exp(−E_R/k_B T) where E_R again describes the energy of the reaction per particle in Joules. The required parameters for the electronic, vibrational, and rotational energies in the equilibrium constant computations are obtained from the National Institute of Standards and Technology (NIST) database [38]. The equilibrium constant parameters based on the partition function computation are shown in Table 2 , expressed with the curve-fitted equation that is shown in Eq. (4). The parameters A and M in the third-body reaction describe the atomic and molecular species, respectively. The second method is based on the minimization of Gibbs free energy [39,40], which is shown in Eq. (12), where the variable p_0 describes the reference pressure, defined as 1 bar. (12) K_{eq,k}(T) = (p_0/(R T))^{ν_k} exp[ −∑_{s=1}^{N_S} (β_{s,k} − α_{s,k}) ( h_s/(R T) − s_s/R ) ] The term ν_k is defined as the difference between the stoichiometric coefficients of the products and the reactants of the k-th reaction, which is shown in Eq. (13). (13) ν_k = ∑_s^{N_S} (β_{s,k} − α_{s,k}) The variable h_s describes the enthalpy of the species s, and the variable s_s describes the entropy of the species s. 
The enthalpy and entropy are calculated from the curve fits shown in Eqs. (14) and (15); the parameters for both are obtained from the NIST database [38]:

(14) $\dfrac{h_s}{RT} = -\dfrac{a_{1,s}}{T^2} + \dfrac{a_{2,s}\ln T}{T} + a_{3,s} + \dfrac{a_{4,s}T}{2} + \dfrac{a_{5,s}T^2}{3} + \dfrac{a_{6,s}T^3}{4} + \dfrac{a_{7,s}T^4}{5} + a_{8,s} + \dfrac{a_{9,s}}{T}$

(15) $\dfrac{s_s}{R} = -\dfrac{a_{1,s}}{2T^2} - \dfrac{a_{2,s}}{T} + a_{3,s}\ln T + a_{4,s}T + \dfrac{a_{5,s}T^2}{2} + \dfrac{a_{6,s}T^3}{3} + \dfrac{a_{7,s}T^4}{4} + a_{8,s}\ln T + a_{10,s}$

Table 3 shows the equilibrium constant coefficients based on the minimization of Gibbs free energy; this method also uses the curve-fit form of Eq. (4) for the equilibrium constant parameters. An additional equilibrium constant computation is conducted based on the experimentally measured backward reaction rates provided by Baulch [41] and the NIST chemical kinetics database [42]. From these measured backward rates, in conjunction with the forward reaction rates given in Table 1, the equilibrium constant can be computed through the relation in Eq. (3). Table 4 provides the corresponding curve-fit parameters for the equilibrium constants, based on the experimental data in Refs. [41,42]. Fig. 1 compares the backward reaction rates for all the considered chemical reactions, with the experimentally measured rate data plotted together for a better comparison. The NIST chemical kinetics data are taken from Refs. [43–47] for the oxygen recombination, from Refs. [46–53] for the carbon dioxide recombination, and from Ref. [46] for the neutral exchange reaction. For the oxygen recombination rates shown in Fig. 1(a), the experimentally measured data are given with argon gas as the third species.
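The backward-rate construction of Eqs. (3)–(4) can be sketched as follows; the forward-rate form and every numerical coefficient below are illustrative placeholders, not the values of Tables 1–4.

```python
import math

def k_eq_curvefit(T, A):
    # Eq. (4) with z = 10000/T: K_eq = exp(A1/z + A2 + A3*ln(z) + A4*z + A5*z^2)
    z = 10000.0 / T
    A1, A2, A3, A4, A5 = A
    return math.exp(A1 / z + A2 + A3 * math.log(z) + A4 * z + A5 * z * z)

def k_forward(T, C, n, theta_d):
    # Park-type modified Arrhenius form, k_f = C * T^n * exp(-theta_d / T)
    return C * T ** n * math.exp(-theta_d / T)

def k_backward(T, C, n, theta_d, A):
    # Eq. (3): the backward rate is the forward rate divided by K_eq
    return k_forward(T, C, n, theta_d) / k_eq_curvefit(T, A)

# Placeholder coefficients for a CO2-dissociation-like reaction (illustrative only)
A_FIT = (0.5, 1.0, 0.2, -3.0, 0.01)
print(k_backward(5000.0, 1.0e18, -1.5, 52525.0, A_FIT))
```

Fitting the $A_1$–$A_5$ coefficients to each of the three $K_{eq}$ sources (partition function, Gibbs free energy, measured backward rates) then yields backward rates directly comparable to those in Fig. 1.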
All the considered methods for computing the equilibrium constant yield similar trends in the recombination rates, with closely comparable values. The oxygen recombination rates obtained from the Gibbs free energy and partition function methods do not deviate significantly from the experimentally measured reaction rates. The agreement between the partition-function-based equilibrium constant and the experimentally measured rates for the oxygen recombination with argon as the third species can also be observed in Ref. [54]. For the neutral exchange reaction between CO 2 and O shown in Fig. 1(c), all the applied computational methods agree with one another. The difference in the equilibrium constant computation is mainly observed for the carbon dioxide recombination (CO + O + M → CO 2 + M). Both the Gibbs free energy and partition function computations exhibit trends different from the experimentally measured data: the recombination rate from the experimental measurements increases with temperature, whereas the rates from both the Gibbs free energy and partition function methods decrease with temperature. In addition, the recombination rates for this reaction differ from one another by several orders of magnitude in the temperature range between 300 and 1000 K. Based on these observations of the equilibrium constant behavior, the present study focuses on the influence of the equilibrium constant computation for the carbon dioxide recombination in the numerical flow computation. In total, three different cases are considered, as shown in Table 5. Case 1 computes the equilibrium constants solely from the partition function for all the reactions. In Case 2, the equilibrium constants for the reactions involving carbon dioxide are calculated from the Gibbs free energy.
The equilibrium constant for the oxygen molecule is not altered, because the backward rate of the oxygen recombination reaction shows little variation between methods, as shown in Fig. 1. Likewise, the backward reaction rates of the CO 2 and O exchange reaction differ little among the equilibrium constant computational methods. Considering these observations, in Case 3 only the equilibrium constant for the carbon dioxide dissociation is calculated using the curve fit based on the experimentally measured reaction rates. By comparing the results of Case 2 and Case 3, the influence of the equilibrium constant for the carbon dioxide dissociation reaction can be isolated. It is expected that the variation in the equilibrium constant for the carbon dioxide dissociation will influence the flow computation; a detailed comparison among the three cases using numerical CFD analysis is conducted to confirm this hypothesis.

3 Experimental flow calculation

Three different experimental flow conditions from the literature are considered in order to observe the influence of the equilibrium constant computation in more detail. The geometries in these experiments are similar to the 70-degree sphere-cone geometry of the Mars Science Laboratory (MSL); the models differ from one another in nose radius and corner radius. Table 6 summarizes the flow conditions considered for the equilibrium constant verification; detailed descriptions of the experimental test facilities can be found in Refs. [7–9]. The parameter $c_\infty$ denotes the freestream species mass fraction. For both the Calspan University at Buffalo Research Center Large Energy National Shock Tube (CUBRC LENS I) and Caltech T5 reflected shock tunnel conditions, the freestream mass fractions contain non-zero values for both the carbon monoxide and oxygen molecules.
This indicates that thermochemical nonequilibrium may be present in the freestream flow. For the Hypervelocity Expansion Tube (HET) data, the freestream flow is in thermochemical equilibrium, because the only component in the mixture is carbon dioxide. Due to the short test duration of the experimental measurements, the wall temperature for all the experimental conditions is assumed to be 300 K [27]. The influence of the equilibrium constant on the flow field computation is also checked against the surface heat transfer data provided in Ref. [27], which were obtained with the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) software [55]. The surface heat transfer comparison is conducted for both the non-catalytic and the super-catalytic wall boundary conditions. The non-catalytic wall assumes that no atomic recombination occurs at the wall, so there is no additional heat transfer from the diffusive recombination process; this boundary condition gives the minimum surface heat transfer. The super-catalytic wall, in contrast, assumes that the species recombine completely to carbon dioxide at the wall, with no contribution from other species; it gives the prediction for the maximum surface heat transfer.

3.1 CUBRC LENS I

Fig. 2 shows the pressure contour comparison between the applied equilibrium constant computational methods. The calculations are conducted with the non-catalytic wall boundary condition; it is known that the catalytic boundary condition does not significantly affect the pressure distribution [28]. All three equilibrium constant computations give similar pressure contours. In both subfigures, the computation using the equilibrium constant based on the experimentally curve-fitted data is used as the reference for comparison.
The flow properties along the stagnation line are compared in order to see clearly how the shock stand-off distance varies between the methods. Fig. 3 illustrates the variation of pressure, temperature, and species mass fractions along the stagnation line for the CUBRC LENS I flow condition with both the non-catalytic and super-catalytic wall boundary conditions. The shock stand-off distances do not vary significantly between the considered cases; the value obtained from Case 3 is slightly larger than in the other two cases. Regarding the temperature variation along the stagnation line shown in Fig. 3(b), the peak temperature behind the shock wave is around 5000 K. At this high temperature, the mass fractions of carbon monoxide (CO) and atomic oxygen (O) increase due to the dissociation of the carbon dioxide molecule. All the considered equilibrium constant computations provide a similar trend of decreasing temperature near the wall. However, a significant difference is observed in the mass fractions near the wall, especially for carbon dioxide (CO 2 ). Among the applied methods, the partition function method used in Case 1 for the CO 2 recombination gives the highest CO 2 wall mass fraction and the smallest CO wall mass fraction. Because of the high recombination rate between CO and O in Case 1, there are not enough O atoms left to recombine with each other to produce O 2 molecules. Case 3 produces a slightly lower CO 2 wall mass fraction and a higher CO wall mass fraction than Case 2; this observation is aligned with the CO 2 recombination rates shown in Fig. 1. Fig. 4 shows the surface heat transfer along the experimental model used in the CUBRC LENS I facility [7]. The heat transfer data provided in Ref.
[27] using the LAURA software is used for comparison with both the non-catalytic (NC) and the super-catalytic (SC) wall boundary conditions. For the non-catalytic wall boundary condition, the trend of the surface heat transfer from the present computation using SHOCK2D follows the trend of the CO 2 wall mass fraction shown in Fig. 3(b). Case 1 exhibits the maximum surface heat transfer, whereas Case 3 gives the minimum among the considered cases. The heat transfer from the LAURA computation [27] with the non-catalytic wall boundary condition is quite similar to the present computation with Case 3. For the super-catalytic wall boundary condition, all the considered cases give similar surface heat transfer values, because they impose the same wall mass fraction composition. The slight difference between the present computations and the LAURA computation for the super-catalytic wall may result from the different forward reaction rate parameters applied in the chemical kinetics: for the CO 2 dissociation, LAURA implements the parameters given by Fujita et al. [56], whereas the present computation applies Park's parameters [22]. The experimentally measured heat transfer for this flow condition lies between the computed values for the non-catalytic and super-catalytic wall boundary conditions, which indicates that the stainless-steel material used in the experiment may promote catalytic recombination at the wall.

3.2 Caltech T5

Fig. 5 shows the pressure contour variations for the experimental flow condition in the Caltech T5 facility [8]. Similar to the CUBRC LENS I condition, the Caltech T5 flow condition does not exhibit a strong difference in the pressure distribution between the three considered cases.
Unlike the CUBRC LENS I flow condition, however, the Caltech T5 condition exhibits a difference in the shock stand-off distance, with Case 3 giving the largest value. Fig. 6 shows the variation of pressure, temperature, and species mass fractions along the stagnation line for the Caltech T5 flow condition. A strong difference in the shock stand-off distance can be observed; this difference results from the chemical reactions that occur behind the shock wave. The flow condition in the Caltech T5 experiment has a strong chemical nonequilibrium in the freestream, as shown by the high amount of atomic oxygen. The mass fraction variation along the stagnation line depends strongly on the case applied for the equilibrium constant computation. Near the wall, the CO 2 mass fraction is highest when Case 1 is applied and lowest when Case 2 is applied; conversely, the formation of O 2 near the wall is highest for Case 2 and lowest for Case 1. These trends differ from those observed in the CUBRC LENS I flow condition. The higher amount of O atoms in the freestream and behind the shock wave in the Caltech T5 condition also contributes to the formation of O 2 through the exchange reaction CO 2 + O → O 2 + CO. Thus, although the CO 2 recombination rate is lowest when the curve-fitted equilibrium constant of Case 3 is applied, the amount of O atoms behind the shock wave may promote the consumption of CO 2 through this exchange reaction. Fig. 7 shows the surface heat transfer along the experimental model used in the Caltech T5 facility. In addition to the experimentally measured heat transfer, the numerically computed heat transfer using LAURA is plotted for comparison.
Similar to the previous case, the experimentally measured heat transfer lies between the non-catalytic and super-catalytic predictions, which may indicate that the stainless-steel model material promotes surface catalytic recombination. In Fig. 7, the LAURA computation gives a lower surface heat transfer than all three cases of the present computation for the non-catalytic wall boundary condition. The ordering of the surface heat transfer in the present computation follows the ordering of the CO 2 mass fraction near the wall shown in Fig. 6(b), because the formation of the CO 2 molecule is an exothermic reaction that increases the temperature near the wall. The differences in the heat transfer trends between the Caltech T5 and CUBRC LENS I conditions signify the importance of both the equilibrium constant computation and the freestream flow composition. It is important to note that a flow with highly dissociated O atoms in the freestream may influence the thermal analysis through the exchange reaction between the CO 2 molecule and the O atom. For the super-catalytic wall, all the considered cases exhibit similar heat transfer, because they share the same wall species composition of 100% CO 2 .

3.3 University of Illinois HET

Fig. 8 shows the pressure contour comparison for the experimental flow condition in the HET facility [9] with the non-catalytic wall boundary condition. For this flow condition, the equilibrium constant computational methods do not significantly influence the shock stand-off distance; indeed, all three cases give similar values. Fig. 9 shows the pressure, temperature, and species mass fractions along the stagnation line for the University of Illinois HET flow condition [9] with the non-catalytic wall boundary condition.
Unlike the Caltech T5 condition, the HET flow condition does not show any strong variation in pressure or temperature along the stagnation line. The close similarity in the shock stand-off distance between the cases is due to the equilibrium state of the freestream flow: at the freestream, the flow is 100% CO 2 , with no dissociated CO molecules or O atoms. Behind the shock wave, unlike the CUBRC LENS I and Caltech T5 conditions, only a small amount of CO and O is present. Consequently, for this flow condition, the dominant reaction behind the shock wave is the dissociation of the CO 2 molecule (CO 2 + M → CO + O + M). Because all the considered cases apply the same dissociation reaction rate, the shock stand-off distances do not vary from one case to another. Considering the species mass fraction variation along the stagnation line shown in Fig. 9(b), only a slight variation of the mass fractions near the wall is observed for the CO 2 and CO molecules. Case 1 exhibits the highest CO 2 wall mass fraction and the lowest CO wall mass fraction, in line with the recombination rate comparison shown in Fig. 1. It is important to note that Case 2 (Gibbs-energy-based equilibrium constant) and Case 3 (experimental curve-fit-based equilibrium constant) for the CO 2 dissociation give similar wall mass fractions for both the CO 2 and CO molecules. It can thus be considered that, for a flow with 100% CO 2 in the freestream, the equilibrium constants based on the Gibbs free energy and on the experimental data curve fit give similar mass fraction variations. Fig. 10 shows the surface heat transfer along the experimental model used in the University of Illinois HET [9].
Case 2 and Case 3 give similar heat transfer profiles along the model geometry for the non-catalytic wall boundary condition, and the computed values from these two cases are also very similar to the one obtained from the LAURA computation. This similarity is presumably due to the similarity in the species mass fractions near the wall, as illustrated in Fig. 9. The difference in the heat transfer between Case 1 and the other cases signifies that the partition-function-based equilibrium constant overpredicts the surface heat transfer, which is due to the high reaction rate constant predicted for the CO 2 recombination, as shown in Fig. 1(b). As for the other experimental conditions, all cases give similar heat transfer for the super-catalytic wall boundary condition due to the identical wall mass fraction composition. To close the discussion, it is worth recalling the motivation of the present study: to observe the influence of the equilibrium constant for the polyatomic CO 2 molecule on the flow thermal analysis. For the non-catalytic wall boundary condition, the trend of the calculated surface heat transfer depends not only on the chosen method for the equilibrium constant computation but also on the freestream flow composition. Regarding the choice of method, it is advisable to implement the equilibrium constant curve-fitted from experimental data, because these values are obtained directly from experimental measurements. Compared with the partition-function-based equilibrium constant, the Gibbs-energy-based equilibrium constant gives values closer to the experimental data curve-fitted one. Regarding the freestream flow composition, it is observed that the degree of chemical nonequilibrium influences the heat transfer computation.
The presence of O atoms in the freestream promotes the exchange reaction between the CO 2 molecule and the O atom. Due to this exchange reaction, the carbon dioxide wall mass fractions of the three cases may not follow the same ordering as the CO 2 recombination rate constants. For the super-catalytic wall boundary condition, because the wall mass fraction composition is the same (100% CO 2 ) in all cases, all the considered cases give similar heat transfer. This indicates that the equilibrium constant does not influence the surface heat transfer estimation for the super-catalytic wall boundary condition if the same chemical kinetics parameters are applied for the forward reactions.

4 Conclusions

The influence of the equilibrium constant on the thermochemical nonequilibrium flow analysis of carbon dioxide gas has been investigated numerically using an in-house CFD solver. The equilibrium constants were used to obtain the backward reaction rates. Three different equilibrium constant formulations were applied, based on the partition function, the Gibbs free energy, and experimentally measured reaction rates. The backward reaction rates for the CO 2 dissociation exhibited different trends from one equilibrium constant computational method to another. Based on these equilibrium constant approaches, a thermal analysis using CFD computation was carried out for both the non-catalytic and super-catalytic wall boundary conditions. The surface heat transfer was directly influenced by the carbon dioxide wall mass fraction. For the non-catalytic wall boundary condition, the surface heat transfer depended not only on the chosen equilibrium constant but also on the freestream flow condition. The computation with the partition-function-based equilibrium constant gave the highest surface heat transfer, due to the high recombination rate constant predicted by this method.
In the case where the freestream flow was pure carbon dioxide, no difference in the surface heat transfer was observed between the Gibbs free energy-based equilibrium constant and the experimental curve-fitted equilibrium constant. For the super-catalytic wall boundary condition, the equilibrium constant did not influence the surface heat transfer prediction when the same chemical kinetics parameters were applied. From this study alone, it cannot be directly concluded that one equilibrium constant is better than another, because a direct comparison with the experimental data is limited: the experiments were carried out in a catalytic environment, whereas the numerical simulations were conducted with a non-catalytic wall boundary condition. Nevertheless, it is suggested that the equilibrium constant derived from the measured backward reaction rates should be used for a better heat transfer prediction, because the rates are obtained directly from experimental data.

CRediT authorship contribution statement

Yosheph Yang: Conceptualization, Methodology, Data analysis, Writing – original draft, Writing – review & editing, Funding acquisition. Vignesh Ram Petha Sethuraman: Data analysis, Methodology, Writing – review & editing. Jae Gang Kim: Conceptualization, Methodology, Writing – review & editing, Funding acquisition.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported by the Korea Research Institute for defense Technology planning and advancement (KRIT) grant funded by the Korea government (DAPA, Defense Acquisition Program Administration) (KRIT-CT-22-030, Reusable Unmanned Space Vehicle Research Center, 2023) and by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No. NRF-2021R1G1A1006344).
|
[
"FARLEY",
"ZOU",
"QIAO",
"BUR",
"HOLLIS",
"MACLEAN",
"MACLEAN",
"WRIGHT",
"SHARMA",
"MACLEAN",
"HOLLIS",
"MARSCHALL",
"YANG",
"YANG",
"MACLEAN",
"HOLLIS",
"HOLLIS",
"HOLLIS",
"CANDLER",
"ARMENISE",
"PARK",
"PARK",
"WRIGHT",
"YANG",
"YANG",
"BOSE",
"HOLLIS",
"YANG",
"KIM",
"STEGER",
"JAWAHAR",
"SCALABRIN",
"MILLIKAN",
"GUPTA",
"WRIGHT",
"SCOGGINS",
"LINSTROM",
"VINCENTI",
"MCQUARRIE",
"BAULCH",
"JAVOY",
"CAMPBELL",
"CAMPBELL",
"TSANG",
"WARNATZ",
"KONDRATIEV",
"FUJII",
"FUJII",
"SUGAWARA",
"HARDY",
"KIM",
"FUJITA"
] |
5a028ccca8ad41db843aad119bf44d1b_Variable droop gain frequency supporting control with maximum rotor kinetic energy utilization for w_10.1016_j.ijepes.2024.110289.xml
|
Variable droop gain frequency supporting control with maximum rotor kinetic energy utilization for wind-storage system
|
[
"Li, Wenbo",
"Li, Yujun",
"Li, Jiapeng",
"Zhang, Yang",
"Chang, Xiqiang",
"Sun, Zhongqing"
] |
To address the emerging frequency stability issues brought by the large-scale replacement of synchronous generators with renewable generation, wind turbine generators are required to possess frequency-supporting capability. However, existing frequency-supporting control strategies lack an assessment of the frequency support capability of wind turbine generators, leading to degraded control performance in various situations. To solve this problem, this paper proposes a variable-droop-gain control for wind turbine generators with maximum rotor kinetic energy utilization. Firstly, an analytical relationship is established between droop gain, disturbance scale, and rotor speed. Subsequently, the released energy of the wind turbine generator is evaluated, which equals the difference in rotor kinetic energy between the initial and the post-disturbance steady-state rotor speeds. It is proved that the released kinetic energy cannot exceed a certain proportion of the total rotor kinetic energy. Accordingly, a variable initial gain scheme is proposed, which determines the initial droop gain according to the disturbance scale so as to maximize the kinetic energy (KE) release of wind turbines. Moreover, a real-time droop gain adjustment rule is added to prevent the over-deceleration of wind turbines. The simulation results show that the proposed scheme can provide the maximum KE release and effectively improve the system frequency nadir while ensuring the safe operation of wind turbine generators.
|
1 Introduction

In recent years, traditional fossil-fuel power generation has gradually been replaced by renewable energy generation due to its numerous advantages, such as low pollution and abundant resources [1]. The permanent magnet synchronous generator (PMSG) based wind turbine generator (WTG), a commonly used wind power generation technology, normally performs maximum power point tracking (MPPT) to fully utilize the captured wind energy. However, WTGs do not have the natural ability to provide frequency response like synchronous generators, because their output power is decoupled from the system frequency; this results in an excessively low inertia level in power systems with high wind power penetration [2]. The low system inertia accelerates the rate of frequency change and exacerbates the risk of over- or under-frequency, making frequency stability a pressing issue for the modern power system [3]. Frequency-supporting control of the wind turbine generator is an effective and economical way to improve the system frequency dynamics, which can be achieved by releasing the kinetic energy stored in the rotating mass. The earliest control schemes enabling variable-speed WTGs to provide frequency response can mainly be divided into two categories [4]: frequency-measurement-based control [5] and temporary overproduction (TOP) [6]. Frequency-measurement-based control couples the power reference of the wind turbine generator with the system frequency to mimic the inertial response and droop characteristic of synchronous generators, while TOP control injects predetermined additional power after a disturbance to reduce the unbalanced active power of the system. Both can achieve sound frequency support for the system. On this basis, numerous researchers have further improved frequency-supporting strategies for WTGs, and numerous advanced strategies have been proposed.
Frequency-measurement-based control usually uses additional frequency deviation and rate-of-change-of-frequency (RoCoF) loops to achieve frequency response. The operating point of a WTG under inertial control deviates from the MPPT point, causing a persistent loss of captured wind energy; a wash-out filter is added to the droop loop to solve this problem [7]. In [8], a coordinated inertial control strategy is proposed that combines the kinetic energy (KE) of the rotating mass and the DC-side capacitor to participate in frequency support, and further derives the equivalent virtual inertia contributed by the WTG. The above literature mainly focuses on providing frequency support, generally using fixed droop and virtual inertia gains. On the one hand, setting the same control parameters for each WTG in a wind farm cannot fully utilize the frequency support capability of the wind farm; on the other hand, it may cause over-deceleration (OD) in specific situations. KE-based adaptive gain schemes [9,10] and a time-varying droop scheme [11] were therefore proposed. Furthermore, a sequential scheme is proposed in [12] that effectively coordinates energy storage systems (ESSs) with doubly-fed induction generators (DFIGs) to participate in primary frequency regulation. In these schemes, the adaptive gains are proportional to the KE (the square difference between the real-time and minimum rotor speeds of the WTG) or to the rotor speed, or are time-varying, so as to release more KE while preventing OD. Subsequently, it was pointed out in [13] that introducing the RoCoF directly into the power reference may lead to power oscillations due to measurement noise, and that the power response of a converter-interfaced generator (CIG) is already fast enough, so droop control can be an alternative to the inertial response. Accordingly, adaptive droop schemes based on KE and RoCoF were proposed [14,15], which avoid directly introducing the frequency derivative into a control loop.
The above works mainly focus on preventing the OD of WTGs during the frequency support stage. Frequency-measurement-based controls primarily provide frequency support by emulating the inertia and droop characteristics of synchronous generators, thereby enabling bilateral frequency support; the case studies in [8,12] have reported the effectiveness of such methods in bilateral frequency support scenarios. However, due to limited frequency support resources, providing support in under-frequency scenarios is more challenging, and many existing works focus on this scenario. After the frequency support stage ends, the WTGs should be controlled to return to MPPT mode to avoid a persistent reduction of captured wind energy. However, directly switching to MPPT mode causes a sudden power drop, resulting in a secondary frequency drop (SFD). Ref. [16] divides the WTGs in a wind farm into two groups, one in MPPT mode and the other in deloading mode; by adopting a consecutive power dispatch scheme for the two groups, the depth of the SFD is effectively reduced. Ref. [15] proposed an additional control strategy to draw the WTG back to the MPPT point smoothly: by gradually decreasing the droop gain, the active power output of the WTG is smoothed to reduce the depth of the SFD during the rotor speed recovery stage. Similarly, [17] constructs a power reference scheme based on the real-time rotor speed to gradually recover the rotor speed to the MPPT point, effectively avoiding the SFD. In general, when the WTG adopts droop or inertial control, the SFD can be avoided through additional control schemes. The main focus of existing research is still on parameter tuning for WTGs. Ref. [18] proposes a distributed-control-based wind farm frequency support strategy, which first calculates each wind farm's optimal droop gain and then allocates each WTG's active output based on its wind speed and reserve capacity.
Ref. [19] studied the synchronous stability mechanism of a WTG-embedded system during frequency support, pointing out that the droop coefficient of the WTG should meet certain requirements to ensure that the system equilibrium exists. On this basis, [20] proposes a collaborative optimization method for the control parameters considering frequency dynamic constraints, which effectively suppresses the power oscillation of WTGs during frequency support. However, these works did not provide analytical approaches or numerical calculation methods for the droop gain boundary. Unlike inertial control, TOP-type control adds a predetermined power-time (or power-rotor speed) function to the power reference of the WTG, replacing the frequency control loop, which is conducive to fully utilizing the KE. TOP-type controls can be activated immediately once the disturbance is detected and effectively provide frequency support for the system. However, the traditional TOP control [6] causes an SFD, which may be more severe than the first frequency drop in some situations. To solve this, one feasible approach is to refine the shape function: three schemes based on different power-rotor speed curves have been proposed [21–23]. However, the performance of these schemes is highly correlated with parameter tuning, which ultimately relies on simulation and on the scale of the disturbance. The development of system power imbalance estimation methods based on wide-area information [24,25] or local information [26,27] enables another approach: improving the frequency nadir (FN) by optimizing the control parameters of TOP-type controls. Machine learning [28] and analytical approaches [29,30] have been utilized to provide parameter tuning rules that maximize the enhancement of the FN under specific disturbances. Subsequently, an optimized scheme was proposed in [31] considering multiple wind farms and battery energy storage systems (BESSs).
Existing parameter tuning methods rely heavily on system models and mainly resort to optimization, which is significantly affected by model accuracy and is time-consuming given the huge number of WTGs in a wind farm. Besides, few works have explored the potential of applying system power imbalance estimation (SPIE) to frequency-measurement-based controls. A simple but effective short-term frequency-supporting control scheme is proposed in this work, which aims to fully utilize the frequency support capability of the wind farm while ensuring safe operation under different situations. First, the relationship between droop gain, power imbalance and rotor speed is established, and the releasable KE of the WTGs is derived analytically. Subsequently, the total energy to be released by the wind farm is determined, which synthesizes the disturbance scale and the frequency support capability of the wind farm; correspondingly, the initial droop gain of each WTG is calculated. Moreover, a rotor speed-based adjusting scheme is designed to ensure the safe operation of the WTGs during the frequency support stage. After the frequency support stage ends, a time-varying power reference scheme is applied to recover the rotor speed. Finally, the effectiveness of the proposed scheme is validated through simulation. This paper is organized as follows. Section 2 presents the modeling of a PMSG-based wind farm. Section 3 establishes the analytical relationship between steady-state rotor speed, droop gain, and initial rotor speed; furthermore, the releasable KE under droop control is obtained by solving the nonlinear equations analytically. Section 4 proposes a variable gain scheme based on the total releasable KE to fully use the frequency support capability while ensuring the safe operation of the WTGs. In Section 5, case studies considering various situations verify the effectiveness of the proposed scheme. Finally, the conclusion is given in Section 6.
2 Modeling of PMSG-based WTGs

2.1 PMSG-based WTG model

The basic configuration of the PMSG-based WTG is shown in Fig. 1. It comprises a wind turbine, a PMSG, two full-scale back-to-back voltage source converters and WTG controllers. The wind turbine captures the wind energy flowing through the rotor-swept area and converts it into mechanical energy. According to aerodynamic theory, the mechanical power captured by a wind turbine can be described as

(1) P_m = (1/2) ρ π R^2 v_wind^3 C_P(λ, β),

where ρ, R, v_wind, λ and β are the air density, the radius of the wind turbine blade, the wind speed, the tip-speed ratio, and the pitch angle, respectively. The aerodynamic performance coefficient C_P represents the ratio of wind energy captured by the wind turbine. In this paper, it is calculated by the following equation [9]:

(2) C_P(λ, β) = 0.5176 · (116/λ_i − 0.4β − 5) · e^(−21/λ_i) + 0.0068λ,

where

(3) 1/λ_i = 1/(λ + 0.08β) − 0.035/(β^3 + 1),

and λ = R ω_r / v_wind, with ω_r the rotor speed of the wind turbine. According to aerodynamics, C_P reaches its maximum when λ = λ_opt; meanwhile, the mechanical power reaches its maximum. During nominal operation, the WTG performs MPPT to pursue maximum output power, so its active power reference is

(4) P_MPPT = k_g · ω_r^3,

where k_g = (1/2) ρ π R^5 C_P^max / λ_opt^3 and C_P^max is the maximum value of the aerodynamic performance coefficient C_P. The power characteristic of the WTG is shown in Fig. 2. The mechanical power captured by the wind turbine varies with wind speed, while the active power must be limited to a specific range due to the torque and rotor speed limits, i.e., the light red area. The rotor speed range is normally within ±30% of the rated value, usually taken as 0.7–1.25 p.u. [9].
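The MPPT relations above can be checked numerically. The sketch below, a minimal illustration and not the paper's code, evaluates C_P(λ, β) from (2)-(3), locates λ_opt and C_P^max by grid search, and forms the MPPT gain k_g of (4); the air density and blade radius are illustrative assumptions.

```python
import math

def cp(lam, beta):
    # Aerodynamic performance coefficient C_P(λ, β), Eqs. (2)-(3).
    inv_li = 1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0)
    return 0.5176 * (116.0 * inv_li - 0.4 * beta - 5.0) * math.exp(-21.0 * inv_li) + 0.0068 * lam

# Locate λ_opt and C_P^max at β = 0 by a simple grid search.
lams = [3.0 + 0.001 * k for k in range(10000)]   # λ ∈ [3, 13)
cp_vals = [cp(lam, 0.0) for lam in lams]
cp_max = max(cp_vals)
lam_opt = lams[cp_vals.index(cp_max)]

# MPPT gain k_g and MPPT power reference, Eq. (4).
rho, R = 1.225, 40.0                              # assumed (illustrative) values
k_g = 0.5 * rho * math.pi * R**5 * cp_max / lam_opt**3
p_mppt = lambda w_r: k_g * w_r**3                 # cubic MPPT curve
```

For this C_P model the grid search yields λ_opt ≈ 8.1 and C_P^max ≈ 0.48, consistent with the typical values for the coefficient set in (2).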
Additionally, a conventional grid-side converter (GSC) and machine-side converter (MSC) vector control is applied, where the GSC controller maintains a constant DC-link voltage and the MSC controller regulates the active power of the WTG. Generally, the time constants of the GSC and MSC are quite small (no more than 100 ms), so it is assumed that the actual active power equals its reference value when studying the frequency stability issue. The motion equation of the one-mass shaft model of a single wind turbine generator [8] can be represented by

(5) 2 H_s · ω_r · dω_r/dt = P_m − P_e,

where P_e is the electrical power and the total mechanical inertia H_s equals the sum of the turbine and generator inertias in per-unit values.

2.2 Wake effect model

A wind farm typically contains numerous wind turbine generators. Since turbulence is generated when the wind flows through a wind turbine, the mechanical power captured by the downstream wind turbines is reduced. This phenomenon is called the wake effect. The equivalent wind speed for each wind turbine [9] can be represented by

(6) v_i = v_0 · [1 − 2 Σ_{j=1, j≠i}^{n} a_j (D_j / (D_i + 2 k x_ji))^2 β_ji^2],

where D_i, x_ji, β_ji, a_j and n are the diameter of the swept area of wind turbine i, the radial distance between wind turbines j and i, the ratio between the overlapping area and the swept area of wind turbines j and i, the axial induction factor of wind turbine j, and the total number of wind turbines, respectively.

3 Analysis of releasable KE of WTGs under droop control

3.1 Power-frequency droop control

To provide a frequency response, additional control loops are added to the active power reference of the RSC. Droop control is simple and theoretically conducive to suppressing the FN. Since the converter can control its output power to track the power reference almost instantaneously, the droop loop is an alternative to the inertial response [13].
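A small sketch of the wake model of (6), for a single row of turbines aligned with the wind. The turbine spacing, rotor diameter, axial induction factor, wake decay constant and overlap ratio are all illustrative assumptions, and the deficit expression follows (6) as reconstructed here.

```python
def equivalent_wind_speeds(v0, positions, D, a=1.0 / 3.0, k=0.075, beta_overlap=1.0):
    """Equivalent wind speed per turbine from the wake model of Eq. (6).

    positions: downwind coordinate (m) of each turbine along the wind direction;
    only upstream turbines (x_j < x_i) shed a wake onto turbine i here.
    a, k, beta_overlap: axial induction factor, wake decay constant, and
    overlap ratio β_ji (full overlap assumed for an aligned row).
    """
    n = len(positions)
    v = []
    for i in range(n):
        deficit = 0.0
        for j in range(n):
            if j == i or positions[j] >= positions[i]:
                continue  # only upstream turbines contribute
            x_ji = positions[i] - positions[j]
            deficit += a * (D / (D + 2.0 * k * x_ji))**2 * beta_overlap**2
        v.append(v0 * (1.0 - 2.0 * deficit))
    return v

# Three turbines spaced 400 m apart, rotor diameter 80 m, free-stream 11 m/s.
speeds = equivalent_wind_speeds(11.0, [0.0, 400.0, 800.0], 80.0)
```

The first turbine sees the free-stream speed, and each downstream turbine sees a progressively lower equivalent wind speed, which is why the initial rotor speeds in Section 4.1 differ across the farm.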
The active power reference of the WTG diverges from the MPPT point in proportion to the frequency deviation, which can be expressed as

(7) P_ref = −K Δf + P_MPPT,

where Δf is the locally measured frequency deviation and K is the droop gain. A conventional fixed-gain scheme can hardly adapt to the uncertainty of the disturbance scale: a large gain may cause OD, while a small gain may not fully utilize the frequency support capability of the WTG. Tuning the droop gain based on the disturbance scale is therefore a promising way to deal with this issue.

3.2 Relationship between droop gain, disturbance scale and rotor speed

When the power system is disturbed by frequency events (e.g., tripping of generators, blocking of high-voltage direct-current transmission systems), the system frequency immediately begins to decrease. After a few seconds, the mechanical power of the synchronous generators (SGs) is increased by the intervention of the governors and gradually meets the electrical power of the SGs. Finally, the system frequency reaches a steady state with a permanent deviation from the initial value. The steady-state frequency deviation is related to the power imbalance and can be described by

(8) Δf_s = R_sys [ΔP_d + ΔP_WF],

where R_sys stands for the total primary frequency response coefficient, including the primary frequency regulation of synchronous generators and load damping [32]; ΔP_d and ΔP_WF are the initial active power imbalance caused by the frequency event and the total output power decrease of the wind farm due to the deviation of the WTG operating points from the MPPT points, respectively. Based on (4), (7) and (8), the steady-state mechanical power is only related to the initial and steady-state rotor speeds.
Therefore, the total output power decrease of the wind farm can be represented by

(9) ΔP_WF = Σ_{i=1}^{n} ΔP_WTi = Σ_{i=1}^{n} [k_g · (ω_ri^0)^3 − P_mi(ω_ri^0, ω_ri^s)],

where i, n, P_mi, ω_ri^0 and ω_ri^s are the index, the total number of WTGs, the mechanical power of WTG i, the initial rotor speed of WTG i, and the steady-state rotor speed of WTG i, respectively. Since the mechanical power equals the electrical power of the WTG in steady state, the steady-state rotor speed of each WTG satisfies

(10) −K_i Δf_s + k_g · (ω_ri^s)^3 = P_mi(ω_ri^0, ω_ri^s), i ∈ I = {1, 2, …, n}.

By synthesizing (4) and (8)-(10), the relationship between droop gain, disturbance scale and steady-state rotor speed is established. The droop gain of each WTG can be calculated by

(11) K_i = −[P_mi(ω_ri^0, ω_ri^s) − k_g · (ω_ri^s)^3] / {R_sys [ΔP_d + Σ_{j=1}^{n} (k_g · (ω_rj^0)^3 − P_mj(ω_rj^0, ω_rj^s))]} = f_i(ΔP_d, ω_ri^0, ω_ri^s), i ∈ I.

Based on (11), the rotor speed of each WTG can be controlled to converge to the expected position by selecting a proper droop gain, so that the expected KE is released. Fig. 3 shows the transient trajectories of the WTG rotor speed under different conditions. It is noticed that the steady-state rotor speed is significantly affected by both the droop gain and the disturbance scale.

3.3 Analytical analysis of releasable KE of WTGs under droop control

A large droop gain is usually expected when the disturbance is relatively severe, in order to effectively improve the FN and keep the system frequency within its limit. However, an excessively large droop gain may leave Eq. (10) without a solution within the allowable operating range of the wind turbine (or even lead to the inexistence of an equilibrium), leading to OD and loss of stability of the WTG. This section analyzes the maximum KE that can be released under droop control from the perspective of whether the equilibrium point exists.
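The wiring of (8)-(11) can be sketched in a few lines of Python. This is an illustrative per-unit example, not the paper's implementation: the MPPT gain, primary frequency response coefficient, disturbance scale, and the initial and target rotor speeds are all assumed values, and the steady-state mechanical power uses the form derived later in (A.4).

```python
import math

def cp(lam, beta=0.0):
    # C_P(λ, β) from Eqs. (2)-(3).
    inv_li = 1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0)
    return 0.5176 * (116.0 * inv_li - 0.4 * beta - 5.0) * math.exp(-21.0 * inv_li) + 0.0068 * lam

lams = [3.0 + 0.001 * k for k in range(10000)]
cp_max = max(cp(lam) for lam in lams)
lam_opt = max(lams, key=cp)

k_g = 1.0          # per-unit MPPT gain (normalization assumption)
R_sys = -0.01      # p.u. frequency change per p.u. imbalance (assumed, negative)
dP_d = 0.5         # disturbance scale in p.u. (assumed)

def p_m(w0, ws):
    # Steady-state mechanical power of a WTG, cf. Eq. (A.4).
    return (k_g / cp_max) * w0**3 * cp(lam_opt * ws / w0)

w0 = [1.05, 1.00, 0.95]   # wake-affected initial rotor speeds (assumed)
ws = [0.90, 0.86, 0.82]   # target steady-state rotor speeds (assumed)

# Total wind farm output decrease, Eq. (9), steady-state deviation, Eq. (8),
# and the droop gain of each WTG, Eq. (11).
dP_WF = sum(k_g * a**3 - p_m(a, b) for a, b in zip(w0, ws))
df_s = R_sys * (dP_d + dP_WF)
K = [-(p_m(a, b) - k_g * b**3) / (R_sys * (dP_d + dP_WF)) for a, b in zip(w0, ws)]
```

By construction, each gain satisfies the steady-state balance (10) exactly, and all gains come out positive for target speeds below the initial speeds.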
Under a given disturbance scale and initial rotor speed, the variation of f_i with respect to ω_ri^s is shown in Fig. 4. It can be seen that f_i has a maximum. When the applied droop gain K_i > sup f_i, (10) cannot be satisfied, which causes the absence of an equilibrium point of the nonlinear dynamic system, i.e., the WTG, as shown by the green curve in Fig. 5. Besides, if the expected ω_ri^s is configured in the gray area and the droop gain is then calculated by (11), the actual steady-state operating point will converge to another equilibrium in the green area (in fact, this droop gain corresponds to two equilibrium points: the left one is unstable and the right one is stable), as shown by the orange dots and curves in Figs. 4 and 5. Therefore, the steady-state operating points of the WTG will not fall into the gray area marked as the unreachable zone in Fig. 4. In this sense, there exists another upper limit on the releasable KE of the WTG, which is related to the steady-state speed ω_ri^sl at the maximum point of f_i. Correspondingly, the KE released by the WTG reaches its maximum value when the partial derivative of f_i equals zero, as follows:

(12) ∂f_i/∂ω_ri^s = 0, i ∈ I.

The above is a set of coupled nonlinear equations, which makes it difficult to obtain an analytical solution. However, due to the large number of WTGs in a wind farm, the active power of a single WTG is much smaller than that of the entire wind farm. It can therefore be assumed that the denominator of f_i is a constant when considering each equation in (12) separately.
It yields

(13) ΔP_d + ΔP_WF = ΔP_d + Σ_{j=1}^{n} [k_g · (ω_rj^0)^3 − P_mj(ω_rj^0, ω_rj^s)] = ΔP_d + Σ_{j∈I, j≠i} [k_g · (ω_rj^0)^3 − P_mj(ω_rj^0, ω_rj^s)] (part 1: constant) + [k_g · (ω_ri^0)^3 − P_mi(ω_ri^0, ω_ri^s)] (part 2: much smaller than part 1) ≈ constant.

By taking (13), (12) can be simplified into n independent nonlinear equations, as follows:

(14) ∂[P_mi(ω_ri^0, ω_ri^s) − k_g · (ω_ri^s)^3] / ∂ω_ri^s = 0, i ∈ I.

Fig. 6 further explains the meaning of (14). According to (10), the droop gain is related to the difference between the mechanical power curve and the MPPT curve. Assuming that the output power of a single wind turbine contributes little to the overall active power imbalance, the steady-state frequency deviation remains unchanged when the steady-state rotor speed changes. Therefore, the droop gain is only related to the distance between these two curves at the steady-state rotor speed, and it reaches its maximum only when this distance reaches its maximum. The solution of (14) can be written as

(15) ω_ri^sl = c · ω_ri^0.

The detailed derivation of (15) is given in Appendix A. Eq. (15) manifests the maximum proportion of releasable KE to the total KE under droop control. Note that the constant c in (15) is only related to C_P and not to the rotor speed, so it can be predetermined from the relevant parameters. The maximum releasable KE of the WTG, derived by considering the existence constraint of the equilibrium points, is

(16) E_i = H_s ((ω_ri^0)^2 − (ω_ri^sl)^2) = H_s (ω_ri^0)^2 (1 − c^2), i ∈ I.

Eq. (16) indicates that the releasable KE of a WTG is a fixed proportion of its total KE. In addition, it should be emphasized that the steady-state rotor speed must be set higher than the minimum allowable rotor speed of the wind turbine ω_r^min, usually taken as 0.7 p.u.
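The constant c of (15) can be reproduced numerically, since Appendix A reduces (14) to maximizing F(x) = C_P(λ_opt·x)/C_P^max − x³ over x = ω_ri^s/ω_ri^0. A minimal grid-search sketch (β = 0 assumed, as in the case studies):

```python
import math

def cp(lam, beta=0.0):
    # C_P(λ, β) from Eqs. (2)-(3).
    inv_li = 1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0)
    return 0.5176 * (116.0 * inv_li - 0.4 * beta - 5.0) * math.exp(-21.0 * inv_li) + 0.0068 * lam

lams = [3.0 + 0.001 * k for k in range(10000)]
cp_max = max(cp(lam) for lam in lams)
lam_opt = max(lams, key=cp)

def F(x):
    # F(x) = C_P(λ_opt·x)/C_P^max − x³, Eq. (A.7); its maximizer is c.
    return cp(lam_opt * x) / cp_max - x**3

xs = [0.30 + 0.0005 * k for k in range(1300)]   # x ∈ [0.30, 0.95]
c = max(xs, key=F)
```

The maximizer lands near 0.74, matching the value of c reported in Appendix A, so under droop control roughly 1 − 0.74² ≈ 45% of the stored KE is releasable before the equilibrium disappears.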
Taking the rotor speed bound ω_r^min into (16), we have

(17) E_i^WT = min( H_s (ω_ri^0)^2 (1 − c^2), H_s ((ω_ri^0)^2 − (ω_r^min)^2) ), i ∈ I.

Eq. (17) depicts the maximum releasable KE of the WTG under droop control. This analysis of the releasable KE makes it possible to fully utilize the releasable KE of the WTGs.

4 Proposed variable initial gain scheme

This section proposes a novel frequency-supporting control scheme based on releasable KE, which aims to provide effective frequency support under various wind and disturbance conditions. The proposed scheme involves two levels: the wind farm level determines the initial droop gain for each WTG in the wind farm, and the WTG level adjusts the droop gain based on rotor speed to ensure the safe operation of the WTG.

4.1 Wind farm level: configuration of initial droop gain

Due to the wake effect, the initial rotor speed of each WTG varies. The total releasable energy of the wind farm E_releasable^WF and the proportional coefficients η_i are defined based on (17) to evaluate the frequency support capability of the entire wind farm and the share of each WTG's capability:

(18) E_releasable^WF = Σ_{i=1}^{n} E_i^WT, η_i = E_i^WT / E_releasable^WF, i ∈ I.

The wind farm is not expected to participate in frequency support when the load disturbance scale is relatively small, since releasing KE (i.e., deviating the operating point from the MPPT point) may lead to unnecessary wind energy loss. When the disturbance scale is large, enough KE must be released to provide effective frequency support for the system. Therefore, the expected energy released by the wind farm E_expect^WF is designed to be related to the system power imbalance.
(19) E_expect^WF ∝ max(0, ΔP̂_d − ΔP̄), i.e., E_expect^WF = T_eq · max(0, ΔP̂_d − ΔP̄),

where ΔP̄ is the threshold for the wind farm to activate frequency support control, i.e., the maximum allowable power imbalance if only the traditional SGs participate in primary frequency regulation. ΔP̄ can be derived from the classical system frequency expression [32] and the maximum allowable frequency deviation, which usually takes 0.2 Hz or 0.5 Hz; the detailed derivation is given in Appendix B. T_eq represents the equivalent frequency support time of the wind farm, which usually takes 10–15 s according to the requirements of transmission system operators (TSOs). ΔP̂_d represents the system power imbalance estimated by a certain method, e.g., [27]. By comparing the expected and releasable energy, the actual energy released by the wind farm is calculated by

(20) E_actual^WF = min(E_expect^WF, E_releasable^WF).

Fig. 7 shows the relationship between E_actual^WF and ΔP̂_d. After the total energy released by the wind farm is determined, the KE released by each WTG is further calculated based on the proportional coefficients η_i. It yields

(21) Ẽ_i = η_i · E_actual^WF, i ∈ I.

The proportion of KE released by each WTG is equal, all being E_actual^WF / E_releasable^WF. By synthesizing the initial rotor speed, the expected steady-state rotor speed of each WTG is calculated by

(22) ω̃_ri^s = sqrt( (ω_ri^0)^2 − Ẽ_i / H_s ), i ∈ I.

By substituting (22) into (11), the initial droop gain for each WTG is obtained:

(23) K_i^0 = f_i(ΔP̂_d, ω_ri^0, ω̃_ri^s), i ∈ I.

Eq. (23) determines the initial droop gain of the WTGs under different wind and disturbance conditions.

4.2 WTG level: droop gain adjustment based on rotor speed

As shown in (23) and (19), system parameters and the estimated system power imbalance are involved in calculating the initial droop gains.
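The farm-level allocation of (17)-(22) can be sketched as below. This is an illustrative example, not the paper's code: the inertia constant, rotor speeds, threshold ΔP̄ and estimated imbalance ΔP̂_d are assumed values, while c = 0.74 is the constant from Appendix A.

```python
import math

H_s = 4.0            # WTG inertia constant in s (assumed)
c = 0.74             # releasable-KE constant from Appendix A
w_min = 0.7          # minimum allowable rotor speed, p.u.
T_eq = 10.0          # equivalent support time required by the TSO, s
dP_bar = 1.0         # activation threshold ΔP̄ from Appendix B, p.u. (assumed)
dP_hat = 3.0         # estimated power imbalance ΔP̂_d, p.u. (assumed)

w0 = [1.10, 1.02, 0.95, 0.88]   # wake-affected initial rotor speeds (assumed)

# Per-WTG releasable KE, Eq. (17), and farm totals, Eq. (18).
E_WT = [min(H_s * w * w * (1 - c * c), H_s * (w * w - w_min * w_min)) for w in w0]
E_releasable = sum(E_WT)
eta = [e / E_releasable for e in E_WT]

E_expect = T_eq * max(0.0, dP_hat - dP_bar)              # Eq. (19)
E_actual = min(E_expect, E_releasable)                   # Eq. (20)

E_i = [h * E_actual for h in eta]                        # Eq. (21)
w_s = [math.sqrt(w * w - e / H_s) for w, e in zip(w0, E_i)]   # Eq. (22)
```

Every WTG releases the same fraction E_actual/E_releasable of its own releasable KE, and the resulting target speeds never fall below ω_r^min, which is exactly the property the allocation is designed to guarantee.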
Errors in these parameters may make the droop gain calculated by (23) excessive and, in extreme cases, cause the WTG to OD. Therefore, a droop gain adjustment scheme is applied to ensure safe operation. Conventional methods usually apply adaptive schemes based on linear or quadratic rotor speed functions to release more KE while ensuring safe operation. Here, however, the KE to be released has already been determined. The purpose of the adjustment scheme in this section is to maintain a relatively constant droop gain during the transient while ensuring safe operation. Therefore, an inverse proportional function of the rotor speed is introduced into the droop gain. It yields

(24) K_i = K_i^0 · g(ω_ri) = K_i^0 · [ 1 + b (1 − (ω_ri^0 − a)/(ω_ri − a)) ], i ∈ I,

where a = ω̃_ri^s − b (ω_ri^0 − ω̃_ri^s) and b is a constant. It can be seen in Fig. 8 that the additional gain reaches its maximum value of 1 at ω_ri^0 and its minimum value of 0 at ω̃_ri^s. This guarantees that Eq. (10) always has a solution, forcing the rotor speed to converge slightly above the expected steady-state rotor speed and thus ensuring safe operation. The active power reference of the WTG during the frequency support stage is given by

(25) P_i^ref = k_g · ω_ri^3 + K_i^0 · g(ω_ri) · Δf, i ∈ I.

After the frequency support stage ends, the WTGs must be controlled to return to the initial MPPT point to avoid persistent generation loss. A simple time-varying scheme similar to [22] is applied to recover the rotor speed. In the recovery stage, the power reference is given by

(26) P_i^ref = k_g · ω_ri^3 + ΔP_C · [ −(t − t_c)/ΔT + 1 ], t_c < t < t_c + ΔT, i ∈ I,

where ΔP_C = P_i^ref(t_c) − P_MPPT(t_c), t_c is the instant at which the rotor speed converges (the end of the frequency support stage), ΔT is the duration of the rotor speed recovery stage, and P_i^ref(t_c) and P_MPPT(t_c) are the active power reference and the MPPT power at t_c, respectively.
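The shaping function g(ω) of (24) and the linear recovery ramp of (26) can be sketched as follows; the speeds and the shaping constant b are illustrative assumptions.

```python
w0 = 1.05        # initial rotor speed ω_ri^0, p.u. (assumed)
ws = 0.85        # expected steady-state rotor speed ω̃_ri^s, p.u. (assumed)
b = 0.5          # shaping constant b (assumed)

a = ws - b * (w0 - ws)   # offset a of Eq. (24)

def g(w):
    # Additional gain of Eq. (24): equals 1 at ω = ω0 and 0 at ω = ω̃_s.
    return 1.0 + b * (1.0 - (w0 - a) / (w - a))

def p_recovery(t, t_c, dT, dP_C, w, k_g=1.0):
    # Recovery-stage power reference of Eq. (26): the extra power ΔP_C
    # is ramped down linearly to zero over ΔT seconds after t_c.
    return k_g * w**3 + dP_C * (1.0 - (t - t_c) / dT)
```

A quick check confirms the endpoint properties stated in the text: g(ω0) = 1 exactly, g(ω̃_s) = 0 exactly, and g decreases smoothly in between, so the effective droop gain fades out as the rotor approaches the target speed.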
4.3 Overall structure of the proposed control scheme

The diagram of the overall structure of the proposed control scheme is shown in Fig. 9. After a disturbance is detected, the power imbalance estimation is initiated immediately, and the wind farm controller collects the initial speed of each WTG to evaluate the frequency support capability of the wind farm. Then, based on the disturbance information, the actual energy to be released by each WTG is determined, and the corresponding droop gain is calculated and distributed to the WTG controllers. During the frequency support stage, each WTG controller monitors the real-time rotor speed and adjusts the droop gain to ensure the safe operation of the WTG. After the frequency support stage ends, the WTG returns to its initial operating point.

5 Case study

5.1 Test system configuration

As shown in Fig. 10, a wind farm containing nine aggregated wind turbine generators is connected to bus 21 of the modified New England 39-bus test system, and the proposed scheme is applied to verify its effectiveness under various disturbance sizes and wind speeds. The detailed data of the test system can be found in [33,34]. Each aggregated wind turbine generator is composed of 40 WTGs, each with a capacity of 2 MW, and the rated output active power of the wind farm is about 700 MW. All SGs in the test system are equipped with governors and are partially replaced by CIGs (75% under grid-following control and 25% under grid-forming control); the overall renewable energy penetration ratio is 40%. The total initial load is 6254.2 MW. The activation time delay of the frequency support control is taken as 0.3 s. The base capacity of the case study is 100 MW. The comparison involves the existing adaptive gain scheme (AGS) [10], to which a rotor speed recovery scheme similar to that of the proposed scheme has been added for a fair comparison.
The droop gain of the AGS can be described by

(27) K_i = C · (ω_ri^2 − (ω_r^min)^2) / ((ω_r^max)^2 − (ω_r^min)^2),

where the maximum gain C is set to 100 for the small-gain AGS (marked as small AGS) and 200 for the large-gain AGS (marked as large AGS). The case studies focus on investigating the effectiveness of the proposed scheme under different wind conditions and disturbance scales.

5.2 Verification under higher wind speed, v_0 = 11 m/s

The initial rotor speed of each WTG is shown in Table 1. The total active power output of the wind farm is 425 MW. More KE can be released when the wind speed is high, so the KE released by the WTGs must be reasonably arranged to avoid excessive deceleration.

5.2.1 Case 1: wind speed v_0 = 11 m/s, load disturbance scale ΔP_d = 5 p.u.

At a certain instant, the system power imbalance increases to 5 p.u. due to load disturbances; the expected positions of the steady-state rotor speed and the initial droop gains are shown in Table 2. As shown in Fig. 11(a), the FN is 59.541 Hz when the WTGs perform MPPT, and 59.546 Hz, 59.583 Hz, and 59.608 Hz when large AGS, small AGS, and the proposed scheme are applied, respectively. The proposed scheme increases the FN by 0.067 Hz compared with the situation when the WTGs perform MPPT. Compared with small AGS (which increases the FN by 0.042 Hz), the FN improvement of the proposed scheme is about 60% better. As shown in Fig. 11(b) and (c), due to the excessive gain of the large AGS in the initial stage after the disturbance, the active power of the WTGs increases excessively, even reaching the torque limit. The wind turbines then rapidly decelerate, causing a sharp decrease in active power and droop gain; when the frequency reaches its nadir, the active power has already dropped (due to the reduction of rotor speed and droop gain), resulting in poor FN improvement under large AGS.

5.2.2 Case 2: wind speed v_0 = 11 m/s, load disturbance scale ΔP_d = 2 p.u.

At a certain instant, the system power imbalance increases to 2 p.u.
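The benchmark AGS gain of (27) is a simple quadratic function of rotor speed; a minimal sketch (speed limits taken as the 0.7–1.25 p.u. range quoted in Section 2):

```python
def ags_gain(w, C, w_min=0.7, w_max=1.25):
    # Adaptive gain scheme of Eq. (27): the droop gain grows with the
    # kinetic energy stored above the minimum speed, reaching C at ω_max.
    return C * (w * w - w_min * w_min) / (w_max * w_max - w_min * w_min)
```

The gain is 0 at ω_min and C at ω_max, so unlike the proposed scheme it depends only on the local rotor speed and not on the disturbance scale, which is the tuning difficulty the case studies highlight.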
due to load disturbances; the expected positions of the steady-state rotor speed and the initial droop gains are shown in Table 3. It can be seen from Fig. 12(c) and Table 3 that the KE released by each WTG is proportionally reduced to avoid unnecessary energy release when the disturbance is slight and the releasable KE is relatively large, resulting in a shorter recovery time. Fig. 12(a) shows the system frequency after the load disturbance. When the proposed scheme and small AGS are applied, the FN is improved by 0.033 Hz and 0.029 Hz, respectively; the proposed scheme thus improves the FN by 13.8% more than small AGS. Besides, as shown in Fig. 12(b) and (c), although the small AGS releases less KE than the large AGS, it causes a lower rate of change of active power and a larger active power when the FN occurs, resulting in a higher FN.

5.3 Verification under lower wind speed, v_0 = 9 m/s

The initial rotor speed of each WTG is shown in Table 4. The total active power output of the wind farm is 226.2 MW. Less KE can be released when the wind speed is low, so the frequency support capability must be fully utilized to improve the frequency performance.

5.3.1 Case 3: wind speed v_0 = 9 m/s, load disturbance scale ΔP_d = 5 p.u.

At a specific instant, the system power imbalance increases to 5 p.u. due to load disturbances; the expected positions of the steady-state rotor speed and the initial droop gains are shown in Table 5. As shown in Fig. 13(a), the FN is increased by 0.023 Hz, 0.030 Hz, and 0.044 Hz when large AGS, small AGS, and the proposed scheme are applied, respectively. Compared to the situation when small AGS is applied, the FN improvement of the proposed scheme is 46.6% better. Furthermore, it can be seen in Table 5 that the expected position of the steady-state rotor speed has reached the lower limit, indicating that the frequency support capability of the WTG is fully utilized when little KE can be released and the disturbance is severe.
Besides, the minimum rotor speed of the wind turbines (T_1–T_9) is 0.7092 p.u. in this case, which approaches but does not exceed the rotor speed limit, verifying the effectiveness of the proposed rotor speed-based adjusting scheme. Moreover, the droop gain of the proposed method is almost maintained near its initial value during the transient, as shown in Fig. 13(d), whereas the droop gain of the traditional schemes experiences significant changes. The performance in adverse situations (low releasable KE but severe disturbance) is precisely what deserves the most attention.

5.3.2 Case 4: wind speed v_0 = 9 m/s, load disturbance scale ΔP_d = 2 p.u.

At a certain instant, the system power imbalance increases to 2 p.u. due to load disturbances; the expected positions of the steady-state rotor speed and the initial droop gains are shown in Table 6. Fig. 14(a) shows that the FN is lifted by 0.022 Hz, 0.018 Hz, and 0.034 Hz when large AGS, small AGS, and the proposed scheme are applied, respectively. Compared with large AGS, the FN improvement of the proposed scheme is 54.54% better. Unlike the above cases, the FN under large AGS is higher than that under small AGS, because the small AGS cannot fully utilize the KE stored in the wind turbine when its initial gain is too small. Compared to AGS, the proposed scheme fully releases KE under poor releasable-KE conditions and different disturbance scales, resulting in better transient frequency performance. Generally, the maximum gain C of the AGS is relatively challenging to tune and eventually becomes contradictory under different circumstances: a small gain releases less KE, while a large gain may release too much KE in the early stage of the disturbance, which may mask the actual disturbance scale from the traditional SGs and limit the improvement of the FN.
5.4 Case 5: Parameters with errors of ±10%

The proposed scheme involves system parameters such as R_sys and ΔP̂_d in the calculation of the initial droop gain, and parameter errors may lead to an excessive initial droop gain. This section takes errors in ΔP̂_d as an example to verify the performance of the proposed control scheme. Fig. 15 shows the simulation results when there is a ±10% error between the estimated value ΔP̂_d and the actual value ΔP_d. As shown in Fig. 15(a), when the estimated system power imbalance is subject to an error of ±10%, the proposed scheme can still effectively improve the FN: the improvement is 0.044 Hz (no error), 0.041 Hz (+10% error), and 0.048 Hz (−10% error), respectively, so the effectiveness deteriorates by only 7.66%. Moreover, Fig. 15(c) shows the rotor speed of T_1 under the different circumstances; the minimum rotor speed is 0.7183 p.u. (no error), 0.7365 p.u. (+10% error), and 0.7081 p.u. (−10% error), respectively, and the three values are very close. The simulation results show that although the power imbalance estimation error causes a deviation of the initial droop gain, the proposed scheme rapidly adjusts the droop gain based on the rotor speed of the WTGs, ensuring that the rotor speed does not exceed the limit.

5.5 Case 6: Coordination with energy storage system

TSOs usually require newly built wind farms to be equipped with ESSs to alleviate the fluctuation of wind power generation. ESSs can participate in the system's primary frequency regulation through additional frequency controls. Fig. 16 shows the diagram of the modified test system, and Fig. 17 shows the simulation results when the WTG and the droop-control-based ESS are coordinated. It can be seen from Fig. 17(a) that the FN and the steady-state frequency achieve better performance with the ESS participating.
This is because the ESS can elevate its output power for a long time (longer than the time scope of interest for frequency stability issues), whereas the WTG, which performs MPPT before the disturbance, cannot persistently elevate its output power, as shown in Fig. 17(b) and (d). Besides, the rotor speed trajectories of the WTG are very close regardless of whether the ESS participates, and neither exceeds the rotor speed limit, as shown in Fig. 17(c). The simulation results show that WTGs under the proposed scheme can effectively coordinate with the ESS to achieve system frequency support.

6 Conclusions

This paper proposes a novel control scheme for short-term frequency support based on an analysis of releasable KE. The relationship between droop gain, rotor speed, and system power imbalance is established and analyzed. It is found that the KE released by a WTG is subject to another kind of boundary, determined by the constraint that an equilibrium must exist; this boundary, i.e., the releasable KE, is derived analytically. On this basis, a variable initial droop gain scheme is proposed. The proposed frequency support scheme obtains the initial droop gain of each WTG based on the estimated disturbance scale and the releasable KE. During the frequency support stage, the droop gain is adjusted based on the rotor speed to ensure safe operation. After the frequency support stage, the WTG is controlled to return to its initial operating point. The advantages of the proposed scheme are: 1. it adaptively sets and adjusts the droop gain based on the power imbalance and rotor speed; 2. it is conducive to fully utilizing the frequency support capability of the wind farm; 3. it rapidly recovers the rotor speed to the MPPT point to avoid persistent wind power loss. Several numerical results have verified the effectiveness of the proposed scheme.
The proposed scheme adjusts the droop gain based on the disturbance scale and real-time rotor speed, which is conducive to fully utilizing the releasable KE and improving the FN while ensuring safe operation.

CRediT authorship contribution statement

Wenbo Li: Writing – original draft, Conceptualization, Writing – review & editing. Yujun Li: Supervision, Methodology. Jiapeng Li: Writing – review & editing. Yang Zhang: Data curation. Xiqiang Chang: Supervision, Funding acquisition. Zhongqing Sun: Supervision.

Acknowledgments

The authors would like to thank the support of the State Grid Corporation of China (5419-202340787A-3-8-KJ).

Declaration of competing interest

No potential conflict of interest was reported by the authors.

Appendix A Derivation of the solution of (14)

Denote the solution of (14) as

(A.1) ω_ri^sl = h(ω_ri^0).

Notice that the wind turbine performs MPPT before the disturbance, so the tip-speed ratio holds λ = λ_opt. Therefore, the wind speed and the initial rotor speed satisfy

(A.2) v_i = R ω_ri^0 / λ_opt.

According to (1) and (3), the mechanical power of WTG i can be calculated by

(A.3) P_mi = (1/2) ρ π R^2 v_i^3 C_P(λ_i, β) = [(1/2) ρ π R^5 / λ_opt^3] (ω_ri^0)^3 C_P(λ_opt ω_ri / ω_ri^0, β) = (k_g / C_P^max) (ω_ri^0)^3 C_P(λ_opt ω_ri / ω_ri^0, β),

where ω_ri is the rotor speed of WTG i. Based on (A.3), the steady-state mechanical power of WTG i can be calculated by

(A.4) P_mi(ω_ri^0, ω_ri^s) = (k_g / C_P^max) (ω_ri^0)^3 C_P(λ_opt ω_ri^s / ω_ri^0, β).

Since ω_ri^0 ≠ 0, substitute (A.4) into (14).
It provides

(A.5) ∂[ (k_g / C_P^max) (ω_ri^0)^3 C_P(λ_opt ω_ri^s / ω_ri^0, β) − k_g · (ω_ri^s)^3 ] / ∂ω_ri^s = 0 ⇔ ∂[ (1/C_P^max) C_P(λ_opt ω_ri^s / ω_ri^0, β) − (ω_ri^s / ω_ri^0)^3 ] / ∂(ω_ri^s / ω_ri^0) = 0,

or, equally,

(A.6) dF(x)/dx = 0,

where

(A.7) F(x) = (1/C_P^max) C_P(λ_opt x, β) − x^3, x = ω_ri^s / ω_ri^0.

The solution of (A.6) is obtained through numerical methods and represented by

(A.8) x = c.

By synthesizing (A.1), (A.7) and (A.8), it yields

(A.9) ω_ri^sl = h(ω_ri^0) = c · ω_ri^0.

In this paper, the constant c equals 0.74 by solving (A.6) numerically.

Appendix B Derivation of the threshold power

The threshold power can be calculated based on the system's allowable maximum frequency deviation, the maximum RoCoF, and the system frequency model. A low-order system frequency response model can be obtained through parameter identification or by aggregation. Additionally, if other wind farms exist, the model can be revised based on the corresponding control parameters. The frequency response expression can be written as

(B.1) f(t) = f_0 − ΔP_d R [ 1 + a e^(−ξ ω_n t) sin(ω_d t + φ_1) ],

where

(B.2) a = sqrt(1 − 2 T_R ξ ω_n + ω_n^2 T_R^2) / sqrt(1 − ξ^2), φ_1 = arctan( sqrt(1 − ξ^2) / (ξ − ω_n T_R) ),

and ΔP_d, R, ω_n, ω_d, ξ and φ_1 are the power imbalance, the equivalent primary frequency regulation gain, the natural oscillation frequency, the damped oscillation frequency, the damping ratio and the phase, respectively. By letting the derivative of (B.1) equal zero, the frequency nadir is derived as

(B.3) Δf_n = −ΔP_d R [ 1 + e^(−ξ ω_n t_n) sqrt(1 − 2 T_R ξ ω_n + ω_n^2 T_R^2) ],

where

(B.4) t_n = (−φ_1 − φ_2) / ω_d.

By differentiating (B.1) and setting t = 0, the relationship between the maximum RoCoF and the power imbalance can be derived as follows.
t = 0, the relationship between the maximum RoCoF and the power imbalance can be derived as

(B.5) RoCoF_max = (ΔP_d / R) ω_n a sin(φ_1 + φ_2)

where

(B.6) φ_2 = arctan(−√(1 − ξ²) / ξ)

Based on (B.3), (B.5), the allowable maximum frequency deviation Δf_allow, and the allowable RoCoF RoCoF_allow required by grid codes, the power threshold for initiating frequency control can finally be determined as

(B.7) ΔP̄ = min( RoCoF_allow · R / (ω_n a sin(φ_1 + φ_2)), −Δf_allow · R / (1 + e^{−ξω_n t_n} · √(1 − 2T_R ξω_n + ω_n² T_R²)) )

When the estimated power imbalance is less than this power threshold, the primary frequency regulation of the synchronous generators alone keeps the system frequency within the limits, and there is no need for the wind farm to initiate frequency control; the wind farm can remain in MPPT mode to enhance economic efficiency. However, when the estimated power imbalance exceeds the power threshold, the frequency control of the wind farm should be activated to provide frequency support for the system.
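The numerical step in Appendix A (locating the maximum of F(x) from (A.7) to obtain the constant c in (A.8)) can be reproduced with a short script. The paper's own C_P(λ, β) expression comes from its earlier equations, which are outside this excerpt, so the sketch below substitutes a commonly used empirical power-coefficient curve; the resulting constant depends on that assumption.

```python
import math

# Commonly used empirical power-coefficient curve (an assumption; the paper's
# own C_P(lambda, beta) is defined in equations not shown in this excerpt).
def c_p(lam, beta=0.0):
    lam_i = 1.0 / (1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0))
    return (0.5176 * (116.0 / lam_i - 0.4 * beta - 5.0) * math.exp(-21.0 / lam_i)
            + 0.0068 * lam)

# Locate lambda_opt and C_P^max on a fine grid (beta = 0).
lams = [0.01 * k for k in range(100, 1500)]
lam_opt = max(lams, key=c_p)
cp_max = c_p(lam_opt)

# F(x) from (A.7); its maximizer x = c is the constant in (A.8)/(A.9).
def big_f(x):
    return c_p(lam_opt * x) / cp_max - x**3

xs = [0.001 * k for k in range(200, 1000)]  # search x in (0.2, 1.0)
c = max(xs, key=big_f)
```

With this particular parameterization the grid search lands close to the paper's c = 0.74; a different C_P curve would shift the value somewhat.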
|
[
"KHESHTI",
"MAHISH",
"WU",
"LIU",
"MORREN",
"ULLAH",
"MAURICIO",
"LI",
"LEE",
"LEE",
"WU",
"YANG",
"VANDEVYVER",
"HWANG",
"CHEN",
"CHENG",
"YUAN",
"MAHISH",
"HUANG",
"HUANG",
"KANG",
"YANG",
"KHESHTI",
"MILANO",
"SHAMS",
"AZIZI",
"LI",
"KHESHTI",
"BAO",
"GUO",
"BAO",
"YANG",
"ATHAY",
"MOEINI"
] |
8e480b8d70c64423bab861f381947217_Public Interest in Hyaluronic Acid Injections for Knee Osteoarthritis in the United States and Europ_10.1016_j.artd.2022.09.003.xml
|
Public Interest in Hyaluronic Acid Injections for Knee Osteoarthritis in the United States and Europe: An International Google Trends Analysis
|
[
"Cohen, Samuel A.",
"Brophy, Robert H.",
"Chen, Antonia F.",
"Roberts, Karl C.",
"Quinn, Robert H.",
"Shea, Kevin G."
] |
Background
Hyaluronic acid injections remain a common nonsurgical alternative for the treatment of knee osteoarthritis despite limited clinical evidence and varying global recommendations regarding their use. We used the Google Trends tool to provide a quantitative analysis of public interest in hyaluronic acid injections for knee osteoarthritis in the United States and Europe.
Methods
We customized Google Trends parameters to obtain search data from January 2009 to December 2019 in both the United States and Europe. Combinations of “arthritis”, “osteoarthritis”, “hyaluronic acid”, “knee arthritis”, “knee osteoarthritis”, and “knee injection” were entered into the Google Trends tool, and trend analyses were performed.
Results
The models generated to describe public interest in hyaluronic acid for knee injections in both the United States and Europe showed increased Google queries as time progressed (P < .001). The United States growth model displayed linear growth (r2 = 0.90) while the European growth model displayed exponential growth (r2 = 0.90).
Conclusions
Our results indicate a significant increase in Google queries related to hyaluronic acid injections for knee osteoarthritis since 2009 in both the United States and Europe. Our models suggest that despite mixed evidence supporting its use, orthopedic surgeons should expect continued public interest in hyaluronic acid for knee osteoarthritis. The results of our study may help to prepare surgeons for patient inquiries, inform the creation of evidence-based shared decision-making tools, and direct future research.
|
Introduction Knee osteoarthritis is a top contributor to global disability, with significant economic burden stemming from both direct treatment costs and indirect costs due to a loss of productivity [ 1 , 2 ]. The incidence of knee osteoarthritis is projected to rise in the future given obesity and aging trends in the United States and abroad [ 3 ]. There is currently no cure for osteoarthritis, so the development of safe, effective treatments for knee osteoarthritis has the potential to significantly impact disease progression for millions of people worldwide. One alternative to surgical treatment for knee osteoarthritis that has received increased attention in recent years is viscosupplementation with intra-articular hyaluronic acid (HA). HA is a naturally occurring, nonsulfated, nonprotein glycosaminoglycan with repeating β-1,4-D-glucuronic acid and β-1,3-N-acetylglucosamine units [ 4 ]. HA has been used as part of the treatment plan for various dermatological, ophthalmological, and musculoskeletal conditions [ 5 ]. Evidence regarding the effectiveness of HA injections for knee arthritis is mixed, with varying recommendations in the United States and Europe. In the United States, the American Academy of Orthopaedic Surgeons (AAOS) released an evidence-based clinical practice guideline on the treatment of knee osteoarthritis in 2013 which strongly recommended against the routine use of HA for knee osteoarthritis, and this recommendation was downgraded to a moderate recommendation in the 2021 update [ 6 ]. The European National Institute for Health and Care Excellence (NICE) released a similar evidence-based recommendation against the use of HA for the treatment of knee arthritis in 2014 [ 7 ]. Despite these evidence-based guidelines, the 7 European countries that comprise the EUROpean VIScosupplementation Consensus group (EUROVISCO) have stood by their 2015 recommendation supporting its use [ 8 ].
While the AAOS and NICE clinical practice guideline processes do not allow committee members with financial conflicts of interest to participate in the voting process for guideline recommendations and follow rigorous standards for guideline development, EUROVISCO allows recommendations to be developed with multiple committee members having industry conflicts directly related to HA viscosupplementation [ 9 , 10 ]. The increased use of HA injections for knee osteoarthritis despite varying recommendations and inconclusive clinical evidence may stem from a combination of industry and direct-to-consumer marketing that generates public demand for HA injections, as well as the lucrative market available to physicians who provide HA treatments [ 6–8,11 ]. HA injections are not covered by many insurance providers, leading to steep out-of-pocket costs for those willing to pay [ 11 ]. The increasing number of publications describing the effectiveness of HA for knee osteoarthritis in recent years suggests the growing popularity of HA injections for knee osteoarthritis [ 12 ]; however, public interest in using HA to mitigate knee osteoarthritis pain has not previously been quantified. Internet search traffic data are one mechanism that can be used to quantify public interest in a novel treatment such as HA for knee osteoarthritis symptoms. Google Trends is an open-source tool that tracks the frequency with which search terms are entered into the Google search engine. Previous research indicates that Google Trends data describing public interest in various surgical procedures have correlated with actual health-care utilization [ 13–18 ]. Furthermore, the Google Trends tool has recently been used to track public interest in 2 other nonoperative treatments for knee osteoarthritis: stem cell injections and platelet-rich plasma therapy [ 19 , 20 ].
Trends regarding public interest in HA for knee osteoarthritis may help to guide patient counseling, inform the creation of evidence-based decision aids, and direct future research. The purpose of our study was to utilize the Google Trends tool to quantify public interest in information related to HA injections for knee osteoarthritis in the United States and Europe. We assessed whether public interest in HA therapy for knee osteoarthritis showed temporal, seasonal, income-related, or geographic trends. Material and methods The methodology was derived from the study by Cohen et al. describing public interest in platelet-rich plasma therapy for knee and hip osteoarthritis [ 20 ]. Google Trends The Google Trends tool can provide customizable analysis regarding public interest in a given search term over a specified time period in a specified geographical location. After the search term of interest is entered into the Google Trends database and the time period and location are selected, the Google Trends tool provides visuals and outputs depicting the relative popularity of the search term over the specified time period. The data are provided as relative search volume (RSV) values, which are reported on a scale of 0-100. An RSV of 100 indicates the highest percentage of searches for the topic of interest relative to all Google queries, whereas an RSV of 0 indicates that the relative interest in the search term was less than 1% of its maximum RSV [ 21 ]. Search terms Potential search terms were identified after a literature review of previous studies evaluating the use of HA for knee osteoarthritis [ 8 , 12 , 22 ]. Additionally, popular search engine inputs related to HA injections for knee osteoarthritis were discovered using the “related queries” feature of the Google Trends tool. 
Ultimately, the combination of search terms incorporated into linear, quadratic, and exponential models describing public interest in HA for knee osteoarthritis included the following keywords: “arthritis”, “osteoarthritis”, “hyaluronic acid”, “knee arthritis”, “knee osteoarthritis”, and “knee injection”. Of note, all combinations of search terms included “hyaluronic acid” in the query. Temporal trends To study temporal trends in public interest in HA for knee osteoarthritis, we entered combinations and permutations of the search terms selected into the Google Trends tool. We used the data provided by the Google Trends tool to generate a database describing public interest per search term from January 2009 to December 2019 within the United States and Europe. To identify potential geographic differences in public interest in HA for knee osteoarthritis within the United States and Europe, geographic parameters specified in the Google Trends tool were “United States of America” to describe American public interest and the 7 European countries which constitute EUROVISCO (Belgium, France, Germany, Italy, Spain, Turkey, and the United Kingdom) to represent European public interest. We created linear, quadratic, and exponential growth models describing changing public interest in HA for knee osteoarthritis over time for the search terms included in our study. We determined model strength using standard measures of accuracy—mean absolute percentage error, mean absolute deviation, and mean squared deviation. We used regression analysis to determine whether public interest in HA for knee osteoarthritis significantly increased from January 2009 to December 2019. 
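The linear-versus-exponential growth-model comparison described above can be sketched with ordinary least squares, fitting the exponential model as a linear fit on the log of the RSV series. The monthly series below is synthetic, illustrative data, not the study's actual Google Trends output.

```python
import math

def ols(x, y):
    """Ordinary least squares fit y ~ a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def r2(y, yhat):
    """Coefficient of determination for observed y vs fitted yhat."""
    my = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Synthetic monthly RSV series (132 months, Jan 2009 - Dec 2019), growing exponentially.
t = list(range(132))
rsv = [5.0 * math.exp(0.022 * ti) for ti in t]

# Linear model: RSV = a + b*t
a_lin, b_lin = ols(t, rsv)
r2_lin = r2(rsv, [a_lin + b_lin * ti for ti in t])

# Exponential model: ln(RSV) = a + b*t, i.e. RSV = e^a * e^(b*t)
a_exp, b_exp = ols(t, [math.log(v) for v in rsv])
r2_exp = r2(rsv, [math.exp(a_exp + b_exp * ti) for ti in t])
```

For an exponentially growing series the log-linear fit recovers the growth rate exactly and yields a higher r² than the linear fit, mirroring how the two regional models were distinguished.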
Seasonal trends To evaluate seasonal variations in public interest in HA treatment for knee osteoarthritis, we grouped Google Trends values from January 2009 to December 2019 for the search terms used to generate the HA growth model (“arthritis”, “osteoarthritis”, “hyaluronic acid”, “knee arthritis”, “knee osteoarthritis”, and “knee injection”) by month and season (winter: December-February, spring: March-May, summer: June-August, fall: September-November) in both the United States and Europe. Income-related trends To describe potential income-related differences in the public interest in HA for knee osteoarthritis treatment, public interest in HA for knee osteoarthritis was recorded in the 5 highest median-income states (Maryland, New Jersey, Hawaii, Massachusetts, and Connecticut) and the 5 lowest median-income states in the United States (Mississippi, West Virginia, Arkansas, New Mexico, and Louisiana) [ 23 ]. We subsequently averaged Google Trends data from the 5 highest-income states and 5 lowest-income states and created a “high-income growth model” and “low-income growth model” for public interest in HA for knee osteoarthritis. Geographic trends To describe potential geographic differences in public interest in HA for knee osteoarthritis in the United States, we generated models describing public interest in HA for knee osteoarthritis in the 5 most populous cities in the United States (New York, NY; Los Angeles, CA; Chicago, IL; Houston, TX; and Phoenix, AZ), each of which is located in a different geographic region of the country. We created linear, quadratic, and exponential growth models describing changing public interest over time for each city. Results Temporal trends The models generated to describe public interest in HA for knee injections in both the United States and Europe demonstrated a consistent increase in search volume from January 2009 to December 2019 ( P < .0001) ( Fig. 
1 ) with no noticeable decline or slowdown following the publication of the AAOS and NICE recommendations against its use. For the United States growth model, the linear model had the strongest measures of accuracy, with a mean absolute percent error of 7.3% and an r 2 = 0.90. For the European growth model, the exponential model had the strongest measures of accuracy, with a mean absolute percent error of 17.9% and an r 2 = 0.90 ( Fig. 1 ). The linear and exponential lines of best fit used to describe growth in public interest in the United States and Europe, respectively, reflect varying growth rates of public interest over the study period. Seasonal trends In both the United States and Europe, public interest in HA for knee osteoarthritis was greatest in the month of October and least in the month of December ( Table 1 ). Seasonal Google Trends analyses showed similar public interest in HA for knee osteoarthritis in the spring, summer, and fall seasons, with decreased public interest in the winter season in both the United States and Europe ( Table 2 ). Income-related trends The growth model generated to describe public interest in HA for knee osteoarthritis in the 5 highest-income states demonstrated faster growth than the model generated to describe public interest in HA for knee osteoarthritis in the 5 lowest-income states ( Fig. 2 ). Geographic trends New York City and Los Angeles showed the most consistent growth in public interest in HA for knee osteoarthritis followed by Chicago, Phoenix, and Houston ( Fig. 3 ). Discussion Our results reveal that in both the United States and Europe, there has been a significant increase in Google searches related to HA for knee osteoarthritis from 2009 to 2019. Our models predict continued growth in public interest in HA for knee osteoarthritis in both the United States and Europe despite conflicting clinical recommendation guidelines in both locations. 
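The month-to-season grouping behind the seasonal comparison above can be sketched as a simple aggregation (December is grouped with the following winter, as in the Methods section; the month numbers and RSV values used below are illustrative, not the study's data).

```python
from statistics import mean

# Meteorological seasons as defined in the Methods section.
SEASON = {12: "winter", 1: "winter", 2: "winter",
          3: "spring", 4: "spring", 5: "spring",
          6: "summer", 7: "summer", 8: "summer",
          9: "fall", 10: "fall", 11: "fall"}

def seasonal_means(monthly_rsv):
    """monthly_rsv: iterable of (month_number, rsv); returns {season: mean RSV}."""
    buckets = {}
    for month, rsv in monthly_rsv:
        buckets.setdefault(SEASON[month], []).append(rsv)
    return {season: mean(values) for season, values in buckets.items()}
```

For example, `seasonal_means([(1, 10), (2, 20), (7, 40), (8, 60)])` averages the two winter months to 15 and the two summer months to 50.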
In Europe, where the use of HA for knee osteoarthritis was recommended by EUROVISCO in 2015, there was exponential growth in public interest in HA injections for knee osteoarthritis in the years included in our study [ 8 ]. In the United States, despite recommendations from the AAOS against the use of HA for knee osteoarthritis in 2013, a linear increase in public interest in HA for knee arthritis was still observed throughout the study period [ 24 ]. While quantifying the incidence of HA use for knee osteoarthritis in the United States is difficult due to dynamic clinical recommendation guidelines, varying insurance coverage, and a lack of centralized data collection, the results of our study align with previously published literature that demonstrates increased insurance claims for HA use for knee osteoarthritis over the years that were included in our study. This suggests that the Google Trends tool may serve as an effective barometer to gauge public interest in HA for knee osteoarthritis in the future [ 25 ]. We identified seasonal, income, and geographic variations in public interest in HA for knee osteoarthritis. In both the United States and Europe, public interest was greatest in the fall season and least in winter season. Additionally, in the United States, public interest in HA for knee arthritis was greater in the 5 highest-income states than in the 5 lowest-income states. Income-related trends in public interest align with the results of a recent study that examined public interest in platelet-rich plasma therapy for knee osteoarthritis, another nonsurgical alternative for knee osteoarthritis patients seeking pain relief [ 20 ]. Income-related trends may be related to the inconsistency with which HA injections are covered by insurance companies. 
While Medicare often covers HA injections for knee osteoarthritis, 17 major insurance carriers that cover more than 64 million Americans (approximately 30% of all privately insured Americans) will not cover the cost of HA for knee osteoarthritis [ 11 ]. For patients whose insurance will not cover the cost of treatment, a sequence of 3 injections of HA for knee osteoarthritis may cost more than $2000, compared to an average of $320 for those with insurance that can be applied to the treatment [ 11 ]. Furthermore, in many clinics, surgeons are not the only providers administering HA injections. Nonoperative medical personnel who are incentivized to fill their clinics with procedures may be more likely to suggest a series of HA injections rather than, for example, a single steroid injection when counseling patients in order to increase revenue. The extraordinary costs associated with HA treatment of osteoarthritis may partially explain the increased public interest in the 5 highest-income states when compared with the 5 lowest-income states. However, it is important to note that other factors, including health education and social determinants of health, likely also influenced the trends observed in this study. Recently, the AAOS released updated guidance regarding the use of HA for knee osteoarthritis for the first time since 2013. In 2013, the AAOS gave a strong recommendation against using HA for symptomatic osteoarthritis of the knee, a shift from 2008, when the AAOS was “unable to make a recommendation for or against the use of intra-articular HA for patients with mild to moderate symptomatic knee osteoarthritis” [ 26 ]. In August 2021, the AAOS declared that “hyaluronic acid intra-articular injection is not recommended for routine use in the treatment of symptomatic osteoarthritis of the knee” [ 6 ]. Ideally, guidelines recommending against the use of HA treatment for knee osteoarthritis would reduce the frequency with which patients receive such injections.
However, Bedard et al. revealed that despite temporary changes in the frequency of HA injections for knee osteoarthritis after revised guidelines were released by the AAOS in 2013, the practice remains in common use, which aligns with the increased public interest observed in our study in the years following the 2013 AAOS announcement [ 25 ]. Bedard et al. concluded that “further interventions beyond publishing clinical practice guidelines are needed to change practice patterns” [ 25 ]. One reason why simply providing new clinical guidelines may not be effective in changing practice patterns is that requests for HA injections may come from patients themselves, often after hearing about the benefits of the therapy from media sources (not AAOS guidelines) that rarely discuss the lack of evidence supporting its use. This “implicit hype” associated with media coverage of unproven medical therapies has been observed for another nonsurgical alternative for knee osteoarthritis, platelet-rich plasma [ 27 ]. It is likely that the same phenomenon affects how patients consume information about the efficacy of HA injections for knee osteoarthritis, as the information patients encounter online regarding osteoarthritis is often not credible and is difficult for the average reader to understand [ 28 ]. Our findings that patients are increasingly curious about HA for knee osteoarthritis (as evidenced by temporal trends in Google searches), in conjunction with the fact that the information patients encounter online is often subject to “implicit hype” regarding its effectiveness, mean that orthopedic surgeons must be prepared to properly counsel patients regarding the efficacy of HA injections. Proper counseling may come in the form of decision aids that discuss the risks and benefits of HA injections for knee osteoarthritis and outline which subsets of patients may benefit from their use.
Orthopedic surgeons who anticipate public inquiries regarding popular treatment options with debatable clinical benefit, such as HA, can also prepare patient education materials that convey the evidence-based recommendations that are often missing from online searches. For example, patients may not know that 63% of studies on the therapeutic effects of HA injections for the treatment of knee osteoarthritis were industry-funded and that none of the studies with at least 1 company employee as an author reported negative conclusions about the efficacy of HA for knee osteoarthritis [ 10 ]. Discussing with patients the potential conflicts of interest that often introduce bias into the information they find online may help to inform their opinions on the subject. Our findings demonstrating increased public interest in HA for knee osteoarthritis over the last 10 years, despite limited placebo-controlled evidence of its efficacy, illustrate the need for further research on the topic. The AAOS provided its updated recommendations regarding the use of HA intra-articular injections for symptomatic osteoarthritis of the knee after reviewing 28 studies comparing the effectiveness of HA injections to controls [ 6 ]. However, while some studies demonstrated a statistically significant benefit with the use of HA, these studies did not reach the threshold for a minimal clinically important difference. Furthermore, there are concerns about conflicts of interest involving the sponsors and authors of some of the studies that favored viscosupplementation. While developing clinical practice guidelines, the AAOS ensures that experts who may have relevant conflicts of interest (here, viscosupplementation) may not actively participate in the guideline recommendation voting process, whereas the EUROVISCO 2015 guideline did not have the same restrictions.
Future research regarding the effectiveness of HA for knee osteoarthritis should include subgroup analyses and osteoarthritis severity stratification, elements often missing from prior studies [ 6 ]. There are several limitations to our study. First, while Google Trends data can evaluate online interest in HA for knee osteoarthritis, we cannot directly connect increased public interest observed online to increased volumes of HA injections to treat knee osteoarthritis symptoms. However, trends in public interest observed in this study do align with the limited information available on the frequency of HA injections in the United States throughout the study period [ 25 ]. Second, although Google accounts for more than 90% of internet search traffic, the Google Trends tool cannot evaluate public interest in HA for knee osteoarthritis on other search engines [ 29 ]. Additionally, there is limited demographic information provided by Google about the users whose searches are reflected in our study results. However, prior research from both the United States and Europe indicates that the internet is a frequent health information source for older patients in the age range of the typical osteoarthritis patient, so it is likely the demographics of those seeking information related to osteoarthritis on Google are representative of the patient population as a whole [ 30–32 ]. Conclusions Our findings demonstrate increased online public interest in HA injections for knee osteoarthritis from 2009 to 2019 in both the United States and Europe despite mixed clinical evidence regarding its efficacy and inconsistent recommendations regarding its use from governing bodies in both locations. Our models suggest that public interest in HA for knee osteoarthritis is expected to continue to increase in upcoming years. 
Inconsistent recommendations regarding its effectiveness illustrate the potential benefit of additional high-level, placebo-controlled studies to prepare orthopedic surgeons to counsel an increasingly curious public. Additionally, measures must be implemented to encourage the adoption of responsible, evidence-based marketing so that direct-to-consumer marketing and science align to improve the quality and value of effective treatments in health care, thereby reducing the utilization of expensive and ineffective treatments. Further discussion and awareness of financial conflicts of interest, and of how these impact recommendations, would be valuable for both the general public and medical professionals. Conflicts of interest The authors declare there are no conflicts of interest. For full disclosure statements refer to https://doi.org/10.1016/j.artd.2022.09.003 . Appendix A Supplementary data Conflict of Interest Statement for Cohen Conflict of Interest Statement for Brophy Conflict of Interest Statement for Chen Conflict of Interest Statement for Roberts Conflict of Interest Statement for Quinn Conflict of Interest Statement for Shea
|
[
"CROSS",
"MURPHY",
"GUPTA",
"HUYNH",
"HENROTIN",
"PRINTZ",
"VANGSNESS",
"BOWMAN",
"TIJERINA",
"TIJERINA",
"COHEN",
"COHEN",
"COHEN",
"TIJERINA",
"STROTMAN",
"COHEN",
"RICHETTE",
"JEVSEVAR",
"BEDARD",
"RACHUL",
"CHOU",
"HALL",
"QUITTSCHALLE",
"CZAJA"
] |
3a57357005e747a0aa8d8d8b041ded2f_Cross-Sectional Study of Recurrent Disc Herniation Risk Factors and Predictors of Outcomes After Pri_10.1016_j.bas.2023.101828.xml
|
Cross-Sectional Study of Recurrent Disc Herniation Risk Factors and Predictors of Outcomes After Primary Lumbar Discectomy: A STROBE Compliance
|
[
"Hugues Dokponou, Yao Christian",
"Ontsi Obame, Fresnel Lutèce",
"Mouhssani, Mohamed",
"El Akroud, Sofia",
"Siba, Zineb",
"Elmi Saad, Moussa",
"Imad-Eddine, Sahri",
"Chandide Tlemcani, Zakaria",
"Imbunhe, Napoleao",
"Yero, Diakite",
"Abderrahmane, Housni",
"Laaguili, Jawad",
"El Kacemi, Inas",
"Mohcine, Salami",
"Belhachmi, Adil",
"Chérif El Asri, Abad",
"Mostarchid, Brahim",
"Gazzaz, Miloudi"
] | null |
Lumbar Degenerative Disease (Spine Parallel Session v.2), September 26, 2023, 8:30 AM - 10:00 AM Background: The purpose of this study is to determine whether weight lifting, smoking status, occupational work, and diabetes are predictors of recurrent lumbar disc herniation (rLDH) leading to reoperation, and whether the outcome is influenced by the reoperated level and side. Methods: The records of the 2196 consecutive patients who underwent first-time single-level lumbar discectomy at our institution from June 2010 to July 2019 were reviewed. Data on the first lumbar spine surgery, the reoperation, and preoperative variables were included in the analysis. Multivariable logistic regression was performed in jamovi 2.2.5, with Cox-regression Kaplan–Meier analysis for rLDH excision at the L4-L5 and L5-S1 levels. Results: Of the 101 (4.59%) patients who presented with recurrent lumbar disc herniation (rLDH), 75 cases (3.41%) met the inclusion criteria. There were 54 cases of ipsilateral recurrent herniation and 21 contralateral, with a male predominance of 64% (n = 48). The average age at the time of recurrence was 48 ± 9.34 years (range 29-67 years). Diabetic patients who smoked were at the highest risk of rapid recurrence of lumbar disc prolapse (odds ratio 2.77, 95% CI [0.82 - 9.43]), at about 3 months after the first surgery, followed by diabetic patients who lifted weights (odds ratio 0.83, 95% CI [0.28 - 2.42]), at about 4 months after the first surgery. At the L4-L5 level, only the group of patients operated on for opposite-side recurrence (odds ratio 1.01, 95% CI [0.30 - 3.33]) did well and were pain-free immediately after surgery, compared with the group of patients operated on for same-side recurrence (odds ratio 6.73, 95% CI [2.13 - 21.21]).
Conclusions: Coexisting diabetes and smoking in the same patient increase the risk of rLDH, and the outcome is favorable, with patients pain-free after reoperation and without the need for physiotherapy, when the recurrence is at the same level on the opposite side.
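The odds ratios and confidence intervals quoted above come from multivariable logistic regression in jamovi; for a single exposure, the same quantities can be approximated from a 2×2 table with the standard Wald interval, as in this minimal sketch (the counts below are made up for illustration, not the study's data).

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts only.
or_, lo, hi = odds_ratio_ci(10, 5, 4, 8)
```

With these counts the point estimate is (10·8)/(5·4) = 4.0, with the interval straddling it; a multivariable model additionally adjusts each estimate for the other covariates.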
|
[] |
8494478e5d4b4d5facbd0c7a26100e26_Direct evidence of fadeout of collective enhancement in nuclear level density_10.1016_j.physletb.2017.06.033.xml
|
Direct evidence of fadeout of collective enhancement in nuclear level density
|
[
"Banerjee, K.",
"Roy, Pratap",
"Pandit, Deepak",
"Sadhukhan, Jhilam",
"Bhattacharya, S.",
"Bhattacharya, C.",
"Mukherjee, G.",
"Ghosh, T.K.",
"Kundu, S.",
"Sen, A.",
"Rana, T.K.",
"Manna, S.",
"Pandey, R.",
"Roy, T.",
"Dhal, A.",
"Asgar, Md.A.",
"Mukhopadhyay, S."
] |
The phenomenon of collective enhancement in nuclear level density and its fadeout has been probed using a neutron evaporation study of two strongly deformed (173Lu, 185Re) and one spherical (201Tl) compound nuclei over the excitation energy (E⁎) range of ∼ 22–56 MeV. A clear signature of the fadeout of collective enhancement in nuclear level density was observed for the first time in both the deformed evaporation residues 172Lu and 184Re, in the excitation energy range ∼ 14–21 MeV. Calculations based on finite-temperature density functional theory, as well as a macroscopic–microscopic shape transition model, have strongly established a close correlation between the observed fadeout of collective enhancement and a deformed-to-spherical nuclear shape transition in these nuclei occurring in the same excitation energy zone. Interestingly, a weak signature of fadeout has also been observed for the spherical residue 200Tl. This is due to a similar shape transition of the deformed excited-state configuration of 200Tl.
|
Understanding the single-particle and collective properties of atomic nuclei in general, and nuclear level density (NLD) in particular, is of utmost importance for the proper quantitative explanation of a wide range of physical processes in nuclear physics, astrophysics, and nuclear technology. The manifestations of the two (single-particle and collective) properties may sometimes be closely interlinked; this is at least the case for nuclear level density, where the degree of mixing is decided by the intricate interplay of single-particle and collective excitations [1–3] . Consequently, it was predicted, both phenomenologically and microscopically [4–7] , that there should be an enhancement of the NLD over its single-particle value due to collectivity, which would subsequently be damped at higher excitation. This phenomenon of enhancement and its fadeout in the NLD is assumed to depend on various factors, such as the ground-state deformation and the excitation energy of the nucleus under consideration. The fadeout of collective enhancement in the NLD is, however, yet to be ‘observed’ experimentally. An unambiguous experimental confirmation of its existence is crucial for the validation of theoretical models as well as for the realistic prediction of important reaction rates and cross sections, which are required in various areas of current interest, from the synthesis of superheavy nuclei to stellar nucleosynthesis problems. In the present letter, we report such direct experimental evidence of collective enhancement in the NLD and its fadeout in highly deformed nuclei. The microscopic origin of this phenomenon is also explained with theoretical calculations.
Phenomenologically, the collective contribution to the nuclear level density at excitation energy E⁎ and angular momentum J is expressed as [4]

(1) ρ(E⁎, J) = ρ_int(E⁎, J) K_coll(E⁎),

where ρ_int(E⁎, J) is the intrinsic single-particle level density, and K_coll (= K_rot K_vib), K_rot, K_vib are the total, rotational and vibrational enhancement factors, respectively. Microscopic shell model studies [5] have predicted that, for nuclei with finite ground-state deformation, rotational collectivity causes a large enhancement of the NLD (K_rot ∼ 100) up to moderate excitation (typically ∼ 20–30 MeV). In comparison, K_vib is negligible (≃1) except at very low excitations (typically ≲ 5 MeV), so that K_vib << K_rot. Beyond a critical value of the excitation energy (temperature), E⁎_cr (T_cr), the enhancement fades out (K_coll(E⁎_cr) ≃ 1) and the NLD is purely single-particle in nature. This phenomenon is predicted to be due to the deformed-to-spherical shape phase transition of the nucleus, beyond which it can no longer support rotational bands [6,8,9,7] . Microscopic calculations indicated that this fadeout transition is fairly sharp and takes place at a critical energy E⁎_cr ∼ 18–25 MeV [6,9] . Phenomenologically, it was estimated that this transition may be represented by a Fermi-distribution-like function with a critical energy E⁎_cr ∼ 120 A^{1/3} β² [5] . In another independent work, Björnholm et al. estimated the critical temperature for this transition as T_cr ∼ 40 A^{−1/3} β MeV, where β is the ground-state deformation [10] . However, the sharpness of this transition may be considerably blurred if thermal shape fluctuations are incorporated in the calculation, as will be shown later. On the experimental front, however, the phenomenon of damping of collectivity, and vis-à-vis the fadeout of collective enhancement in the NLD, has so far eluded direct detection.
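The phenomenological picture above (a large enhancement at low excitation, with a Fermi-distribution-like fadeout around a critical energy) can be sketched numerically. The enhancement magnitude k0, the fadeout energy e_cr, and the diffuseness d below are illustrative parameters chosen to sit in the fadeout window later reported for the deformed residues; they are not values fitted by this experiment.

```python
import math

def k_coll(e_star, e_cr=18.0, d=1.4, k0=50.0):
    """Collective enhancement factor with a Fermi-distribution-like fadeout:
    ~k0 well below e_cr, -> 1 well above it (all energies in MeV).
    k0, e_cr and d are illustrative assumptions, not fitted values."""
    return 1.0 + (k0 - 1.0) / (1.0 + math.exp((e_star - e_cr) / d))

def rho(e_star, rho_int):
    """Total level density = intrinsic single-particle part times K_coll, as in Eq. (1)."""
    return rho_int * k_coll(e_star)
```

Here k_coll(5) is close to k0 while k_coll(40) is close to 1, reproducing the transition from enhanced to purely single-particle level density across the critical energy.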
An indirect signature of collective enhancement was obtained by Junghans et al. to explain the production cross sections of projectile-like fragments produced in the high-energy fragmentation of uranium and lead [11] . On the other hand, an attempt to extract direct evidence of the enhancement and its fadeout from an α-particle evaporation study of the compound nucleus 178 Hf ⁎ (β = 0.278) yielded a null result [12] . However, our recent neutron evaporation study on the axially deformed nuclei 185 Re ⁎ and 169 Tm ⁎ has provided positive indications of the onset of the phenomenon, though proper identification of the transition zone was not possible due to the limited range of the data [13] . Therefore, the challenge is two-fold: firstly, to extract direct experimental evidence on the fadeout of the enhancement of NLD at higher excitation by identifying the transition zone, and secondly, to establish its link with the damping of collectivity and, vis-a-vis, a deformed-to-spherical shape transition of the nucleus. In this letter, we present the first direct evidence of the existence of the fadeout of collective enhancement in NLD in the deformed 172 Lu and 184 Re nuclei from an experimental study of the respective evaporation neutron energy spectra from the corresponding compound nuclei. The correlation between the observed fadeout of collectivity and the shape transition in these nuclei has also been investigated in the framework of two theoretical approaches: the Finite Temperature Density Functional Theory (FT-DFT) [14–17] and the macroscopic–microscopic shape phase transition model (MMSTM) [18–20] . Surprisingly, even in the case of spherical 200 Tl, a weak but distinct signature of the enhancement and its fadeout was visible; this has been explained in terms of nuclear structure considerations [21,22] . 
The prime objective of the present experiment was to probe the variation of the level density parameter a directly from the respective backward-angle neutron evaporation data for both deformed and non-deformed nuclei over the whole range of excitation energy of interest (encompassing the transition zone: E⁎ ∼ 20–50 MeV). The experiment was carried out using 4 He ion beams of incident energies in the range of 26–60 MeV from the K130 cyclotron at the Variable Energy Cyclotron Centre (VECC), Kolkata. Self-supporting foils of 169 Tm (thickness ∼ 1.15 mg/cm 2 ), 181 Ta (thickness ∼ 1.3 mg/cm 2 ) and 197 Au (thickness ∼ 3.1 mg/cm 2 ) were used as targets to populate the compound nuclei 173 Lu ⁎ (β ∼ 0.286), 185 Re ⁎ (β ∼ 0.221) and 201 Tl ⁎ (β ∼ −0.044), respectively, in the excitation energy range ∼ 22–56 MeV [23] . The emitted neutrons were detected using four liquid scintillator detectors [24] placed at laboratory angles of 90 ∘ , 105 ∘ , 120 ∘ and 150 ∘ at a distance of 1.5 m from the target, except for the measurement at the lowest beam energy of 26 MeV, where the detectors were kept at 75 cm from the target. Energies of the emitted neutrons were measured by the time-of-flight (TOF) technique, where each valid start of the TOF was generated from a 50-element BaF 2 γ -ray detector array when at least two detectors of the array fired simultaneously [25] . The BaF 2 array was split into two equal parts, which were placed in a staggered-castle-type geometry on the top and bottom sides of the thin-walled target chamber. The neutron–γ separation was achieved by both TOF and pulse-shape measurements. Details of the experimental technique have already been described in our earlier papers [13,26–28] . The neutron data at the most backward angle (150 ∘ ) were used for the present analysis, as the contribution of any direct component is minimum at this angle. To focus on the possible transition zone, the extracted centre-of-mass (c.m.) 
neutron kinetic-energy spectra from the decay of 173 Lu ⁎ , 185 Re ⁎ and 201 Tl ⁎ at the four lowermost incident energies of 26, 30, 35 and 40 MeV have been displayed in Fig. 1 . The slopes of the spectra at 26 and 30 MeV are distinctly different from those at 35 and 40 MeV in all three cases. The experimental neutron energy spectra were compared with the respective statistical model (SM) calculations using the code GEMINI++ [29] . Here, ρ_int(E⁎, J) is calculated using the back-shifted Fermi gas model [30] . The shell effect and its washing out with excitation energy were incorporated using the energy-dependent level density parameter a = ã f(U, J, δW, γ), with U = E⁎ − E_rot(J) + δP, where E_rot(J) is the rotational energy and δP the pairing correction. The function f(U, J, δW, γ) incorporates the effects of the shell correction and its damping at higher excitation, where δW and γ are the shell correction energy and the shell damping coefficient, respectively [29,31] . The shape of the neutron evaporation spectrum is mostly determined by the value of the level density parameter, which was estimated in terms of the best-fit values of ã = A/k, where k is called the inverse level density parameter and ã is the asymptotic (intrinsic) value of a at high excitation energies. The best-fit values of k for all three systems at various excitation energies are shown in Fig. 2 . In the compound nuclear decay process, neutrons are emitted from different stages of the decay cascade. Therefore, the average thermal excitation energy <U> was estimated using <U> = Σ(U_i w_i)/Σ(w_i), where U_i is the excitation energy of the i-th nucleus in the decay chain and w_i is the corresponding yield of neutrons. The average residue <A> was calculated in the same way; the average residues were 172 Lu, 184 Re and 200 Tl up to 35 MeV beam energy, and 171 Lu, 183 Re and 199 Tl above 35 MeV (except 60 MeV). 
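The energy-dependent level density parameter described above, with the shell correction damped as excitation energy grows, can be sketched with the standard Ignatyuk-type form a(U) = ã[1 + δW(1 − e^(−γU))/U]. This is a minimal illustration only; the values of ã, δW and γ below are placeholders, not the parameters used in the GEMINI++ analysis:

```python
import math

def level_density_parameter(U, a_tilde, delta_W, gamma):
    """Energy-dependent level density parameter with shell damping
    (Ignatyuk-type prescription):
        a(U) = a_tilde * [1 + delta_W * (1 - exp(-gamma * U)) / U]
    U in MeV, a_tilde in MeV^-1."""
    if U <= 0:
        raise ValueError("thermal excitation energy must be positive")
    return a_tilde * (1.0 + delta_W * (1.0 - math.exp(-gamma * U)) / U)

# Illustrative numbers only: A = 172 with inverse level density parameter
# k = 8 gives the asymptotic value a_tilde = A/k.
a_tilde = 172 / 8.0
aU_low = level_density_parameter(2.0, a_tilde, delta_W=1.5, gamma=0.06)
aU_high = level_density_parameter(40.0, a_tilde, delta_W=1.5, gamma=0.06)
# At high U the shell term is damped away, so a(U) approaches a_tilde.
```

For positive δW the shell term raises a(U) at low excitation and dies out at high excitation, consistent with ã being the asymptotic (intrinsic) value quoted in the text.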
However, it is interesting to note that all the isotopes of Lu, Re and Tl in the decay chain have similar ground-state deformations [23] . It is evident from Fig. 2 that, for the decay of the deformed nuclei 173 Lu ⁎ and 185 Re ⁎ , there is a sharp change (relative increase) in the value of the inverse level density parameter k within the compound nuclear excitation energy interval of 27–37 MeV, which corresponds to <U> ∼ 14–21 MeV for the evaporation residues (ER) 172 Lu and 184 Re. This amounts to an abrupt decrease in NLD in both the deformed cases at <U> ∼ 14–21 MeV. This sudden fall of NLD is a signature of the fadeout of collective enhancement in NLD. Interestingly, in the case of the spherical nucleus 201 Tl ⁎ too, a weaker but distinctly abrupt variation of k is observed in the same excitation energy region. Excluding this transition zone, the overall trend of k as a function of excitation energy matches the standard empirical systematics k_s(U) = k_0 + κ(U/A), as shown by the continuous line in Fig. 2 [28,29] . This signifies that, beyond the fadeout region, the NLD is purely single-particle in nature. This is a clear and most direct signature of the fadeout of collectivity in NLD in the two deformed nuclei – provided the observation of a similar, though weaker, signature of fadeout for the spherical nucleus 200 Tl can be explained properly. Interestingly, though all three systems had different ground-state deformations, the transition seems to occur in nearly the same excitation energy region. This will be discussed further in the following paragraphs. In the statistical model, the temperature is related to the NLD by the relation (2) 1/T = d ln ρ / d<U>. So any abrupt variation in NLD would reflect a similar variation in the temperature, which is likely to provide another direct signature of the fadeout of collective enhancement. Assuming complete thermalisation, the apparent temperatures T_app 
have been extracted by fitting the evaporated neutron energy spectra with a Maxwell distribution; the resulting T_app are shown as a function of <U> in Fig. 3 . The data were also fitted with a T_app ∝ √<U> distribution. It is clear from Fig. 3 that there is a significant deviation (rise) in T_app from the empirical systematics (T_app ∝ √<U>) for both the 172 Lu and 184 Re nuclei in the excitation energy range 14–21 MeV, whereas a weaker (but identifiable) deviation is also observed for the nucleus 200 Tl ⁎ . The observed hump in T_app corresponds to a sudden change in NLD (from Eq. (2) ), which provides another clear and straightforward signature of the fadeout of collective enhancement in NLD for the deformed Lu and Re nuclei. At the same time, the conjecture that something similar also happens for 200 Tl, though on a weaker scale, is not ruled out. It is very interesting to note here that the changeover in the inverse level density parameter k and temperature T takes place in the excitation energy region where the Giant Dipole Resonance (GDR) occurs [32] , which is a collective phenomenon in atomic nuclei. GDR decay built on excited states competes with neutron decay, but with a small branching ratio. Thus, in order to study the effect of GDR emission, the statistical model calculations were carried out including the GDR decay, using our recent measurements with alpha beams [33] . It was observed that the GDR branching ratio Γ_γ/Γ_n ∼ 10^−4 is very small for both 173 Lu and 201 Tl at 26 and 50 MeV, which clearly indicates that GDR decay has a negligible effect on the neutron evaporation spectra. The microscopic origin of the fadeout of collective enhancement in NLD was investigated within the framework of finite temperature density functional theory [14–17] using the symmetry-unrestricted DFT solver HFODD (v 2.73y) [34] . 
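The extraction of an apparent temperature from a neutron kinetic-energy spectrum by a Maxwellian fit can be illustrated as follows. The spectrum here is synthetic, generated with an assumed T = 1.2 MeV, and the simple log-linearisation is a stand-in for the actual χ² fit used in the analysis, not the authors' fitting code:

```python
import math

def fit_maxwellian_temperature(energies, counts):
    """Extract T_app from a spectrum assumed to follow a Maxwellian shape
    N(E) ~ sqrt(E) * exp(-E/T). Linearising,
        ln(N / sqrt(E)) = c - E/T,
    so an ordinary least-squares line through (E, ln(N/sqrt(E)))
    has slope -1/T."""
    xs = list(energies)
    ys = [math.log(n / math.sqrt(e)) for e, n in zip(energies, counts)]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope

# Synthetic, noise-free spectrum with T = 1.2 MeV (illustrative numbers,
# not the measured data).
T_true = 1.2
E = [0.5 + 0.5 * i for i in range(20)]            # MeV
N = [math.sqrt(e) * math.exp(-e / T_true) for e in E]
T_app = fit_maxwellian_temperature(E, N)           # recovers ~1.2 MeV
```

With noise-free input the linearisation recovers the input temperature exactly; for real data with counting statistics, a weighted fit over a restricted energy window would be used instead.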
The static (equilibrium) deformation of the system at each temperature, β_eq, was extracted by minimising the free energy. V_0, the difference between the (deformed) ground-state free energy and that corresponding to the spherical shape, is a measure of the dynamical hindrance for the excited nucleus to reach the spherical configuration. The transition point corresponds to the temperature T_cr at which V_0 ∼ 0, i.e. the effective hindrance vanishes and the system can move towards the spherical shape due to thermal fluctuation. V_0 as a function of temperature is shown in Fig. 4 (inset). The evolution of β_eq as a function of temperature is shown in Fig. 4 (upper). It is evident from the figure that such a transition takes place as the temperature goes above ∼ 1.7 MeV for 172 Lu and ∼ 1.3 MeV for 184 Re, which is very close to the temperature of our concern. In addition, an excited system can also access different configurations, leading to thermal shape fluctuations (Δβ) around β_eq. The evolution of Δβ as a function of temperature is illustrated in Fig. 4 . It is seen that Δβ grows with temperature and, beyond some point (≳ 0.8 MeV), the shape evolution profiles of the two systems significantly overlap, leading to a washing out of the variation of T_cr between the two systems. This explains the present experimental observation of the same fadeout transition zone in both systems. An independent theoretical calculation using the macroscopic–microscopic shape phase transition model [18–20] also produced almost identical results (see Fig. 4 (lower)). At this point, the case of spherical 200 Tl needs special attention. As expected, the above theoretical calculations do not predict any signature of a shape transition in 200 Tl, though the data indicate the presence of a weak shape transition in this case too. 
The ground-state shapes of all Tl nuclei are known to be spherical; however, 200 Tl becomes deformed at a very low excitation energy of about 1 MeV due to the large deformation-driving effect of the intruder orbital. The effect of the high-j intruder proton h 9/2 and neutron i 13/2 orbitals in inducing deformed shapes in both odd–even and odd–odd nuclei persists up to 201 Tl [22] . The deformed shapes are experimentally realised from the observation of rotational bands in 200 Tl having an oblate deformation of β ∼ 0.1 [21] . This explains the observation of a weak fadeout signature in 200 Tl even though it is spherical in the ground state; it further establishes the correlation between the enhancement of NLD and deformation. In summary, the sudden increase in the inverse level density parameter k and temperature T indicates the fadeout of collective enhancement in NLD for the deformed nuclei 172 Lu and 184 Re. In the case of the spherical nucleus 200 Tl too, a weaker signature of the enhancement and fadeout of NLD was seen. The definite signature of fadeout (a sudden drop of NLD) was observed in the average excitation energy range of 14–21 MeV, irrespective of the mass number or deformation of the nuclei. The experimental trends have been qualitatively confirmed by two microscopic theories (FT-DFT and MMSTM), both of which predict a deformed-to-spherical shape transition for 172 Lu and 184 Re at a temperature close to the observed fadeout temperature. The presence of thermal shape fluctuations leads to blurring of the sharpness of the transition, which explains the observation of almost the same fadeout zone irrespective of deformation. Moreover, the admixture of higher-chance neutron emission in the evaporation spectra leads to further blurring of the transition-zone information. 
In the case of the spherical nucleus 200 Tl also, the apparently contradictory observation of a weak signature of enhancement of NLD can be explained in terms of the shape transition of the deformed excited-state configuration of 200 Tl originating from its shell structure. Therefore, it may be concluded that the observation of the fadeout transition in all three cases and its correlation with the deformed-to-spherical shape transition are unequivocally established through the present experimental study for the first time. The authors thank the VECC Cyclotron staff for providing high-quality beams during the experiment and thankfully acknowledge the computing support received from the Lawrence Livermore National Laboratory (LLNL) Institutional Computing Grand Challenge program. J.S. acknowledges Nicolas Schunck of LLNL for fruitful discussions. S.B. acknowledges with thanks the financial support received as Raja Ramanna Fellow from the Department of Atomic Energy, Government of India .
|
[
"BOHR",
"CAPOTE",
"KONING",
"IGNATYUK",
"HANSEN",
"OZEN",
"KARAMPAGIA",
"GOODMAN",
"ALHASSID",
"BJORNHOLM",
"JUNGHANS",
"KOMAROV",
"ROY",
"EGIDO",
"PEI",
"MCDONNELL",
"SCHUNCK",
"ALHASSID",
"DUBREY",
"PANDIT",
"BHATTACHARRYA",
"DASGUPTA",
"MOLLER",
"BANERJEE",
"PANDIT",
"BANERJEE",
"ROY",
"ROY",
"CHARITY",
"BETHE",
"BETHE",
"IGNATYUK",
"SNOVER",
"PANDIT",
"SCHUNCK"
] |
3a483c9c029a4773a35cb5841810ae00_Evaluation of the knowledge of hematologists about the management of infectious complications in hem_10.1016_j.htct.2023.01.003.xml
|
Evaluation of the knowledge of hematologists about the management of infectious complications in hematologic patients
|
[
"Guarana, Mariana",
"Nucci, Marcio"
] |
Introduction
Infection is a serious complication among patients with hematologic malignancies (HMs) and in hematopoietic cell transplant (HCT) recipients. In most centers, the management of these complications is provided by the hematologist in person, thus demanding a knowledge of basic aspects of infection.
Methods
To evaluate the knowledge of the hematologist on infections, we invited clinicians to answer two questionnaires with 20 multiple-choice questions covering epidemiology, prophylaxis, diagnosis and treatment of infection in patients with HMs and HCT.
Results
We obtained 289 answers: 223 in survey 1 (febrile neutropenia) and 66 in survey 2 (infection in HCT). The median score was 5.0 in both surveys (range 0.5 - 9.0). In survey 1, the questions with the lowest number of correct answers were Q3 (8%), concerning the cefepime dose, and Q1 (9%), which asked about the epidemiologic link between the use of high dose cytarabine and viridans streptococcal bacteremia. In survey 2, two questions about cytomegalovirus (CMV) infection had the lowest percentage of correct answers (Q4, 12% and Q11, 18%). Clinicians attending to HCT recipients had higher scores, compared to clinicians attending to patients with HM only (median score of 5.0 and 4.5, p = 0.03, in survey 1 and 6.0 and 4.5, p = 0.001, in survey 2). In both surveys staff clinicians, residents and professors had similar scores.
Conclusion
This is the first study in Brazil assessing the knowledge of hematologists on infectious complications. The low median score overall indicates an urgent need for continuous education. Such initiatives will eventually result in better patient care.
|
Introduction Infection is a major complication in patients with hematologic malignancies (HMs) receiving intensive chemotherapy or hematopoietic cell transplantation (HCT), with high morbidity and mortality rates. 1 Infection in this scenario may be caused by bacteria, fungi, viruses and parasites, with clinical manifestations that are usually non-specific. At most centers, the management of infectious complications is provided by the hematologist in person, thus demanding a knowledge of basic aspects of infection. However, hematologists are already overwhelmed by the large amount of new information regarding the management of the underlying hematologic disease. On the other hand, major advances in the management of infectious diseases have occurred, including improvements in the culture and identification of microorganisms, 2 , 3 new biomarkers and diagnostic tools, 4 new antimicrobial drugs, 5 concepts of pharmacokinetics and pharmacodynamics of antimicrobial agents 6 and therapeutic drug monitoring, 7 among others. Therefore, managing infection in hematologic patients represents a great challenge. 8 One of the most important activities to improve the quality of patient care is education. However, to promote adequate educational programs, it is important to know the possible gaps in the knowledge of different aspects of infection in order to develop targeted educational activities. With this aim, we performed a web-based survey with two questionnaires to evaluate the level of knowledge of the hematology community on infectious complications in febrile neutropenia and HCT. Materials and methods Study population We invited clinicians from different parts of Brazil to answer a survey to evaluate the level of knowledge on the management of infectious complications in high-risk hematologic patients. The clinicians had to have experience in treating patients with hematologic malignancies and/or patients undergoing HCT. 
The recruiting of responders was performed by an announcement in the ABHH (“Associação Brasileira de Hematologia e Hemoterapia” – Brazilian Society of Hematology and Blood Transfusions) website. The participation in the survey was voluntary and anonymous and included hematologists from public and private centers. Survey Two questionnaires were developed, both with 20 multiple-choice questions covering areas of the epidemiology, prophylaxis, diagnosis and treatment of infectious complications in hematologic patients. The first questionnaire (survey 1) was intended to evaluate the knowledge of hematologists in the management of febrile neutropenia. This included the most frequent pathogens causing infection and the recognition of clinical syndromes and strategies of antibiotic and antifungal prophylaxis and treatment. The second questionnaire (survey 2) covered topics related to the management of infectious complications in autologous and allogeneic HCT. The questions were built by one of the authors (M.N.) and the selection of the correct answers was made by the same author, based on his personal experience. We also collected basic sociodemographic data on hematologists, such as age, gender, region, hospital type (public or private), clinician category (resident, staff clinician or professor) and the main area of clinical practice (HM or HCT). Each correct answer was scored as 0.5, up to the maximum score of 10 points. The full survey instrument is available in Supplementary files 1 and 2. The questionnaires were provided to hematologists as an online tool, using the Survey Monkey platform. Statistical analysis We calculated the median score obtained by each participant and compared scores according to the main area of clinical practice, clinician category and age group (< 30 years, 31 - 40, 41 - 50, 51 - 60 or > 61 years old). 
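The scoring scheme (0.5 points per correct answer, maximum 10) and the nonparametric group comparison can be sketched in plain Python. The answer key and responses below are made-up placeholders, and the rank-sum function is a bare-bones stand-in for the Mann-Whitney test run in SPSS (it returns only the U statistic, not a p-value):

```python
def score(answers, key):
    """0.5 points per correct answer; 20 questions -> maximum 10.0."""
    return 0.5 * sum(a == k for a, k in zip(answers, key))

def mann_whitney_u(sample1, sample2):
    """Mann-Whitney U statistic via pooled midranks (ties get the average
    rank). Significance would require a normal approximation or exact
    tables, which are omitted in this sketch."""
    pooled = sorted(list(sample1) + list(sample2))
    ranks, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2.0   # midrank of positions i+1..j
        i = j
    n1, n2 = len(sample1), len(sample2)
    r1 = sum(ranks[v] for v in sample1)
    u1 = r1 - n1 * (n1 + 1) / 2.0
    return min(u1, n1 * n2 - u1)

key = ["a"] * 20                       # placeholder answer key
resp = ["a"] * 9 + ["b"] * 11          # 9 correct answers -> 4.5 points
s = score(resp, key)
u = mann_whitney_u([5.0, 4.5, 6.0], [4.0, 4.5, 3.0])   # two score groups
```

In practice one would use a tested implementation such as `scipy.stats.mannwhitneyu` rather than hand-rolling the statistic; the sketch is only meant to make the scoring and comparison concrete.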
Categorical variables were expressed as absolute numbers and percentage and were compared using Chi-square or Fisher's exact test, as appropriate. Continuous variables were summarized as medians and ranges and compared using the Mann-Whitney and the Kruskal-Wallis test. A p -value < 0.05 was considered statistically significant. Database creation and statistical analyses were performed using the SPSS version 21.0 (IBM, Armonk, NY, USA). This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Results During the study period, we obtained 289 answers: 223 in survey 1 and 66 in survey 2. Demographic characteristics of participants are summarized in Table 1 . The median age of participants in surveys 1 and 2 was 38 (range 23 - 68) and 36 (range 26 - 63), respectively. Most clinicians were staff physicians working with HM in both public and private hospitals. The majority of respondents were from the southeast region of Brazil. Overall, the median score was 5.0 in both surveys (range 0.5 - 9.0). Survey 1: febrile neutropenia Overall, seven out of the 20 questions evaluated the knowledge of clinicians regarding the epidemiology, diagnosis and management of bacterial infections and 13 focused on invasive fungal diseases (IFDs). Among the seven questions dealing with bacterial disease, the lowest percentage of correct answers (8%) was question Q3. The question asked if cefepime should be given at a fixed dose and schedule or if the dose should be individualized, based on body weight and creatinine clearance. The second question with a low percentage of correct answers (9%) was question Q1, which asked about the epidemiologic link between the use of high-dose cytarabine and viridans streptococcal bacteremia. Concerning IFDs, question Q20 had the lowest percentage of correct answers (16%). In this question, we asked if secondary prophylaxis was indicated for patients with a previous episode of candidemia. 
Most of the hematologists (83%) answered that secondary prophylaxis with fluconazole was needed. The question with the highest percentage of correct answers was Q7; 82% of clinicians answered that Pseudomonas aeruginosa, Klebsiella sp. and Escherichia coli are the leading agents of Gram-negative bacteremia in febrile neutropenia. The questions with the second-highest percentage of correct answers (80%) were Q18, which tested the skills of clinicians in the management of patients with a positive blood culture for yeast, and Q14 (primary therapy for invasive aspergillosis). Overall, staff clinicians (4.5, range 0.5 - 9.0), residents (5.0, range 2.0 - 8.0) and professors (4.0, range 1.5 - 7.5) had similar scores ( p = 0.56). On the other hand, when we analyzed individual questions, some differences were observed. In Q12 (discontinuation of antibiotics after engraftment in autologous HCT) the rates of correct answers were 56%, 32% and 21% for residents, staff clinicians and professors, respectively ( p = 0.009). Likewise, in Q5 (knowledge of the characteristics of different antifungal agents), staff clinicians had the highest rate of correct answers (51%, compared to 47% for residents and only 14% for professors, p = 0.02) ( Table 2 ). The median scores of clinicians attending to HCT recipients were higher, compared to clinicians attending to patients with HM only (5.0, range 1.5 – 7.0 vs . 4.5, range 0.5 – 9.0, respectively, p = 0.03). As shown in Table 2 , in four questions, clinicians attending to HCT recipients had significantly higher percentages of correct answers, compared to clinicians attending to HM only: question Q8, testing knowledge on amphotericin B (60% vs. 46%, p = 0.04); question Q10, which asked about the management of fever, skin rash and dyspnea in autologous HCT (67% vs. 49%, p = 0.01); question Q14 (primary therapy of invasive aspergillosis, 88% vs. 
75%, p = 0.02), and; question Q15 (skin nodules representing the first clinical manifestation of invasive fusariosis (78% vs. 65%, p = 0.05). We also observed a significant difference in the median scores by age group: 5.0 (range 2.0 - 8.0) for clinicians < 30 years old, 5.0 (range 1.5 - 9.0) for those between 31 and 40 years old, 4.5 (range 0.5 - 8.0) for those between 41 and 50 years, 4.0 (range 1.5 - 7.5) for those between 51 and 60 years and 3.0 (range 1.5 - 6.5) for clinicians > 60 years old ( p < 0.001). Survey 2: infectious complications in HCT The 20 questions of survey 2 covered topics on bacterial (6 questions), fungal (7 questions) and viral (7 questions) infections. As shown in Table 3 , among the three questions with the lowest percentages of correct answers, two tested the knowledge of clinicians on the management of cytomegalovirus (CMV) infection in HCT (Q4, 12% and Q11, 18%), the third being on the treatment of IFDs (Q18, 17%), and most respondents did not know that isavuconazole is an option for the primary treatment of mucormycosis. On the other hand, questions on the correct diagnosis of bacterial infections and antibiotic use had the highest percentages of correct answers: 80% in Q19 (proper diagnosis and management of neutropenic enterocolitis), 79% in Q8 (no activity of meropenem against methicillin-resistant Staphylococcus aureus ) and 77% in Q10 (engraftment syndrome post autologous HCT). When we analyzed the scores across different groups, no statistically significant difference was observed, when comparing staff clinicians, residents and professors (5.0, range 0.5 - 9.0, 5.0, range 3.0 - 6.5, and 4.5, range 3.5 - 8.5, respectively, p = 0.93). However, looking at individual questions, some differences were observed ( Table 3 ). In Q2 (positive blood culture for C. krusei ), all nine professors answered correctly, compared to 47% of staff clinicians and 35% of residents (18%, p = 0.005). 
Comparing clinicians working with HCT or HM only, the median scores were 6.0 (range 3.5 – 9.0) and 4.5 (range 0.5 – 9.0), respectively ( p = 0.002). In three questions regarding viral infections, clinicians working with HCT had higher scores. These questions were Q6 (herpes zoster as the most frequent viral infection in the post-engraftment period of autologous HCT, 66% vs. 26%, p = 0.001), Q9 (risk factors for EBV reactivation, 80% vs. 58%, p = 0.05) and Q20 (respiratory viruses in allogeneic HCT, 83% vs. 52%, p = 0.007). Other questions with significant differences were Q13 (positive blood culture for a mold, 71% vs. 39%, p = 0.008), Q3 (risk factor for invasive aspergillosis after HCT, 51% vs. 26%, p = 0.03) and Q5 (causes of diffuse infiltrates in allogeneic HCT, 57% vs. 29%, p = 0.02). Analyzing age groups, the median scores were: 5.0 (range 3.0 - 6.5) for those aged ≤ 30 years, 5.5 (range 1.0 - 9.0) for those between 31 and 40 years, 5.0 (0.5 - 8.5) for those between 41 and 50 years, 5.5 (range 2.5 - 7.5) for those between 51 and 60 years and 5.0 (range 4.5 - 5.5) for clinicians > 60 years old, p = 0.68. Discussion In the present study, we observed that hematologists with daily practice in managing febrile neutropenia and infection in HCT had a low overall score, reflecting the urgent need for continuous education. In general, we identified gaps in the management of all types of infection (bacterial, fungal and viral), with wrong answers in diagnosis, treatment and prophylaxis. Other studies have addressed the use of surveys to evaluate the level of the physician knowledge in different scenarios. This type of study is of great importance to identify gaps in the knowledge, helping to tailor educational activities to a certain community of physicians. Regarding infection in hematology, two studies used surveys to evaluate the practices of antimicrobial management in adults and children. 
9-12 , 13 , 14 Our study focused only on hematologists working with HM and HCT, yielding several findings regarding gaps in the knowledge of infectious complications. In survey 1, we observed that most clinicians did not know that the dose of beta-lactam antibiotics should be calculated on the basis of weight and creatinine clearance (Q3). In this regard, the use of a fixed dose may result in overexposure to the antibiotics, increasing the risk of adverse events, or underexposure, resulting in a poor response to infection. We also noted a gap in the knowledge regarding the epidemiology of bacterial infections, as most clinicians were not aware that patients receiving high doses of cytarabine are at higher risk of developing viridans streptococcal bacteremia. These bacteria are colonizers of the oral cavity, and the presence of mucositis induced by high-dose cytarabine increases the risk of bloodstream infection by this pathogen. 15 - 17 We also observed that clinicians had the mistaken idea that secondary antifungal prophylaxis is needed in all IFDs. The majority answered that secondary prophylaxis is indicated for patients with a previous episode of candidemia when, in fact, there are no data to indicate that secondary prophylaxis is needed. On the other hand, 80% of clinicians were aware that a positive blood culture for yeast in a patient with febrile neutropenia should prompt the immediate initiation of appropriate antifungal therapy. 18 In survey 2, the questions with the lowest rate of correct answers were on CMV infection, which might reflect the heterogeneity in current clinical practices across different institutions. In Q4, almost 30% of clinicians did not know that ganciclovir increases the risk of bacterial and fungal infection and 43% did not know that acyclovir at high doses may prevent CMV infection. 19 Another aspect that deserves attention is the CMV surveillance in allogeneic HCT, as only 18% answered question Q11 correctly. 
The CMV surveillance after allogeneic HCT should be performed weekly, at least until day +100, and should be extended beyond day +100 in patients with graft-versus-host disease. Our survey also identified a gap in knowledge on new drugs to treat IFDs (Q7 and Q18). Isavuconazole is a broad-spectrum azole approved as primary therapy for both invasive aspergillosis and mucormycosis. 20 21 Finally, we found some differences in knowledge when we analyzed scores across groups. In both surveys, although we did not find a statistically significant difference in overall scores between staff clinicians, residents and professors, we observed that in individual questions, staff clinicians seemed to have more experience than residents and professors. This might be explained by the fact that staff clinicians attend to a larger number of patients, which gives them more expertise. Furthermore, as we expected, hematologists working in both areas (HCT and HM) had the highest score. In addition, if we look into individual questions, hematologists working with HCT had the highest percentage of adequate answer regarding viral infections, probably because CMV and EBV infections are more common in this setting. Age groups were also analyzed and, in survey 1, we observed that clinicians < 40 years of age had the highest scores, probably reflecting the fact that younger hematologists, having graduated recently, may be more updated with new information. In our survey, the questions and the selection of the correct answers were made by one of the authors, based on his personal experience in managing infection in hematologic patients in Brazil over 30 years. This likely reduced potential influences of local epidemiologic differences on the selection of correct answers by the participants. A major limitation of our study is that since the participation was voluntary, clinicians with less expertise could have declined the invitation. 
In this regard, it is possible that the overall score would be even lower had there been no selection bias. Moreover, we did not have information on the number of years of experience of clinicians in treating patients with HM and/or undergoing HCT. Conclusion In conclusion, our study allowed us to identify important gaps in the knowledge of Brazilian hematologists regarding the management of infectious complications in patients undergoing chemotherapy or HCT. These data indicate that there is an urgent need for continuous medical education in the field, as well as for guidance on the management of infection that takes into account local epidemiologic aspects. In this regard, the development of a Brazilian guideline for the management of febrile neutropenia and the creation of an educational program addressing the management of infection in hematologic patients may improve clinician knowledge and patient care. Conflicts of interest None. Supplementary materials Supplementary material associated with this article can be found in the online version at doi:10.1016/j.htct.2023.01.003 . Appendix Supplementary materials Image, application 1 Image, application 2
|
[
"KLASTERSKY",
"WINGARD",
"OPOTA",
"LUETHY",
"MAERTENS",
"VANDUIN",
"GOULENOK",
"LEWIS",
"HADOUSSA",
"ARINSBURG",
"MACDONELLYILMAZ",
"YAN",
"KIMURA",
"SCHELER",
"MULLER",
"BOCHUD",
"PAGANINI",
"NUCCI",
"LOCATELLI",
"PRENTICE",
"MAERTENS"
] |
85b22d4644674f5d99d952b69344e913_Sudden death in water Diagnostic challenges_10.1016_j.ejfs.2015.07.003.xml
|
Sudden death in water: Diagnostic challenges
|
[
"Ventura Spagnolo, Elvira",
"Mondello, Cristina",
"Cardia, Luigi",
"Zerbo, Stefania",
"Cardia, Giulio"
] |
The authors report a case of sudden death in a breath-holding diver and highlight the forensic diagnostic difficulties in determining the cause of sudden death in water. The autopsy showed increased thickness of the left ventricular wall with a distinct pattern of concentric hypertrophy, particularly evident in the subaortic interventricular septum. Histological examination revealed diffuse interstitial fibrosis and associated findings of multifocal myocyte disarray, especially evident in the subaortic interventricular septum. The analysis and discussion of this case made it possible to attribute sudden death to a lethal arrhythmia arising from myocyte disarray and hypoxia caused by breath-holding, with apnea as the triggering factor. This case demonstrates the importance of a thorough forensic investigation, particularly in histological terms, in subjects found dead in water, in order to ascertain the real cause of death, which may not always be ascribable to drowning.
|
1 Introduction The scientific community has long debated the need to distinguish between subjects dying of asphyctic syndrome from drowning and subjects found dead in water. In 1999 Modell et al. deemed it appropriate to apply to the latter the notion of “drowning without aspiration”. 1 These implications are even more important in cases where forensic investigations provide evidence that pre-existing pathological conditions may have contributed to and/or caused death in water. This issue is further complicated by the inherent difficulty of ascertaining that sudden death in water occurred at such an early stage that no drowning fluid could enter the airways and, generally, the gastrointestinal tract. In addition, other than in deaths caused by the inhibitory nervous mechanism or by asphyctic syndrome from laryngospasm, the main mechanisms underlying sudden death in water do not necessarily involve such a sudden arrest of cardiac function and breathing as to allow concomitant drowning to be ruled out. 2 These considerations are extremely relevant given the growing number of diving incidents that require medical assessment, as competitive and/or non-competitive breath-held diving activities have become increasingly popular. The authors report a case of sudden death in a breath-holding diver hunting underwater. 2 Case report A 44-year-old man died during breath-held diving while hunting under water. His body was found on the seabed about 6 h after plunging. The police authority then requested a post-mortem examination. The external examination of the body, weighing 77 kg and measuring 174 cm in length, revealed conjunctival hyperaemia and moderate leakage of reddish fluid from the nostrils. There was no evidence of traumatic injuries on the body. 
The internal examination showed in particular that the heart weighed 400 g and had significant concentric left ventricular hypertrophy, particularly in the subaortic interventricular septum (an asymmetrical form with a significant Van Noorden index), bilateral atrial enlargement and a mild prolapse of the posterior mitral leaflet. The lungs showed some subpleural petechiae and bronchial leakage of pinkish foam. Samples of tissues and fluids were collected for further histological and toxicological testing. Toxicology tests were performed on the heart, blood, urine and bile. All specimens tested negative for alcohol, illegal and psychotropic drugs. Histological examination showed hyperdistended alveoli often converging into emphysematous spaces. Seventy-five percent of these spaces were optically empty ( Fig. 1 ), while the other 25% contained scarce red blood cells ( Fig. 2 ). The examination of the heart revealed multifocal myocyte disarray affecting 35% of the left ventricular sections studied, associated with diffuse interstitial fibrosis mainly in the subaortic interventricular septum ( Fig. 3 ). It also showed significant myocyte hypertrophy and a multifocal wave-like pattern ( Fig. 4 ). The examination of the other organs revealed no pathological signs. 3 Discussion The analysis of this case, and in particular of the macroscopic and histological cardiac findings, demonstrated evidence of previously undiagnosed hypertrophic cardiomyopathy. This disease is a common cause of sudden death in young adults. It can occur during routine daily tasks and even mild physical activity. Genetic studies have demonstrated that it is associated with autosomal dominant mutations of some genes, including MYH7, MYBPC3 and TNNT2. However, these mutations were reported in slightly less than 50% of clinically affected subjects. 
3,4 In this disease, the most typical anatomopathological findings are an asymmetrical increase in the thickness of the ventricular walls (particularly in the septum) with a potential shrinkage of the cavity, a common endocardial fibrosis of the mitral leaflet, atrial enlargement and mitral valve thickening. The most common histological findings are an uneven pattern of enlarged and irregular myocardial fibres (“disarray”), increased interstitial matrix and fibrotic hyperplasia of intramural arterioles. 5–7 Hemodynamic and electrophysiological studies have demonstrated that the clinical symptoms of hypertrophic cardiac disease are attributable to mechanisms associated with structural abnormalities of the ventricular cavity and abnormal ejection due to defective relaxation of myocytes. This mechanism can be promoted by anterior systolic motion of the mitral valve with subsequent congestive cardiac failure. In other cases, abnormal electrocardiographic findings can be attributed to ischemic events due to the increased myocardial mass. However, this finding apparently contradicts the absence of any evidence of myocardial infarction or extensive replacement fibrosis even in severe hypertrophic cardiomyopathy. 4,8,9 Sudden death in hypertrophic cardiomyopathy is a frequent occurrence. It is generally caused by cardiac arrhythmias associated with re-entry mechanisms, excitable foci, a reduced ventricular cavity or small vessel disease. 5 Although the lethal mechanisms underlying arrhythmias associated with sudden death are well understood, the same cannot be said for the cause and timing of the initial triggering event. In fact, it is still unclear why some subjects die of sudden death, while others with the same anatomical abnormalities survive. It is believed that there are several factors at play, such as the severity of the macroscopic and histological abnormality or the conditions that increase myocyte metabolic stress (triggering factors). 
In this case, the authors maintain that the most relevant triggering factor was breath-holding underwater. Under such a condition the human body undergoes physiological adjustments with redistribution of the blood flow (greater intrathoracic blood volume with increased cardiac and cerebral perfusion), bradycardia and a decline in cardiac output due to increased peripheral resistance. 10–12 This sports activity involves many risks, such as barotrauma, which is a common cause of morbidity and mortality. Another risk worth considering, particularly in this case report, is certainly the onset of arrhythmias, such as extreme bradycardia (even under 10 beats per minute), atrial fibrillation, supraventricular and ventricular extrasystoles (occasionally combined), ventricular tachycardia, right bundle branch block and atrioventricular block. In 1997, Ferrigno et al. examined a number of breath-held dives in a hyperbaric pool and confirmed a faster onset of bradycardia in cold water. In 2009 Hansel et al. observed a clear-cut correlation between the onset of arrhythmias (recorded in 77% of cases) and the drop in oxygen saturation. 13 14 Arrhythmias are caused by progressive hypoxia: the resulting alterations in the hydroelectrolytic balance cause changes in the action potential, and mitochondrial hypo-anoxia is accompanied by the production of reactive oxygen species. The risk of lethal events during breath-held diving can also be attributable to glossopharyngeal breathing, which increases the air inflow into the lungs and may therefore cause hemodynamic fluctuations in both the systemic and pulmonary circulation. The increased intrathoracic pressure impedes the return of blood to the right heart. This activity is associated with a significant drop of mean arterial pressure, an increase in heart rate (up to 103 beats per minute) and a decline in differential pressure associated with the drop in cardiac output. 
These mechanisms underlie the reduction in systemic blood pressure and the tissue hypoxia causing syncope or potentially lethal arrhythmias. 15,16 In the light of these considerations and the characteristics of this case, it is believed that in the pathogenesis of sudden death the anatomical abnormality (myocyte disarray) acted as the locus minoris resistentiae to hypoxia following breath-held diving. In this case breath-holding was the triggering factor that, together with the cardiac disease, contributed to the onset of an arrhythmia that was probably lethal. 4 Conclusions The analysis and discussion of this case made it possible to attribute sudden death to a lethal arrhythmia following myocyte disarray and hypoxia caused by breath-holding. Breath-holding was therefore the triggering factor leading to the manifestation of the previously undiagnosed lethal cardiac disease. This case demonstrates the importance of a thorough forensic investigation, particularly in histological terms, in subjects found dead in water, in order to ascertain the real cause of death, which may not always be ascribable to drowning. Each case should therefore be examined in all its anatomopathological aspects. Prior to performing breath-held diving it is certainly advisable to undergo appropriate health checkups so as to confirm the eligibility of the subjects who intend to undertake these activities. Campaigns aimed at increasing awareness of the importance of prevention should be promoted along these lines. Funding None Conflict of interest None declared Ethical approval Necessary ethical approval was obtained from the institutional Ethics Committee
|
[
"MODELL",
"DELLERBA",
"MARON",
"DAVIES",
"DAVIES",
"WIGLE",
"MARON",
"LINER",
"FERRETTI",
"LINDHOLM",
"FERRIGNO",
"HANSEL",
"NOVALIJA",
"EICHINGER"
] |
ca132894485a4b999883a42b57533d94_Penetrating cardiac trauma_10.1016_j.sopen.2022.11.001.xml
|
Penetrating cardiac trauma
|
[
"Lee, Alex",
"Hameed, S. Morad",
"Kaminsky, Matt",
"Ball, Chad G."
] |
This chapter summarizes approaches to hemorrhage control in penetrating cardiac trauma, an injury that is a true test of trauma systems integration, trauma center readiness, teamwork, decision-making, technical excellence, and multidisciplinary trauma care.
|
A 22-year-old male sustains a stab wound to the anterior chest while waiting at a busy bus stop. Responding emergency health services (EHS) crews, arriving 6 minutes after the assault, note that the patient is awake and protecting his airway, and has a palpable radial pulse of 110/minute. His initial blood pressure is noted to be 88/60 mmHg. No interventions are performed at the scene, and based on pre-established transport protocols, the patient is transferred to a nearby trauma center. The trauma team begins to assemble and is informed that the patient's estimated time of arrival is within 6 minutes. How would you prepare for this patient? First principles Team dynamics and responsibility Preparation for penetrating cardiac injury (PCI) begins long before the traumatic event itself. The survival of PCI patients depends on clear transfer protocols, integrated and high-functioning multidisciplinary trauma teams at receiving centers, rapid access to necessary resources (blood bank) and equipment (resuscitative thoracotomy tray, operating room (OR)), and thoughtful and rehearsed algorithms to support critical decision-making ( Fig. 1 ). The six minutes prior to arrival is enough time for the assembling team to don complete personal protective equipment, make introductions, define roles, set up equipment, and plan for the physiologic and anatomic challenges that lie ahead. Physiologic considerations As little as 100 mL of blood accumulating in the pericardial space after a PCI can impair venous return to the heart, compromise ventricular filling, and result in diminished cardiac output and coronary perfusion, and, eventually, circulatory collapse. Early resuscitation efforts should prioritize the establishment of wide-bore vascular access (14 or 16 G peripheral lines or 8.5 F cordis central venous catheters) and initiation of massive transfusion, in order to maintain ventricular filling during diastole. 
The use of inotropic agents or pressors can transiently augment compensatory sympathetic responses and maintain cardiac output. Intubation and the initiation of positive pressure ventilation, with an associated incremental decrease in venous return to the right atrium, can tip cardiac tamponade into cardiac arrest. In patients with associated lung injuries, positive pressure ventilation that exceeds pulmonary venous pressure can promote air embolism, adding further obstructive shock and coronary occlusion to an already dire situation. Delaying intubation until surgical intervention is ready to go may avoid or minimize the duration of cardiovascular collapse. But these are just temporizing measures – ultimately, the definitive restoration of ventricular filling and cardiac output requires anatomic approaches to relieve tamponade physiology and address cardiac hemorrhage. Anatomic considerations Knowledge of the surface anatomy of the mediastinum and the position of the PCI can give the trauma team an idea of trajectory and injury pattern, and provide the key to rapid exposure, decompression, and control. Mediastinal wounds should be suspected with any clinical evidence of trauma to the ‘cardiac box’ (i.e. the space bordered by the midclavicular lines, clavicles, and costal margins – Fig. 2 ). However, the observation that a higher mortality rate has been reported for cardiac trauma associated with wounds outside the precordium highlights the importance of maintaining a high index of suspicion for any penetrating chest injury [ 1 ]. Special attention to gunshot injuries is also particularly important, as high energy bullets have a wider path of destruction than their entry wounds might suggest [ 2 ]. The palpable sternomanubrial joint is a good landmark to locate the second rib and, below it, the corresponding second intercostal space. 
The joint roughly overlies the superior border of the heart, the origin of the aorta and pulmonary artery, and the first part of the descending aorta. The manubrium resides cephalad to this joint, and below it, the main sternal body overlies a large portion of the anterior heart. The most commonly injured chamber of the heart (over 50% of PCIs) is the right ventricle due to its anterior location, which spans vertically between the third costal cartilages and the inferior cardiac border at the xiphisternal joint. The right heart border, which consists mostly of the right atrium between the vena cavae, lies parasternally between the third and sixth costal cartilages. The left heart border is formed mainly by the left ventricle, descending from the second parasternal intercostal space to the fifth intercostal space at the midclavicular line. Injuries to the left lateral mediastinal border are highly lethal due to the high-pressure system it encloses [ 3 ]. An oblique line between the medial portion of the left third intercostal space and the cardiac apex roughly traces the anterior interventricular groove where the left anterior descending artery resides (often within a fat pad). Injuries to this region mandate further planning and assessment for a potentially lacerated coronary artery and associated myocardial ischemia. The posterior surface of the heart begins at the T4/T5 intervertebral level, where the arch of the aorta resides and the trachea bifurcates. The heart then extends caudally across the mediastinum to the T8/T9 intervertebral level where it rests on the diaphragm. Preparation The trauma team in the opening scenario has the benefit of a few minutes of preparation time, during which it can don personal protective equipment, anticipate worst case scenarios (e.g. trauma arrest), designate team members for specific tasks (e.g. 
primary survey, airway interventions, extended FAST exam, intravenous access, coordination of blood transfusion, resuscitative thoracotomy, right chest tube). Those minutes can be used to harness resources and to keep them at the ready (e.g. resuscitative thoracotomy tray ( Fig. 3 ), massive transfusion, OR on standby). The patient arrives in the trauma bay and is met by the team. He is awake, but tachypneic (30 breaths/min), pale, tachycardic (120/min), and hypotensive (85/60 mmHg). He has a stab wound to the right of the sternum. What is the best diagnostic strategy? Diagnostics Initial assessment All penetrating thoracic injuries should trigger a high degree of suspicion for PCI. The classic findings of cardiac tamponade such as Beck's triad (muffled heart sounds, jugular venous distension, and hypotension), electrical alternans, and pulsus paradoxus (a fall in systolic pressure of greater than 10 mmHg during inspiration) have been eclipsed as diagnostic tools by FAST ultrasound findings of pericardial fluid, often supplemented by extended-FAST correlates such as inferior vena cava distension. If the initial assessment is suggestive of PCI, the patient's hemodynamic status determines the location and invasiveness of subsequent diagnostic and therapeutic efforts, which can range from subxiphoid pericardial window (SXPW) or urgent operative exploration in the OR, to resuscitative thoracotomy in the emergency department ( Fig. 1 ). Subxiphoid pericardial window In patients with suspected PCI, the FAST examination, which screens for pericardial fluid, has been shown to have superb test performance, while also improving both the time to definitive care and overall survival [ 4 ]. However, the sensitivity of bedside ultrasonography in detecting pericardial fluid can be limited by concomitant hemopneumothorax, synchronous lacerations to the pericardial tissue/pleura, subcutaneous emphysema, or operator inexperience. 
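The triage logic described above, in which hemodynamic status and the FAST result drive the choice among resuscitative thoracotomy, operative exploration, and SXPW, can be sketched as a coarse decision function. This is an illustrative simplification, not the authors' Fig. 1 algorithm; the return labels and the "negative" branch are assumptions added for completeness.

```python
# Illustrative sketch only: simplified triage for suspected penetrating
# cardiac injury (PCI), loosely following the logic in the text.

def pci_next_step(unstable_or_pulseless: bool, fast_result: str) -> str:
    """fast_result is one of 'positive', 'equivocal', 'negative'."""
    if unstable_or_pulseless:
        # Unstable or pulseless despite resuscitation: resuscitative
        # (left anterolateral) thoracotomy in the emergency department.
        return "resuscitative thoracotomy (ED)"
    if fast_result == "positive":
        # Stable with pericardial fluid on FAST: operative exploration,
        # or the SXPW/drainage pathway in selected stable patients.
        return "OR: sternotomy or SXPW pathway"
    if fast_result == "equivocal":
        # Stable with an equivocal FAST: SXPW as a definitive adjunct.
        return "subxiphoid pericardial window"
    # Assumed branch: no evidence of PCI on initial assessment.
    return "continued observation / serial exams"
```

The point of the sketch is only that invasiveness escalates with physiologic instability, mirroring the range of options listed in the text.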
Although SXPW is a traditional test with vanishing indications (for stable patients with suspected PCI and equivocal FAST exams), it remains a powerful and definitive diagnostic adjunct. Furthermore, a recent randomized trial involving stable patients with suspected PCI and ultrasound-detected hemopericardium, which compared an approach using SXPW, gentle pericardial irrigation, and pericardial drain placement (followed by median sternotomy for patients with ongoing hemorrhage) with immediate median sternotomy, confirmed that the SXPW approach is safe and effective [ 5 ]. Thus, performance of an SXPW still has a specific and important role in the management of PCIs that do not mandate immediate sternotomy. The SXPW can be performed in the emergency department, the OR, or even the intensive care unit. A 5 to 6 cm vertical, midline incision is centered over the xiphoid process. Deeper tissues are then separated by electrocautery or a spreading technique with scissors. Placing larger patients in the Trendelenburg position will assist in facilitating exposure to the xiphisternum and pericardium [ 6 ]. The linea alba is carefully incised without breaching the peritoneal cavity. Xiphoidectomy is completed by releasing the tissue around the xiphoid process with either electrocautery or Mayo scissors. Alternatively, the xiphoid can be hinged upwards (i.e. similar to opening the hood of a vehicle). The distal sternum is retracted anteriorly and careful dissection is engaged towards the pericardiodiaphragmatic junction. It may occasionally be necessary to divide some of the anterior diaphragm before the pericardium can be identified. When initial visualization is poor despite dissection of the surrounding tissue, which is often the case, digital palpation of the cardiac impulse can be used as a guide to locate the pericardium. Use of a sponge stick to push the precordial fat out of the way laterally in a corkscrew movement is also very helpful. 
The heart is further revealed by pushing down the inferior diaphragm. The pericardium is then grasped tautly with two long Allis clamps. A vertical 1 to 2 cm incision is made in-between the two clamps with a #15 scalpel blade (or sharp scissors) to reveal, in the absence of PCI, a trace of clear pericardial fluid and the underlying epicardium. It is essential to ensure a completely bloodless field (ideally with white surgical sponges) prior to violating the pericardium. A false positive result secondary to contaminated blood from outside the pericardial space is suboptimal. A positive window for hemopericardium is noted by the evacuation of clot or blood staining within the pericardial fluid. The pericardial sac is then irrigated with warm saline to confirm any active bleeding. Communication with anesthesia is pertinent since drainage can immediately reduce preload and trigger hemodynamic collapse in some patients. Progressive deterioration during the procedure may mandate emergency sternotomy or thoracotomy. A positive window is classically followed by a median sternotomy and pericardiotomy. If a concurrent laparotomy has been initiated, a pericardial window can also be performed through the central tendon of the diaphragm. This can be accomplished by extending the abdominal incision cephalad to the xiphoid, tracing the falciform ligament to the diaphragm, identifying an area of diaphragm to the left of the falciform, grasping and elevating the diaphragm between two Allis clamps, incising the peritoneum in a vertical direction, and, finally, feathering through the central tendon with a scalpel until the pericardial space is, most gratifyingly, entered. The patient remains profoundly hypotensive and increasingly obtunded during the rapid initiation of a massive transfusion protocol. A FAST ultrasound confirms the presence of pericardial fluid. The trauma team agrees that urgent intervention for cardiac tamponade is indicated. 
What is the best exposure and technique? Surgical exposure Left anterolateral thoracotomy and clamshell thoracotomy Hemodynamically unstable or pulseless patients with PCI (i.e. despite fluid resuscitation and/or CPR for less than 15 minutes) require resuscitative thoracotomy, often via a left anterolateral thoracotomy (LAT) [ 7 ]. The LAT approach offers rapid access to the heart and left thoracic structures. This also allows for timely decompression of cardiac tamponade, cardiac hemorrhage control, cardiac massage, aortic cross clamping, and prevention or control of air embolism. Concurrent induction of anesthesia and application of positive pressure ventilation in the setting of tamponade physiology can reduce preload to an extent that may result in profound hemodynamic instability. Resuscitation with blood products must be initiated and the thoracic operative field must be prepared and surgeon-ready prior to induction. Penetrating bodies found in situ are generally left in place until the chest is opened in case of concomitant vascular and solid organ injuries [ 8 , 9 ]. With the arms of the supine patient abducted to 90 degrees on arm boards and the breast retracted cephalad, an incision using a #10 scalpel is made from the left sternal border of the fourth or fifth intercostal space to the left posterior axillary line along the curve of the rib ( Fig. 4 ). The inframammary fold is a reliable visual landmark for this space. The intercostal muscles and pleura are subsequently transected with curved scissors along the superior margin of the rib below. A Finochietto retractor is then placed with the instrument joint on the lateral side of the incision. Before spreading the ribs, large surgical sponges may be used to cover the incised edges to avoid injury from rib spikes. Once opened, the incision can be extended medially for further exposure. 
To improve exposure and protect the pleural surface during the procedure, ventilation to the left lung can be reduced by temporary right mainstem bronchial intubation. Even as they focus on maneuvers in the left chest, surgeons must remain vigilant about possible concurrent right thoracic injuries – LAT can be supplemented by a right-sided chest tube to screen for a right hemopneumothorax that may be contributing to hemodynamic instability and that may warrant exploration. An initial right thoracotomy may be preferred for patients with right-sided chest injuries [ 10 ]. Otherwise, the LAT incision can be carried into the contralateral thorax as a bilateral anterior thoracotomy (clamshell) incision by cutting the sternum with either heavy scissors, a Lebsche knife, or a Gigli saw. Using one or two retractors, the clamshell is opened for further exposure by extending the thoracotomy posterolaterally. In desperate scenarios, a gloved assistant can manually hold the incision open [ 11 ]. Approximately 1 cm from the lateral edges of the sternum (with a variable course), the internal mammary arteries can be identified in a vertical plane. Rarely, they may be ligated as the incision is extended across the sternum. Although these vessels do not often initially bleed due to vascular spasm, they must eventually be ligated at both the proximal and distal ends before final closure. LAT is employed as the classic approach to resuscitative thoracotomy, but the clamshell incision provides access and improved visualization in poor lighting to every thoracic structure, except the posterior diaphragm and superior esophagus. LAT with clamshell extension is often the incision of choice where wide exposures are required for injury control and repair [ 11–13 ]. Median sternotomy Hemodynamically stable patients with injuries to the cardiac box can be assessed with immediate sternotomy and exploration in the OR [ 14 ]. 
Although sternotomy requires technical precision and attention to detail (i.e. to avoid postoperative complications), it also affords excellent visualization of the anterior heart and great vessels, and therefore the deployment of multiple operative techniques. A sandbag positioner can be applied posteriorly between the shoulder blades to better expose the midline (particularly for obese patients) [ 15 ]. The suprasternal notch and xiphoid process are first identified to prepare the incision between these two points. The initial skin incision is deepened to the sternal bone with cautery which is then used to trace the midline and divide the interclavicular ligament found at the superior aspect of the manubrium. This prevents subsequent binding and failure of the sternal saw. The jugular venous arch may require ligation or cauterization if closely approximated to the sternal notch. Blunt digital dissection is then engaged to rapidly separate the xiphoid process and manubrium from the underlying mediastinal structures. Opening up the retrosternal space provides additional safety from saw-associated trauma. Osteotomy is generally initiated from the caudal end, rather than the top, as extra steps may otherwise be needed to cut the sternoclavicular ligaments and develop an adequate retromanubrial space to insert the saw [ 16 ]. It is critical to keep the saw within the midline of the sternum to avoid shearing into either side of the chest. This is particularly important in the lower sternum because it is thinner and more vulnerable to saw deviation than the top portion [ 17 ]. If an electric or pneumatic saw is unavailable, a large straight bone cutter can be applied upward from the xiphisternum instead [ 18 ]. The anesthetist ideally holds patient ventilation, and the osteotomy proceeds with the saw angled upwards to avoid any injury to the underlying pleura and mediastinum. 
With towels/sponges covering the cut sternal edges to control bleeding, the retractor is then placed into the sternum. The retractor blades should ideally contact the distal manubrium to minimize any additional fractures upon rapid thoracic distraction [ 17 ]. The mediastinal fat can be dissected, and pleurae are pushed aside. It should be noted that, in general, a median sternotomy should be reserved for patients with anterior thoracic stab wounds only. Procedures required posterior to the heart can be particularly challenging to perform with efficacy through this incision (therefore, a bilateral thoracotomy is preferred for all gunshot wounds and most other penetrating injuries, particularly outside of the cardiac box). Pericardiotomy From a LAT, the pericardium is elevated and punctured with a blade 1 to 2 cm anterior to the phrenic nerve. It is then extended parallel to the nerve with scissors. The phrenic nerve lies on the pericardial surface and is immediately anterior to the pulmonary hilum. Care should be taken to avoid damaging the phrenic nerve by dividing it, or by cutting the pericardium too closely and causing a retraction injury to the nerve. After releasing a cardiac tamponade, open cardiac massage can be initiated against the sternum with one palm on the posterior aspect of the heart [ 19 ]. From a median sternotomy, after the sternal halves are retracted, the pericardium is grasped between two mosquito forceps (or Allis clamps) and a small incision is created with a #10 scalpel blade along the midline. Forceps will be unhelpful in the context of a tight, fluid-filled pericardial space. Damage to the underlying epicardium can be avoided by simply maintaining the blade at an oblique angle. The resulting defect is extended longitudinally with Metzenbaum scissors and T extensions are created along the aortic and diaphragmatic reflections. 
Cautery can also be used to open the pericardium as long as care is taken to avoid direct application to the myocardium, which can initiate rapid wide-complex tachyarrhythmias (i.e. ventricular tachycardia or fibrillation). Likewise, the thymic tissue can be divided with cautery, or pushed away to expose the pericardium covering the ascending aorta. Access to the heart must be large enough to allow the insertion of two hands to perform internal cardiac massage when indicated. A simple pericardial sling is created by tautly suturing the open edges to the skin or wound towels, thereby preventing retraction from dehydration [ 6 ]. The hemopericardium should be evacuated, and the cardiac rhythm noted for potential cardiac massage and/or defibrillation. Attention to a sudden change in arterial pressure upon opening of the pericardium (in the presence of a tamponade) is essential, as there will be an initial rise in the arterial pressure. If a continuous intrapericardial bleed is present, this rise will be followed by a drop in arterial pressure due to the continuous blood loss. It should be noted that when a thoracotomy is performed for trauma, the pericardium must always be opened. External inspection of the pericardium is not sensitive for intrapericardial blood, even in the presence of tamponade. The patient loses pulses within minutes of arriving in the trauma bay, and intubation and left anterolateral thoracotomy are simultaneously undertaken (double set up). Pericardiotomy releases torrential hemorrhage, but once the blood is cleared, the heart is seen to be empty and fibrillating, with a 3 cm laceration of the right ventricle. Massive transfusion is ongoing, mainly via a right subclavian cordis line. The patient is also receiving intravenous calcium chloride (1 g), magnesium sulfate (5 g), amiodarone (300 mg), and epinephrine (1 mg). What is the sequence of the next steps? 
Cardiac hemorrhage control

Light digital pressure may be adequate for the initial control of cardiac injuries. When faced with multiple cardiac lacerations, stapling (6-mm-wide skin staples (Auto Suture 35 W, United States Surgical Corporation, Norwalk, CT)) can be employed for temporary bleeding control. While some clinicians reinforce the stapled closure with sutures, they can alternatively be left in place without reinforcement when necessary/preferred. Unfortunately, some injuries, such as large caliber gunshot wounds or injuries proximate to the coronary arteries, cannot be appropriately managed via cardiac stapling [ 20 ]. Balloon occlusion with a clamped Foley catheter, or cuffed endotracheal tube, may address larger defects by inflating the balloon with saline inside the chamber and gently withdrawing it against the wall. Excessive traction can enlarge the laceration further and prove fatal. With the balloon inflated and extremely gentle traction applied to the catheter, Teflon-pledgeted sutures can then be passed through the ventricle from side to side over the balloon. The thin wall of the right ventricle puts the inflated balloon at significant risk of puncture as each suture is placed. Pushing the catheter and balloon into the ventricle with each bite of the suture will mitigate this complication, although blood loss may be significant. An alternative option is to employ a cuffed endotracheal tube. This provides the advantage of increased manual stability while sewing. It must be re-emphasized, however, that excessive traction on either device can enlarge the initial laceration and lead to death. Conveniently, direct venous access may be obtained through the Foley catheter itself for medication boluses (i.e. connect intravenous fluid tubing).
A novel hemostatic vacuum device, which consists of a central pillar that occludes the wound via peripheral suction, has also been employed to obtain rapid hemostasis, and therefore allow the surgeon to address synchronous injuries [ 21 , 22 ]. Atrial bleeding is fairly easy to control with a Satinsky vascular clamp, followed by sutured repair or stapled resection (linear stapler with a vascular load). Temporary inflow occlusion with vascular tapes or atraumatic clamps applied to the intrapericardial SVC and/or IVC may be necessary as a desperate remedy to visualize and control extensive or high-pressure cardiac wounds [ 23 , 24 ]. More specifically, with a longitudinal perforation or significant rupture of a ventricle, the time-honored technique of inflow occlusion is useful in avoiding cardiopulmonary bypass (CPB). Patients will immediately become hypotensive when the venae cavae are occluded. Curved aortic or angled vascular clamps are first applied to the superior and inferior venae cavae. The inferior vena cava can be accessed either within the pericardium or between the liver and diaphragm if the surgeon is familiar with this area. As the heartbeat slows, horizontal mattress sutures are inserted rapidly on either side of the defect and then crossed to control hemorrhage. A continuous suture is placed to close the defect and, before it is tied down, air is vented out of the elevated ventricle by releasing the clamps on the cavae. This cardiac response also occurs with compression of the right ventricle and pulmonary artery. Internal paddles and other resuscitation tools should be readily available. This technique must be limited to short intervals of occlusion with repeated relief, or successful rhythm restoration is unlikely after approximately 3 minutes [ 25 ]. For injuries to more vulnerable or friable myocardium, manually compressing the right atrium will result in the partial inflow occlusion necessary to repair the ventricle [ 19 ].
Injuries involving the lateral wall of the left ventricle, left pulmonary veins, left atrial appendage, or the left pulmonary artery are accessed through a “cupping” maneuver to lift the ventricles out from the pericardial well. This should be performed fairly slowly by running the fingers of the right hand between the diaphragm and the right ventricle, and then sweeping them posteriorly and cephalad. The hand cups the apex of the left ventricle, which is subsequently elevated anteriorly out of the pericardial well. This nuanced sequence will avoid rapid subsequent hypotension. Meanwhile, placing several pericardial retraction sutures in the posterior part of the pericardium is also helpful to maximize exposure. It should be noted that as procedures such as inflow occlusion are considered and/or engaged, additional (and early) consultation with our perfusionist (i.e. heart-lung machine operator) and cardiac surgical colleagues becomes increasingly important. Scenarios such as ventricular septal punctures or acquired ventricular septal defects are nuanced and mandate bypass.

Cardiac repair

Once temporary hemostasis is achieved (often with a delicate single finger), patients with signs of life should proceed to the OR for definitive repair. Optimization of technical conditions (lighting, field organization, operative exposure, instrumentation, suture availability) is essential, both to avoid iatrogenic injury and to create a precise and enduring repair. The specific reconstruction technique depends upon the characteristics of the injury, the resources available to the resuscitation area or OR, as well as the operator's experience and preference. Following pericardiotomy, the heart produces an additional lateral rocking motion without the pericardium holding it in place. This movement can be safely minimized by an assistant's Satinsky clamp on the acute anteroinferior angle of the right ventricle [ 26 ].
This technique is often more straightforward in a heart that is less filled with blood. Use of the Octopus tissue stabilizer (Medtronic, Dublin, Ireland) is also a reasonable alternative, if available. Simple ventricular laceration repair involves passing 4–0 SH or 3–0 MH polypropylene sutures (double armed) under the digital occlusion and out the other side in one pass. The two ends of the sutures are gently pulled to approximate the lacerated edges and control bleeding, and the needle is reinserted across the finger and back out the other side. This completes a figure-of-eight stitch as the finger is subsequently withdrawn. These steps are repeated along the defect as needed. A potentially safer alternative is to employ pledgeted polypropylene sutures with a horizontal mattress technique when possible to reduce the risk of tearing the heart tissue. Although Teflon pledgets are sometimes unnecessary on a thick and robust myocardium, they can be helpful for a friable and edematous heart, the right ventricle, or areas with surrounding contusion and hemorrhage. This technique generally provides additional seal and protection [ 19 , 27 , 28 ] (Fig. 5). It is important to highlight that the principles of suturing cardiac muscle are similar to sewing other soft structures such as the liver and pancreas. More specifically, correctly selecting the optimal suture and needle type/size, maximizing delicate soft tissue handling, using the entire curve of the needle for insertion and egress, tying flat smooth knots, and avoiding all regional distractions are critical to technical success. A vigorously pumping heart can create difficulty in passing the needle through both edges of the wound within one movement. Instead, an additional needle holder in the non-dominant hand can be utilized to catch the needle from inside the defect after it is inserted. The needle is then passed through the opposite edge of the laceration.
Timing the needle entry to diastole can also prevent inadvertent slashing of the cardiac musculature. Furthermore, if a Foley catheter is employed to control the bleeding, the catheter can be carefully pushed into the chamber each time the needle is inserted, thereby preventing perforation of the balloon. Larger defects, including gunshot wounds, may be closed with interrupted horizontal mattress sutures instead [ 6 ]. Whichever strategy is employed, adequate suture bites through the myocardium must be ensured to lower the risk of tissue tearing. This is particularly important for the thinner right ventricle. As previously noted, the selection of needle size is critical to success. Atrial defects are repaired by placing a vascular clamp under the perforation. Preventing additional traction to the atrial wall is essential to avoid lacerating it. Subsequently, simple, continuous stitches with 5–0 polypropylene sutures on an RB needle can be utilized. Alternatively, a 6–0 polypropylene suture may be employed if the atrial tissue is exceptionally thin. Running horizontal mattress stitches may be more appropriate for thin atrial walls, which require a technique that spreads tension along the entire wound edge [ 6 ]. When the injury cannot be controlled with a single clamp, multiple Allis clamps can be engaged in a row to pinch the wounded edges together and subsequently prepare for a mattress repair underneath. If the atrium is especially dilated, pledget reinforcement may be required. When time is limited, or such bioprosthetic materials are not readily available, small pieces of the pericardium can also be used to buttress sutures [ 29 ]. Pledgets are cut and fashioned to an appropriate size. Two needles from a double-ended suture are passed through the pericardial pledget on one side, then across the laceration, and out the opposite pericardial edge, and the two ends are pulled.
The second pledget is apposed to the ventricular wound by irrigation, and then the sutures are tied to complete the stitch. This simple technique is also especially useful when small pledgets are required for vascular anastomoses and repairs. As mentioned, the beating heart often presents a challenge for accurate suture placement, posing a risk of needle-stick injury during digital occlusion. Intravenous administration of adenosine has therefore been employed to induce a brief asystole and thereby facilitate repairs on the stationary heart [ 30–32 ]. Low doses of adenosine (3 to 12 mg) stop the heart for 10 to 30 seconds, during which repair and comprehensive inspection are completed. Adverse effects, including atrioventricular block and hypotension, usually resolve when the drug is discontinued, making adenosine a reliable adjunct to repair [ 30 ]. Alongside adenosine infusion, several additional maneuvers for the inspection and repair of challenging cardiac injuries are relevant. Management of wounds to the posterior aspect of the heart requires special care, as lifting the heart kinks the great vessels, causing bradycardia, hypotension, and arrest. To access the posterior heart, however, it must often be ‘flipped up’ prior to suture repair. Close communication with the anesthesiologist and rapid surgical technique are essential, given the typical induction of complete cardiac arrest after positioning. As a result, intermittent restoration of the heart back into its natural position is required for cardiac relief during prolonged repairs. Alternatively, gentle lifting of the heart by gradually stacking one to three folded laparotomy pads provides time for the heart to adapt to the planned displacement. Depending on their availability, off-pump cardiac stabilization devices are also an option to gain safe elevation and rotation for cardiac exposure [ 33 ].
In desperate cases, it may ultimately be necessary to elevate an atraumatic clamp applied to the acute anterior-inferior margin of the right ventricle and repair the wound as quickly as possible [ 26 ]. Defects adjacent to the coronary arteries also warrant additional comment, as coronary blood flow can be inadvertently compromised during the repair. Interrupted, horizontal mattress sutures are placed beneath the bed of the coronary vessel to prevent vascular constriction. Pledgets may be omitted unless the sutures are likely to tear through the myocardium and vessel. Accordingly, suturing alongside a coronary artery is guided by monitoring for ST segment changes or Q waves. If these occur, urgent stitch removal and re-suturing may be required. Despite the multiple strategies that augment cardiorrhaphy, injuries adjacent to the coronary arteries may require a sutureless approach. Application of a collagen mesh dressing covered by fibrin glue to occlude a stab wound near a branch of the circumflex has previously been reported [ 34 ]. Likewise, defects that are complex, such as large lacerations or a coronary sinus injury, may necessitate the use of autologous pericardial or synthetic patches, which are subsequently strengthened by applying biologic glue agents [ 14 , 35 ]. When neither is available, tissue patches can be obtained from the anterior rectus fascia. Institution of CPB when bleeding is impossible to control may help further management with patch grafting, including reinforcing the seal with an omentum or muscle flap for additional protection [ 36 ].

Foreign body removal

Occasionally, trauma surgeons may encounter an in-situ cardiac foreign body. Symptoms attributable to these foreign bodies, including cardiac tamponade and arrhythmia, are considered a primary indication for removal [ 37 , 38 ]. Simple extraction of the offending object does, however, pose further risks of damage to a potentially unstable patient (e.g.
a missile that approximates a coronary artery or is deeply embedded within the myocardium and tamponading the wound) [ 39 , 40 ]. Furthermore, manipulation of foreign bodies contained within the left heart requires great care and speed due to the high risk of critical embolization [ 41 ]. When removal is indicated, embedded projectiles can be manually extracted with forceps after sewing pledgeted, double-armed horizontal mattress sutures around the body and slowly tightening the stitches during extraction [ 33 ]. Nails must be removed by careful twisting instead of simply pulling and risking damage to the surrounding wound edges. Alternatively, purse-string sutures may be placed at the entry site to close the defect immediately after removal. Intravenous adenosine infusion can also be considered as an adjunctive maneuver to lower contractility and facilitate safe extraction of the penetrating object [ 40 ]. The concomitant use of intraoperative transesophageal echocardiography may also be particularly helpful for visualization of a bleeding heart, reinforcing assessment of the penetrating body and guiding surgical instruments [ 42 ]. The decision to institute CPB must be balanced against its risks to the patient, but when appropriate should be triggered early in the surgical process given its clear benefit of ensuring adequate repair and/or foreign body extraction.

Coronary artery injuries and cardiopulmonary bypass

Injuries to the coronary arteries are infrequent and associated with high rates of prehospital and inpatient mortality [ 43 , 44 ]. Decision making and treatment (including the decision to go on CPB) can be complex and require early collaboration with cardiac surgery if possible. It must be re-emphasized again that this close relationship with our cardiac surgical colleagues is essential, as their use of CPB may be more liberal and allow avoidance of prolonged inflow occlusion.
The general approach to lacerated coronary arteries consists of ligating injuries to small branches or distal vessels less than 1 mm in diameter, and bypassing major arteries in patients with proximal coronary arterial wounds, although small puncture injuries can be repaired with 6–0 or 7–0 polypropylene sutures. Ligation of a distal or narrow artery must be followed by a period of close observation for possible cardiac ischemia and/or failure. Injuries to the left anterior descending artery, which are relatively common, are particularly prone to these complications, as they can devascularize up to 50% of the left ventricle. If significant myocardial injury is identified early enough, ligation should be reversed immediately. Intraluminal shunts that bridge the lacerated ends of the vessel have also been used to stop bleeding while conserving regional ventricular function and perfusion distal to the laceration [ 45 , 46 ]. Using Potts scissors, the wound can also be carefully extended to facilitate insertion of the shunt and subsequent distal anastomosis. Coronary artery bypass may be the only approach available to salvage a lacerated vessel, even in trauma patients with major comorbidities and substantial risks consequent to bypass itself. Off-pump coronary artery bypass grafting may achieve repairs with less anticoagulation, and without cardioplegic arrest, or the risks associated with CPB in hemodynamically stable patients. When heart stabilizing devices are not immediately available for off-pump CABG, the centre of a Teflon patch may be cut to form a square frame that encloses the anastomotic site [ 47 ]. The corners of the patch are deeply sutured into uninjured myocardium, and gentle upward traction is applied to the loosely tied threads. This locally immobilizes the target and prepares the anastomotic site. Bypass can then proceed with a single 6–0 or 7–0 polypropylene suture on a fine taper point needle. 
The left internal mammary artery is often considered the first choice for aortocoronary bypass grafting, despite its poor patency rates during episodes of vascular spasm in an unstable patient [ 48 ]. Likewise, use of this vessel is also limited when CPB is instituted through a clamshell incision, which transects the artery. Alternatively, reversed saphenous vein grafting remains an option. Injuries causing significant cardiac dysfunction, arrhythmias, or impending intractable heart failure despite attempts at repair may necessitate engaging acute CPB. Other indications for CPB include the inability to manage a wound due to its large size or location, as well as failure to repair the wound despite hemodynamic stability or inotrope administration [ 49 ]. In these cases, CPB provides a platform for thorough inspection and exposure of the heart, as well as definitive myocardial repair within a bloodless field. Uncontrollable bleeding and/or postoperative coagulopathy and inflammation from systemic heparinization during CPB, especially in patients with multiple injuries, can be obviated to some extent with heparin-bonded circuits [ 43 ]. Marginal cardiac dysfunction, such as that resulting from distal vessel ligation, may be adequately treated with an intra-aortic balloon pump to provide sufficient cardiac output.

The right ventricular laceration is rapidly closed with interrupted 3–0 horizontal mattress sutures. With ongoing resuscitation, the heart begins to fill, but organized contractions are not observed. No other cardiac wounds are identified. What options remain?

Adjuncts to cardiac repair

Open cardiac massage

When organized contractions fail to return, open cardiac massage can maintain some cerebral and coronary perfusion. In open cardiac massage, the heart is squeezed between two flat palms from the apex while avoiding any digital penetration into the myocardium.
During compressions, the fullness of the heart can provide a sense of the patient's volume status and the adequacy of the resuscitation. The effectiveness of compressions can be gauged by arterial line waveforms or by end tidal carbon dioxide measurements when these adjuncts are available. When needed for ventricular fibrillation, defibrillation via internal paddles commences at 10 joules and is repeated at 10 to 50 joules as required.

Aortic clamping: indications, technique

Bleeding above the diaphragm generally precludes the need for aortic cross-clamping, as it may worsen cardiac hemorrhage. When a patient is close to exsanguination, or in profound shock, however, occluding the descending thoracic aorta may be necessary to redistribute any remaining aortic pressure to the myocardium and brain. From an anterolateral thoracotomy (or clamshell thoracotomy in cases of poor exposure or visualization), the left lung is elevated anteriorly, followed by an incision to the mediastinal pleura and the inferior pulmonary ligament. The aorta can be identified just above the diaphragm as the first tubular structure anterior to the thoracic spine. Blunt dissection is performed to separate the pleura along the anterior and posterior borders of the aorta. This must be just enough to place a clamp without significantly disrupting the thoracic and spinal blood supply. Perfusion to the spinal cord can also be maximized if the clamp can be placed closer to the aortic hiatus of the diaphragm. Manual occlusion between the thumb and index finger, or simply against the vertebral body as a desperate measure, can be engaged prior to formal clamping. To avoid esophageal perforation, an in-situ nasogastric tube may be used as a guide to differentiate the aorta from the esophagus.
Aortic cross clamping, like resuscitative endovascular balloon occlusion of the aorta (REBOA), will increase blood pressure in the proximal circulation, thereby increasing coronary perfusion and, possibly, myocardial contractility. Cross clamping may be a useful adjunct in refractory cardiac arrest or post-arrest cardiogenic shock. However, the technique comes at great cost in terms of warm ischemia time, and must be limited to the time it takes to restore intravascular volume or a maximum of 30 minutes.

Pulmonary hilar cross clamping

Pulmonary injuries create the risk of air embolism, especially with positive pressure ventilation and low pulmonary venous pressures in the context of hemorrhagic shock. LAT and clamshell thoracotomy can provide access to the injured pulmonary hilum, which can be cross clamped at end expiration after the inferior pulmonary ligaments are partially taken down. This measure can both reduce the risk of air embolization and control ongoing pulmonary hemorrhage until the lung injury can be controlled via tractotomy or resection.

Extracorporeal life support

Extracorporeal life support (ECLS) is emerging as an adjunct in the care of patients with penetrating chest trauma and refractory shock. Concerns about systemic anticoagulation had previously limited ECLS use in trauma, but have been offset by potential advantages, including rapid cannulation, and the well-documented feasibility of avoiding therapeutic anticoagulation when heparin-bonded circuits are used [ 50 ]. Cardiopulmonary failure is managed using veno-arterial ECLS with femoral-femoral cannulation (i.e. percutaneous access with a Seldinger technique), preferably without a skin incision to prevent further bleeding. The venous cannula is positioned near the junction between the right atrium and the inferior vena cava to optimize venous drainage. The arterial cannula is directed towards the distal aorta. This will offer complete circulatory and respiratory support.
ECLS enables trauma teams to control temperature at around 36 °C, which may be useful in helping to reduce secondary brain injury for patients with cerebral injuries or those who have received cardiopulmonary resuscitation. In cases where a complex cardiac operation cannot be tolerated in a resuscitated patient or where myocardial stunning results in transient cardiogenic shock, ECLS may serve as a bridge to recovery, or to further investigations and interventions.

After a brief period of resuscitation and open cardiac massage, the heart begins to contract! With time, adequate spontaneous cardiac output is confirmed by end tidal carbon dioxide and blood pressure measurements. The patient receives more calcium and magnesium, as well as intravenous bicarbonate to address anticipated effects of lower body reperfusion. He is moved from the trauma bay to the operating room for definitive hemostasis, placement of chest tubes and mediastinal drains. What are the best closure strategies?

Closure

Pericardial closure

Pericardial closure is favored in most non-trauma operations to minimize postoperative retrosternal adhesions (post sternotomy) and prevent lateral cardiac herniation (post LAT). This is especially true in cases of a repeated sternotomy, as it improves hemodynamics and protects against cardiac tamponade. In trauma patients, however, closing the pericardium has the potential to lead to iatrogenic tamponade because of myocardial edema due to direct injury and/or resuscitation. The risks of sealing the heart in these cases may outweigh the benefits. When primary closure is feasible, it can be performed by approximating the edges of the pericardium with a 2–0 absorbable (Vicryl) continuous stitch at 1 cm intervals beginning at the cranial end. If a reoperation is possible or intended, non-absorbable sutures may be employed to guide future re-opening. A 2 cm gap is left at the diaphragmatic end for a mediastinal drain placed anterior to the defect.
When closure is still preferred despite cardiac dilation or despite a limited supply of native pericardial tissue, the defect can be conveniently covered with pericardial fat pads that are readily dissected and sutured onto the pericardial edges.

Drains

Proper placement of mediastinal and pleural tubes can prevent further complications from recurrent hemopneumothoraxes, cardiac tamponade and/or infection. Prophylactic antibiotics are also justified for thoracostomy tubes in patients with penetrating injuries [ 51 ]. Standard 24 to 32 French chest tubes are inserted through the intercostal spaces in the midaxillary line. Although the fourth or fifth intercostal space is often used, they may not be available if a thoracotomy was performed at the same level. Drains can alternatively be placed through the lower intercostal spaces with the help of ultrasound guidance. Air drainage is best achieved by placing the drains in an anterior direction, whereas the tube can be directed posteriorly for evacuating blood. Alternatively, tube thoracostomy can proceed through epigastric incisions by ensuring they are placed laterally within the rectus fascia to prevent subsequent herniation [ 52 ]. Furthermore, mediastinal or pericardial drains can be inserted along the midline, often below the median sternotomy incision in the epigastrium (angled tubes may be particularly helpful). As previously discussed, a distal gap is left behind when closing the pericardium to facilitate drain placement. It is critical to carefully label all drains (i.e. pleural, mediastinal, or pericardial) within the thorax in the postoperative setting. Nursing needs can be complex, and suction has been inadvertently placed on pericardial drains, leading to negative pressure on a sutured cardiac repair.

Median sternotomy

Accurate sternal re-approximation and closure are key factors in preventing postoperative pain, sternal dehiscence and infection [ 53 ].
Figure-of-eight wires are often used as a fast and stable closure technique that is comparable to newer sternal closure methods with regard to wound complication rates [ 53 , 54 ]. Four to eight stainless steel wires are passed through the manubrium and body, including one that bridges the manubriosternal joint. Before closure, a towel can be placed between the sternal halves to protect the heart. Minimal bleeding occurs when passing the needle perpendicularly through the bone, employing the needle holder between the proximal and middle third of the needle and advancing it vertically [ 9 , 52 ]. A concave instrument may also be positioned under the sternum to further protect the mediastinum from injury by the needle. Optimal approximation and stability are achieved by inserting wires at equal distances from the midline at the appropriate vertical level. When this is difficult, it may be better to apply the wires around the sternal body between the intercostal spaces. If this technique is preferred, extra caution is essential to avoid damaging the internal mammary arteries and causing a subsequent hemothorax. Once all the wires are in place and locked with needle drivers, the towel is gently removed, and the mediastinum is rinsed with saline. Definitive hemostasis from the sternal edges is achieved with electrocautery and/or bone wax. The wires are crossed and lifted upwards (i.e. the sternal halves are approximated), and then loosely twisted and cut. The assistant can facilitate approximation by lifting the pectoral girdle forward with their palms on the scapulae. The cut ends are tightened until the two edges contact, and the wire stumps are buried entirely into the presternal tissue. Internal sternal fixation with absorbable sternal pins can provide additional stability with the possibility of easy re-entry [ 55 ]. Alternatively, sternal wires can be placed in a simple interrupted manner, spaced 1 to 2 cm apart.
They are then straightened prior to crossing in a smooth fashion to ensure adequate sternal apposition once twisted. Delayed pericardial and sternal closure may be necessary if the heart enlarges from primary edema or excessive fluid administration. A temporary thoracic closure may also be required if hemodynamic compromise ensues with attempts at re-approximation. Diuresis with furosemide is a viable option if the patient's hemodynamics will allow. Otherwise, an abdominal-type plastic covering with sterile surgical drapes, or a genitourinary irrigation bag sewn onto the skin, can be temporarily employed until definitive closure is possible. It should be noted that if the sternum is left open, the patient needs to remain intubated and paralyzed, as an awake and coughing patient can cause a life-threatening laceration of the right heart by cutting the anterior ventricular surface between the two sternal edges (Hanuman syndrome). It may also be possible to protect against this occurrence by using a Vacuum Assisted Closure (VAC) dressing with concurrent placement of a large pad between the sternal edges. VAC dressings can also be employed as an alternative to sterile draping with no apparent negative impact on cardiac and respiratory dynamics [ 56 ]. In extreme cases, where any contact between the heart and the sternal edges compromises cardiac function, sternal stenting is necessary. Two semi-rigid chest tubes, or twisted wires, can be bridged across the mediastinum and sutured against the sternal edges as a quick and simple approach to prevent compression of an edematous heart [ 57 , 58 ].

Clamshell closure

Bilateral transverse thoracosternotomies can be closed by one or two figure-of-eight stainless steel wires that pass through and cross-bridge both parts of the separated sternum. Conventional uncrossed loops into the bone do not prevent anteroposterior displacement of the sternal parts and may pose a risk to the already injured heart [ 59 ].
The transected internal mammary arteries must also be ligated prior to closure.

Skin closure

Consideration for sutured skin closure (versus stapled closure) is relevant in the rare scenario of massive postoperative hemorrhage requiring immediate re-entry into the chest.

Almost there! Pleural and pericardial drains are placed, and the LAT is closed. The patient is taken to the intensive care unit where he remains hypotensive, requiring norepinephrine at 11 mcg/min. Are any further investigations needed or interventions likely?

Postoperative care and pitfalls

Close postoperative evaluation is crucial to reduce the incidence of posttraumatic cardiac sequelae in patients with PCI. In-hospital postoperative care should include electrocardiogram monitoring and the liberal use of two-dimensional Doppler echocardiography. Tang and colleagues reported abnormal echocardiograms in 17.4% of penetrating cardiac trauma patients, with pericardial effusion (47%) being the most common finding (followed by wall motion abnormalities and reduced ejection fraction) [ 60 ]. Further investigation for etiologies of postoperative cardiac failure may elucidate coagulopathic tamponade, hemorrhage from the repair site, and/or acute myocardial infarction. Significant heart failure typically requires inotropic medications, and occasionally electromechanical device assists, for cardiac support. Accordingly, continuous hemodynamic monitoring is essential. Delayed hemorrhage from a traumatic or iatrogenic injury to an internal mammary artery should be considered as a cause of ongoing hypotension and should prompt consideration of early re-exploration. Posttraumatic acute myocardial infarction can be diagnosed with a combination of segmental wall motion abnormalities on echocardiography, electrocardiogram abnormalities, and serum troponin I levels. The latter two tests, however, have low specificity, as surgical and resuscitative maneuvers themselves create changes in both [ 61 ].
Subsequent cardiac assessment should incorporate differentiation of hemorrhagic, dynamic, or stenotic causes of infarction. Complete heart block and other conduction system abnormalities, which have been reported to occur in 2.8% of PCI patients, may warrant temporary placement of epicardial wires or transvenous pacing [ 62 ]. New murmurs or dyspnea on exertion can alternatively indicate ventricular septal defects, which are less common and can be managed conservatively in asymptomatic patients. Otherwise, transcatheter closure is preferred when possible to avoid the further risks of open surgery with CPB. Similarly, other complex cardiac sequelae such as valvular injuries require close multidisciplinary communication and teamwork. Synchronous valvular injury in particular must be ruled out in cases of PCI, as 3 to 8% of patients will have concurrent trauma to one or more heart valves. Despite modern techniques and standard hygiene within cardiac surgery, sternal wound infections still occur at relevant rates, with associated in-hospital mortality rates of up to 35% [ 52 ]. In a trauma patient, who may have undergone a rapid sternotomy with less than sterile technique in a rushed environment, special attention must be paid to postoperative infections. Although antimicrobial prophylaxis has been recommended for cardiac surgery, controversy remains over optimal dosing, duration, and timing. A combination of medical treatment and irrigation, VAC, or flap coverage should be utilized for these wound infections. Lastly, antimicrobial prophylaxis or treatment for other complications such as empyema and/or sepsis should be based upon factors such as the hospital antibiogram and the specific site(s) of infection. Summary Penetrating cardiac injuries pose complex strategic, technical, and logistical challenges that test the performance of entire trauma systems. 
Acute care surgeons, with training and experience in the decision making and operative aspects of PCI, and with knowledge of systems of acute care, are well-positioned to lead comprehensive resuscitative and operative efforts. Technical depth and agility with respect to damage control physiology and resuscitation, surgical exposure, injury control, cardiac repair, and chest closure can reduce the downstream consequences of PCI and the complications of surgery. With preparation, trauma and acute care surgeons can streamline the response to one of the most acute, time-dependent, and complex surgical crises. Early collaboration with our cardiac surgical colleagues and their perfusionist team, when available, can also be lifesaving. CRediT authorship contribution statement All authors contributed to the research and writing of this chapter. We are grateful to Dr. David Evans for his assistance with the accompanying figures. Funding sources No funding sources were used to write this chapter. Ethics approval As this chapter was a literature review and a summary of expert opinion, and as it did not use any identifying patient data or involve any clinical interventions, research ethics board review was not required. Declaration of competing interest None of the authors have conflicts of interest related to this chapter or its subject.
|
[
"DEGIANNIS",
"LINDSEY",
"KAMALI",
"PLUMMER",
"NICOL",
"OCONNOR",
"KARMYJONES",
"SOBNACH",
"DEGIANNIS",
"BASTOS",
"WISE",
"SIMMS",
"FLARIS",
"NAVID",
"YILMAZ",
"STARK",
"DEGIANNIS",
"RAMDASS",
"HIRSHBERG",
"MACHO",
"GUERRERO",
"PORCU",
"ELLERTSON",
"GELDENHUYS",
"ASENSIO",
"GRABOWSKI",
"BURLEW",
"MATTOX",
"SHAPIRA",
"LIM",
"KOKOTSAKIS",
"RUPPRECHT",
"FEDALEN",
"TODA",
"AGRIFOGLIO",
"SLATER",
"ACTISDATO",
"WANG",
"HARRER",
"MICHALSEN",
"SYMBAS",
"FRY",
"WALL",
"ASENSIO",
"FEDALEN",
"NARAYAN",
"RAMA",
"ATKINS",
"FELICIANO",
"ARLT",
"SANABRIA",
"RESER",
"SCHIMMER",
"CATANEO",
"KOSHIYAMA",
"FLECK",
"JONES",
"EREK",
"KOSTER",
"TANG",
"CASTANO",
"JHUNJHUNWALA"
] |
e9f35850e3f04fc883df57f91090d69a_Differentiated responses of plant water use regulation to drought in Robinia pseudoacacia plantation_10.1016_j.agwat.2023.108659.xml
|
Differentiated responses of plant water use regulation to drought in Robinia pseudoacacia plantations on the Chinese Loess Plateau
|
[
"Yan, Xiaoying",
"Zhang, Zhongdian",
"Zhao, Xiaofang",
"Huang, Mingbin",
"Wu, Xiaofei",
"Guo, Tianqi"
] |
Robinia pseudoacacia plantations play an important role in improving the ecological environment of the Chinese Loess Plateau (CLP). However, drought stress is emerging as the major threat to sustainable growth of R. pseudoacacia plantations against the background of global warming and increasing water scarcity. Investigating the responses of plant water use to drought and the associated regulating mechanism in R. pseudoacacia helps improve understanding of plant survival strategies and develop sustainable forest management practices under climate change. In this study, we monitored canopy transpiration (Tr) dynamics with synchronous observations of soil water content and leaf water potentials in R. pseudoacacia plantations during two contrasting hydrological years (2021 and 2022) at two sites featuring semihumid (Changwu) and semiarid (Mizhi) climate conditions. Results showed that normalized Tr exhibited stronger relationships with meteorological variables at the Changwu site than at the Mizhi site, as well as under non-drought conditions compared to drought conditions. The canopy stomatal conductance (Gc) decreased significantly with increasing vapor pressure deficit (VPD) and soil drought at both sites. The sensitivity of Gc to VPD revealed stricter stomatal regulation of transpiration in response to drought at the Changwu site, and less strict stomatal regulation at the Mizhi site. The relationship between midday and predawn water potentials indicated a partial isohydric strategy in response to drought, and suggested that stomatal closure tends to occur more rapidly than hydraulic conductivity loss in R. pseudoacacia. These results suggest that the Tr and Gc values of R. pseudoacacia and their sensitivity to climate weakened as soil drought progressed and varied with climatic conditions, and that R. pseudoacacia exhibited flexible stomatal regulation of transpiration and water use strategies in response to drought.
|
1 Introduction Forests are a major component of terrestrial ecosystems at the global scale and play an important role in carbon and water cycles and energy balance processes ( Bonan, 2008 ). Forest ecosystem functions are also affected by numerous biotic and abiotic factors, among which water availability is often the most limiting abiotic factor ( Boisvenue and Running, 2006 ). Especially in semiarid and arid regions, the evaporative demand exceeds precipitation and available water resources are often very deficient, which severely limits the growth and survival of trees ( Wang et al., 2019 ). In recent years, the frequency and severity of droughts have continued to increase with global warming, which may lead to a decrease in water availability and further negatively affect the physiological processes of trees and regional water cycles ( Munoz-Villers et al., 2018 ). Extreme and/or chronic drought has led to large-scale canopy dieback and increased tree mortality in temperate forests in Western Europe and in poplar ( Populus spp. ) plantations in northern China ( Breda et al., 2006; Anderegg et al., 2019; Ji et al., 2020 ). Drought stress limits plant growth, reduces plant productivity, and even threatens plant survival in severe cases ( McDowell et al., 2020 ). Plant water use is a key component of the water cycle ( Katul et al., 2012 ). Plants regulate water use across a broad range of timescales to maintain a favorable water status under varying water availability ( Feng et al., 2017 ). Therefore, quantifying tree water use is important for tree physiology, ecohydrology, and other studies. Transpiration plays a central role in plant water use and accounts for more than 60% of the evapotranspiration in terrestrial ecosystems ( Jiao et al., 2019; Kumagai et al., 2014; Ungar et al., 2013 ). It is jointly influenced by external environmental factors such as vapor pressure deficit (VPD), solar radiation (R s ), and soil water availability ( McDowell N.G. 
et al., 2008; Tie et al., 2017; Song et al., 2022 ), and is also regulated by internal factors such as stomatal conductance and plant hydraulic conductance ( Chen et al., 2023 ). Variations in environmental factors can influence the water potential gradient that drives transpiration. In response to these changes, trees employ physiological regulatory mechanisms to control transpiration and adapt to their environment ( He, 2020 ). These mechanisms exert an important influence on tree survival and growth ( Bovard et al., 2005 ). For instance, canopy transpiration (T r ) of Robinia pseudoacacia , Mongolian pine, and Chinese pine generally increases exponentially with VPD, and is related to R s and soil moisture. Increases in VPD and soil water stress can inhibit the opening of stomata and result in the reduction of T r ( Jiao et al., 2019; Song et al., 2021; Lyu et al., 2022; Guillén et al., 2022 ). Several studies also show that the reduced soil water availability enhances the hydraulic resistance between roots and soil systems, prevents water movement from soil to leaves, and triggers stomatal closure to avoid or postpone hydraulic failure ( Manzoni et al., 2014; Ghimire et al., 2018 ). Information about how these environmental factors interact to influence tree transpiration and canopy conductance is accumulating ( Fang et al., 2019; Iqbal et al., 2021; Song et al., 2021; Du et al., 2023 ) and is critical for determining the mechanisms controlling transpiration as well as the long-term hydrological regime of forests. Furthermore, the regulation of transpiration by leaf stomata is also widely recognized as related to leaf water potential ( Eller et al., 2020 ). Strong stomatal regulation can help trees prevent excessive water loss and maintain a relatively stable leaf water potential ( Fisher et al., 2006; Martin-StPaul et al., 2017 ). 
Conversely, stomatal closure can be induced by a decline in leaf water potential ( Hoffmann et al., 2011 ), with the variability in sensitivity of stomata to leaf water potential determining plant function in different species ( Klein, 2014 ). Resistance to xylem cavitation is related to the performance of leaf stomata at low leaf and stem water potentials ( Pivovaroff et al., 2018 ). Under different soil moisture conditions, the same tree species can control transpiration by developing different water use strategies ( Franks et al., 2007 ). Previous studies have tended to analyze the relationship between the predawn and midday leaf water potentials ( Martinez-Vilalta et al., 2014 ) to distinguish plant water use strategies. In addition, abundant evidence suggests stomatal regulation is closely linked to the water supply capacity of vascular systems ( Sperry et al., 2002; Zhang et al., 2013 ) and that trees suffer from hydraulic impairment and dysfunction prior to death ( McDowell N. et al., 2008; Pangle et al., 2015 ). Although hydraulic conductance can be partially recovered from cavitation overnight ( Brodribb and Holbrook, 2004 ), its reduction may further affect tree transpiration and stomatal conductance, thereby lowering carbon accumulation ( Pangle et al., 2015 ). Therefore, a better understanding of the impact of driving factors on transpiration is critical to reveal the mechanism regulating plant water use in forests at stand level and can provide a theoretical basis for water resource management under future climate change. The Chinese Loess Plateau (CLP), located in the middle reaches of the Yellow River basin in Northern China, is a typical water-scarce and climate-sensitive region. The region is also ecologically vulnerable and prone to severe soil erosion ( Shi and Shao, 2000 ). 
The Chinese Government implemented an extensive ecological rehabilitation program (the “Grain for Green Project”) in the late 1990s to control soil erosion and restore the degraded ecosystem; as a result, vegetation coverage increased by 37.9% from 2000 to 2020 ( Wang et al., 2022 ). Robinia pseudoacacia L. was widely selected as a pioneer afforestation species due to its rapid growth, high drought tolerance, and ability to adapt to poor soil fertility ( Li et al., 2018 ). R. pseudoacacia has been widely planted on the CLP and to date accounts for about 90% of the total area of artificial forest plantations ( Ma et al., 2017 a). Although R. pseudoacacia plantation restoration practices have achieved significant ecological benefits, negative effects have manifested in the region. For example, R. pseudoacacia trees grew fast in early years, but then their health status suffered due to depleted soil water storage ( Chen et al., 2020 ). Chen et al. (2008a) and Fu et al. (2012) both report that high-intensity water use by R. pseudoacacia has resulted in soil desiccation and the formation of dried soil layers, which may further hamper plant growth ( Shangguan and Zheng, 2006; Chen et al., 2008b ). R. pseudoacacia must achieve a stable balance between water supply and demand to realize sustainable growth in forest stands ( Cao et al., 2010 ). In addition, the climate on the CLP has exhibited warmer and dryer trends in the wake of global warming over several decades ( Li et al., 2010; Xin et al., 2011 ), which has led to prolonged droughts of increasing intensity ( Zhang et al., 2012; Sun et al., 2015 ). In this context, the effects of environmental drivers on plant water use are still not fully understood for R. pseudoacacia . In particular, the differential regulating strategies of mature plantations in different climatic zones have rarely been investigated, presenting a critical limitation for sustainable water resource management in R. pseudoacacia plantations. 
The overall goal of this study was to present a comprehensive analysis of water use and regulating mechanisms of R. pseudoacacia plantations in response to drought in different climatic conditions. We conducted successive field observations of transpiration, soil water content (SWC), and leaf water potential (predawn: ψ pd ; midday: ψ md ) in R. pseudoacacia plantations in semihumid and semiarid areas of the CLP over 2 years (2021 and 2022). Specific objectives were to: 1) quantify the dynamic variations in transpiration, canopy stomatal conductance, and whole-tree hydraulic conductance of R. pseudoacacia at semiarid and subhumid sites; 2) determine the differences in response patterns of transpiration and canopy conductance to meteorological variables and water supply conditions; and 3) explore the mechanism regulating water use in response to drought. 2 Materials and methods 2.1 Study sites This study was conducted in 2021 and 2022 at two R. pseudoacacia plantations located on the CLP in Shaanxi Province ( Fig. S1 ). One site was at the Changwu forest station (35°7.8′N, 107°50.4′E, 1078.0 m a.s.l.) near Changwu county. This site is characterized by typical topographical features of a loess gully, and experiences a subhumid temperate climate. The mean annual precipitation (MAP) and mean annual temperature (MAT) are 592.5 mm and 9.8 ℃ (1981–2020), respectively. The soils were developed from wind-deposited loess and have a silty loam texture. The R. pseudoacacia stands were approximately 20-year-old plantations with a density of 2375 trees ha −1 . The other site was located in Mizhi county (37°40.2′N, 110°13.2′E, 985.7 m a.s.l.) and had a temperate semiarid climate. The MAP and MAT at this site are 440.8 mm and 10.5 ℃ (1981–2020), respectively. This site is a typical hilly-gully area, with a sandy loam soil texture ( Luo et al., 2023 ). The stands were approximately 19-year-old plantations with a density of 990 trees ha −1 . At this site, R. 
pseudoacacia demonstrated normal growth during the growing season of 2021, but nearly 80% of the trees showed different degrees of canopy dieback at the beginning of the growing season (May–June) in 2022 ( Fig. S1 ), after which the canopy dieback slightly recovered with increasing precipitation. A 20 m × 20 m sample plot was established at each site; tree characteristics, including tree height, diameter at breast height (DBH), sapwood depth, and sapwood area, were surveyed ( Table 1 ). Undergrowth vegetation was mainly composed of Arctium lappa , Duchesnea indica , Artemisia argyi , Carpesium abrotanoides , Litsea pungens , and wheatgrass. The growing season for R. pseudoacacia on the CLP is from May to September. We measured the leaf area index (LAI) at each site once a month during the growing seasons of 2021 and 2022 using a plant canopy analyzer (LAI-2200, Li-Cor, Lincoln, NE, USA). To avoid transient changes in sky conditions and influence from direct sunlight, measurements were conducted either during overcast weather or during periods of very low solar elevation around sunset. Leaf water potential was measured with a pressure chamber (Model 1000, PMS Instruments, USA) at predawn (ψ pd , MPa; 05:00–06:00 h) and midday (ψ md , MPa; 12:00–13:00 h) on typical sunny days once a month during the growing seasons at each site. In each measurement period, three canopy leafstalks with their compound leaves were taken from different branches of each sample tree as replicates. 2.2 Measurements of hydrometeorological variables Meteorological variables, including the air temperature (T a , ℃), relative humidity (RH, %), precipitation (P, mm), wind speed (WS, m s −1 ), and solar radiation (R s , MJ m −2 ), were measured and continuously recorded every 1 h by local automatic weather stations at both the Changwu and Mizhi sites in 2021 and 2022. 
To reflect the interactive effects of climate drivers on tree transpiration, the potential evapotranspiration (ET p ), an integrated indicator involving radiation, WS, T a , RH, and VPD, was estimated by the Penman equation based on the measured meteorological variables ( Penman and Keen, 1948 ). At each site, the soil water content (SWC, cm 3 cm −3 ) in the R. pseudoacacia stands was continuously monitored with EC-5 sensors (Decagon Inc., USA) and calibrated against the gravimetric method. Nine EC-5 sensors were installed at depths of 20, 40, 60, 80, 100, 200, 300, 400, and 500 cm below the ground surface. Data were recorded every 10 min with a datalogger (CR1000, Campbell Scientific, Logan, UT, USA); data were missing for some days at the two sites due to power failures. In order to apply the results to study sites with different soil hydrological properties, and also to ensure the results are comparable with other studies, we adopted the relative extractable soil water (REW) to reflect soil moisture conditions. When the REW drops below 0.4, the vegetation is assumed to be suffering from drought stress ( Granier et al., 2007; Zhou et al., 2013 ). REW is calculated as follows: (1) REW = (VWC − VWC min ) / (VWC max − VWC min ), where VWC max and VWC min are the maximum and minimum volumetric SWC at each soil depth across the 2 years. 2.3 Sap flow measurement and calculation of canopy transpiration The sap flux measurements were conducted using Granier-type sensors (TDP, 10 mm) ( Granier, 1987 ) during the 2021 and 2022 growing seasons at the Changwu and Mizhi sites. Each sensor consisted of a pair of 10-mm-long cylindrical probes with diameters of 1.2 mm: a continuously heated upper probe with a constant power supply of 0.15 W, and an unheated lower probe serving as a temperature reference ( James et al., 2002 ). Based on the distribution of DBH, five trees were selected as sample trees at each site ( Table 1 ). 
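As a minimal illustration of the REW calculation (Eq. (1)), the following Python sketch flags drought days from a volumetric soil water content series; the values are invented, not measured data from either site:

```python
# Sketch of Eq. (1): relative extractable soil water (REW) at one depth.
# The VWC series is hypothetical, not measured data from either site.

def rew(vwc, vwc_min, vwc_max):
    """REW = (VWC - VWC_min) / (VWC_max - VWC_min)."""
    return (vwc - vwc_min) / (vwc_max - vwc_min)

series = [0.08, 0.18, 0.21, 0.15, 0.06]      # VWC, cm3 cm-3 (invented)
lo, hi = min(series), max(series)
rews = [rew(v, lo, hi) for v in series]
drought_days = [r < 0.4 for r in rews]       # REW < 0.4 flags drought
```

With the 0.4 threshold of Granier et al. (2007), any day whose REW falls below 0.4 would be counted as drought-stressed.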
To install the sensors, bark was first removed from the sample tree until the cambium was exposed, and the sensor was then inserted into the sapwood on the north-facing side of the trunk at a height of 1.3 m above the ground. Finally, a sheet of 60-cm-wide aluminum reflective foam insulation was wrapped above the probes and around the tree to protect the sensors from solar radiation and rainfall ( Wu et al., 2021 ). Data were recorded every 10 min using a CR1000 datalogger (Campbell Scientific Inc., Logan, UT, USA). The sap flux density ( F d ) can be calculated from the measured temperature difference using the following relationships according to Granier (1987) : (2) F d = αK β , (3) K = (ΔT max − ΔT) / ΔT, where F d (g cm −2 s −1 ) is the sap flux density; K is the dimensionless sap flow index; and α and β are fitted parameters determined by a species-specific calibration curve. Granier (1987) found a strong correlation between K and F d with α = 0.0119 and β = 1.231 in different conifer and broad-leaf tree species, and noted that the empirical equation was not species-dependent. However, several studies showed that for a given K , F d was underestimated in several tree species using Granier’s original calibration curve ( Taneda and Sperry, 2008; Bush et al., 2010; Fuchs et al., 2017 ). Therefore, in this study we adopted the parameters calibrated by Ma et al. (2017) ( α = 0.051 and β = 1.18) to calculate the F d of R. pseudoacacia on the CLP, since the new coefficients with the Clearwater correction largely accounted for the underestimation and allowed a more precise estimation of R. pseudoacacia F d using the TDP technique ( Paudel et al., 2013 ). ΔT (℃) is the temperature difference between the two probes at any given time, and ΔT max (℃) is the maximum temperature difference between the two probes during any given day, determined as the maximum ΔT over successive 7- to 10-d periods ( Lu et al., 2004; Peters et al., 2018 ). 
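A small Python sketch of Eqs. (2)–(3), using the recalibrated coefficients of Ma et al. (2017) (α = 0.051, β = 1.18) rather than Granier's originals; the temperature differences below are illustrative:

```python
# Sketch of Eqs. (2)-(3): sap flux density from the TDP temperature
# signal, with the recalibrated coefficients of Ma et al. (2017)
# (alpha = 0.051, beta = 1.18) instead of Granier's originals.

ALPHA, BETA = 0.051, 1.18

def sap_flux_density(dT, dT_max):
    """F_d (g cm-2 s-1): Eq. (3) gives K, Eq. (2) converts K to F_d."""
    K = (dT_max - dT) / dT      # dimensionless sap flow index
    return ALPHA * K ** BETA

# At night dT approaches dT_max, so K -> 0 and the flux -> 0.
assert sap_flux_density(10.0, 10.0) == 0.0
```

Because K shrinks as ΔT approaches ΔT_max, nighttime readings naturally yield near-zero flux, which is why ΔT_max is taken over multi-day windows.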
If the sapwood thickness of a sample tree was less than the probe length, corrected values (ΔT sw ) were used instead of the measured temperature differences (ΔT) according to Clearwater et al. (1999) : (4) ΔT sw = (ΔT − (1 − a)ΔT max ) / a, where a is the proportion of the probe in the sapwood. The plot-averaged sap flow rate ( J s , g m −2 s −1 ) was calculated as: (5) J s = Σ( F di × A si ) / Σ A si , summed over the five sample trees, where F di and A si are the sap flux density and sapwood area of the i th sample tree, respectively. Canopy transpiration (T r , mm day −1 ) was calculated as follows: (6) T r = J s × A s / A g , where A s (m 2 ) is the total sapwood area and A g (m 2 ) is the total ground area of the plot. The sapwood thickness and area of the sample trees were based on core sample analysis. An increment borer was used to drill the core samples. Regression equations of A s vs. DBH were derived from core samples taken from 20 randomly selected trees around the plots, resulting in relationships of A s = 0.34DBH 1.92 (R 2 = 0.97) for the Changwu site and A s = 1.64DBH 1.27 (R 2 = 0.87) for the Mizhi site. 2.4 Canopy stomatal conductance calculation The canopy stomatal conductance ( G c , mm s −1 ) and midday canopy stomatal conductance ( G c,md ) of the plantation were calculated from the canopy transpiration using a simplified formula established by Kostner et al. (1992) : (7) G c = (T r / LAI) × ρ G v (T a + 273) / VPD, where T r (mm s −1 ) is the daily canopy transpiration of the plantation; LAI is the leaf area index (m 2 m −2 ); ρ is the density of moist air (998 kg m −3 ); and G v is the gas constant for water vapor (0.462 m 3 kPa kg −1 K −1 ). Data recorded on rainy days were excluded ( Kumagai et al., 2008 ). 2.5 Data analysis In this study, significant differences in hydrometeorological variables (T a , VPD, R s , ET p , SWC) between the two growing seasons and the two sites were tested using paired-samples t-tests. 
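The upscaling chain of Eqs. (5)–(7), from tree-level sap flux to plot transpiration to canopy conductance, can be sketched as follows; the tree and stand values are illustrative, and unit conversions between daily and per-second quantities are left to the caller:

```python
# Sketch of Eqs. (5)-(7): sapwood-area-weighted sap flux (J_s), plot
# canopy transpiration (T_r = J_s * A_s / A_g), and Kostner's canopy
# stomatal conductance. Constants follow the text; tree values are
# illustrative and unit conversions are left to the caller.

RHO = 998.0   # density of moist air, kg m-3
GV = 0.462    # gas constant for water vapor, m3 kPa kg-1 K-1

def plot_sap_flow(fd_list, as_list):
    """Eq. (5): sapwood-area-weighted mean sap flux density."""
    return sum(f * a for f, a in zip(fd_list, as_list)) / sum(as_list)

def canopy_transpiration(js, as_total, ag):
    """Eq. (6): scale J_s by the sapwood-to-ground area ratio."""
    return js * as_total / ag

def canopy_conductance(tr, lai, ta, vpd):
    """Eq. (7): G_c = (T_r / LAI) * rho * Gv * (Ta + 273) / VPD."""
    return (tr / lai) * RHO * GV * (ta + 273.0) / vpd
```

The inverse dependence on VPD in Eq. (7) is what makes G_c a measure of stomatal behavior rather than of atmospheric demand.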
Because transpiration can vary with site location and with tree morphological and physiological parameters (e.g., stand density, tree size, leaf area, and sapwood area), normalized T r data were used in this study so that differences between the two study sites were minimized. During each growing season (May–September) in 2021 and 2022, the maximum T r was determined at each study site. Normalized T r was defined as the ratio between daily T r and the maximum T r throughout the growing season. Significant differences in normalized T r and G c between the two years were also tested using paired-samples t-tests. Repeated-measures ANOVA was used to test the differences in normalized T r , G c , leaf water potentials (ψ pd and ψ md ), and whole-tree hydraulic conductance ( K S-L ). To investigate the response of T r to microclimate, an integrated index named the variable of transpiration (VT) was computed from R s and VPD, because VPD is considered the primary environmental variable, usually contributing more than two-thirds of the driving force of transpiration, with the remainder coming from the radiative component ( Green, 1993; Zhang et al., 1997 ). Thus, VT (kPa (MJ m −2 ) 1/2 ) was calculated as follows ( Du et al., 2011 ): (8) VT = VPD × R s 1/2 . Canopy transpiration is controlled not only by canopy conductance, but also by VPD and other environmental variables. Therefore, to elucidate the response mechanism of T r to meteorological factors, we analyzed the relationships between normalized T r and meteorological factors (T a , VT, and ET p ) under different soil moisture conditions based on daily time-scale data. Previous studies have tended to analyze the relationship between T r and VPD using the following exponential saturation function ( Ewers et al., 2002; Kumagai et al., 2008; Du et al., 2011; Song et al., 2021 ): (9) T r = a (1 − e −bx ), where a and b are fitting parameters; T r is daily canopy transpiration; and x is the corresponding meteorological variable. 
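Eq. (8) and the T r normalization described above are straightforward to compute; a minimal sketch with hypothetical inputs:

```python
import math

# Sketch of Eq. (8) and the normalization in Section 2.5: VT couples
# VPD with the square root of solar radiation, and daily T_r is scaled
# by its growing-season maximum. Inputs below are hypothetical.

def vt(vpd, rs):
    """Variable of transpiration, kPa (MJ m-2)^(1/2)."""
    return vpd * math.sqrt(rs)

def normalize(tr_series):
    """Daily T_r divided by the seasonal maximum (dimensionless)."""
    tr_max = max(tr_series)
    return [t / tr_max for t in tr_series]
```

Normalizing by the seasonal maximum puts both stands on a 0–1 scale, which is what makes the cross-site slope comparisons in the Results meaningful.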
In this study, we adopted linear, logarithmic, and exponential saturation functions to fit the relationship between normalized T r and meteorological factors, and then selected the optimal fitting curve among these functions. The responses of G c to VPD were examined using a linear-logarithmic function because it provides a convenient benchmark for comparisons among conditions ( Oren et al., 1999 ): (10) G c = − m lnVPD + G cref , where m is the slope of G c versus lnVPD, quantifying the sensitivity of G c to VPD and reflecting the closure rate of the canopy stomata, and G cref is the reference canopy stomatal conductance at VPD = 1 kPa. Generally, the ratio of stomatal sensitivity to reference canopy stomatal conductance ( m / G cref ) is approximately 0.6 across a large range of species and environmental conditions ( Oren et al., 1999; Ewers et al., 2005; Naithani et al., 2012 ), indicating physiological stomatal regulation of leaf water potential to prevent xylem cavitation. A value of m / G cref smaller than 0.6 suggests less strict regulation of water loss, while a low ratio of boundary layer conductance to stomatal conductance results in m / G cref larger than 0.6. This ratio therefore provides a benchmark for assessing the responses of various tree species to environmental conditions. Because many variables affect G c (soil moisture, R s , etc.), there is typically a distribution of G c values for any level of VPD. Fitting a model based only on the upper values of G c at any VPD level can minimize the constraints of other variables on G c and maximize inferences about the impact of VPD on G c ( Ford et al., 2011 ). The upper boundary line was derived as follows: (1) dividing the G c response to VPD into 0.2 kPa VPD intervals; (2) calculating the mean and standard deviation of the G c data within each VPD interval; and (3) removing outliers and selecting data above the mean plus one standard deviation of G c as the boundary-line data. 
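The binning procedure for the boundary line and the Oren et al. (1999) response function (Eq. (10)) can be sketched in Python as follows; the outlier-removal step is omitted and the data used in testing are synthetic:

```python
import math
import statistics

# Sketch of the boundary-line procedure used with Eq. (10): bin G_c by
# 0.2 kPa VPD intervals and keep points lying above the bin mean plus
# one standard deviation. Outlier removal is omitted for brevity.

def boundary_points(vpd, gc, width=0.2):
    """Return (VPD, G_c) pairs on the upper boundary line."""
    bins = {}
    for v, g in zip(vpd, gc):
        bins.setdefault(int(v / width), []).append((v, g))
    keep = []
    for pts in bins.values():
        vals = [g for _, g in pts]
        if len(vals) < 2:
            continue  # a spread is needed to define mean + SD
        cut = statistics.mean(vals) + statistics.stdev(vals)
        keep.extend(p for p in pts if p[1] > cut)
    return keep

def oren_gc(vpd, m, g_cref):
    """Eq. (10): G_c = -m * ln(VPD) + G_cref (G_c = G_cref at 1 kPa)."""
    return -m * math.log(vpd) + g_cref
```

Fitting `oren_gc` to the points returned by `boundary_points` would then give the m and G_cref values compared across sites and moisture conditions in the Results.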
ANCOVA was used to test for significant differences in m and G cref of R. pseudoacacia between the Changwu and Mizhi sites under different soil moisture conditions. The whole-tree hydraulic conductance of the soil-to-leaf pathway ( K S-L , g m −2 s −1 MPa −1 ) was calculated according to the Darcy equation ( Cohen and Naor, 2002; Lu et al., 1996; Sperry, 2000 ): (11) K S-L = J d / (ψ pd − ψ md ), where J d is the difference in sap flux density between predawn and midday (g m −2 s −1 ); ψ pd represents an estimate of soil water potential; and ψ md is the midday leaf water potential. Iso/anisohydric regimes during the measurement periods were determined using the linear framework of Martinez-Vilalta et al. (2014) : (12) ψ md = Λ + σ·ψ pd , where Λ is the intercept of the relationship, measuring the transpiration stream relative to the plant hydraulic capacity under well-watered conditions, and σ is the slope, characterizing the relative sensitivity of the transpiration rate and plant hydraulic conductance to declining soil water potential. σ = 0 implies strict isohydry; 0 < σ < 1 implies partial isohydry; σ = 1 implies strict anisohydry; and σ > 1 implies extreme anisohydry. The functional relationships between two significantly correlated variables were built using regression analysis. All statistical analyses were performed with SPSS 25.0 software (SPSS Inc., Chicago, IL, USA). Statistical significance was set at P < 0.05. 3 Results 3.1 Variations in hydrometeorological variables Daily variations in hydrometeorological variables during the growing seasons (1 May–30 September) in 2021 and 2022 at the Changwu and Mizhi sites are shown in Fig. S2a –f. The total precipitation during the growing season was 568.2 and 378.3 mm in 2021 and 2022 at the Changwu site, representing 131% and 87% of the long-term average precipitation over the same period (434.0 mm, 1981–2020), respectively. 
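Eqs. (11)–(12) reduce to a ratio and an ordinary least-squares slope; the sketch below classifies the iso/anisohydric regime from water-potential pairs that are invented for illustration:

```python
# Sketch of Eqs. (11)-(12): whole-tree hydraulic conductance K_{S-L}
# and the Martinez-Vilalta et al. (2014) slope classification. The
# water-potential pairs used for testing are invented.

def k_soil_leaf(jd, psi_pd, psi_md):
    """Eq. (11): K_{S-L} = J_d / (psi_pd - psi_md)."""
    return jd / (psi_pd - psi_md)

def iso_slope(psi_pd, psi_md):
    """Ordinary least-squares slope sigma of psi_md against psi_pd."""
    n = len(psi_pd)
    mx = sum(psi_pd) / n
    my = sum(psi_md) / n
    num = sum((x - mx) * (y - my) for x, y in zip(psi_pd, psi_md))
    den = sum((x - mx) ** 2 for x in psi_pd)
    return num / den

def classify(sigma):
    """Map the slope to the regimes listed in the text (sigma >= 0)."""
    if sigma == 0:
        return "strict isohydry"
    if 0 < sigma < 1:
        return "partial isohydry"
    if sigma == 1:
        return "strict anisohydry"
    return "extreme anisohydry"
```

A slope between 0 and 1, as reported for R. pseudoacacia here, means midday water potential declines with soil drying but more slowly than predawn water potential, i.e., partial isohydry.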
This indicates that 2021 was a wet year while 2022 was a normal year at Changwu. The total precipitation during the growing season was 183.2 and 618.6 mm in 2021 and 2022 at the Mizhi site, respectively, representing 52% and 175% of the long-term mean precipitation over the same period (353.3 mm, 1981–2020). Notably, a large rainfall event in 2022 (127.6 mm) accounted for about 21% of that year’s total growing season precipitation. This indicates that 2021 and 2022 represented dry and wet years at the Mizhi site, respectively ( Fig. S2e ). Seasonal variation patterns of the other meteorological factors (T a , R s , VPD, and ET p ) in 2021 and 2022 were identical at the two sites, peaking in June and July. The mean T a during the 2021 and 2022 growing seasons averaged 19.8 ( ± 3.5) and 20.1 ( ± 4.3) °C at the Changwu site, and 22.7 ( ± 4.3) and 21.6 ( ± 4.3) °C at the Mizhi site, respectively ( Fig. S2a ). The average VPD during the growing seasons of 2021 and 2022 was 0.7 ( ± 0.38) and 0.7 ( ± 0.39) kPa at the Changwu site, and 1.4 ( ± 0.6) and 1.0 ( ± 0.6) kPa at the Mizhi site, respectively ( Fig. S2b ). During the two growing seasons, the daily mean R s was 18.9 ( ± 7.5) and 20.5 ( ± 7.6) MJ m −2 at the Changwu site, and 18.8 ( ± 6.8) and 18.0 ( ± 7.5) MJ m −2 at the Mizhi site, respectively ( Fig. S2c ). The ET p averaged 4.0 ( ± 1.7) and 4.3 ( ± 1.3) mm d −1 during the two growing seasons at the Changwu site, and 5.0 ( ± 1.7) and 4.9 ( ± 1.6) mm d −1 at the Mizhi site, respectively ( Fig. S2d ). Significant differences in the daily mean R s and ET p were observed between the two years ( P < 0.05) at the Changwu site, while significant differences in the daily mean T a and VPD were observed between the two years ( P < 0.05) at the Mizhi site, with lower mean values in the wet years. There were significant differences in the daily mean T a , VPD, and ET p between the two study sites ( P < 0.001), with higher mean values at the Mizhi site. 
The average SWC in the 0–500 cm soil layer of the R. pseudoacacia plantations during the 2021 and 2022 growing seasons ranged from 0.14 to 0.21 cm 3 cm −3 at the Changwu site and from 0.04 to 0.09 cm 3 cm −3 at the Mizhi site ( Fig. S2e ). Significant differences in SWC were observed between the two years and the two sites ( P < 0.05). REW values in the 0–500 cm soil layer ranged from 0 to 0.39 (i.e., < 0.4) from late July to late September in 2021 and 2022 at Changwu, and from June to September in 2021 and from late May to early August in 2022 at Mizhi, indicating periods of soil drought ( Fig. S2f ). Overall, these data indicate that the hydrometeorological conditions were warmer and drier at the Mizhi site, with higher evaporative demands relative to the Changwu site. 3.2 Response of canopy transpiration to drought Fig. 1 presents daily variations in T r for R. pseudoacacia during the growing seasons of 2021 and 2022 at the Changwu and Mizhi sites. During the growing season in 2021, the daily T r varied from 0.2 to 4.2 mm d −1 with an average of 1.8 mm d −1 at Changwu, and from 0.4 to 2.8 mm d −1 with an average of 1.2 mm d −1 at Mizhi. During the growing season of 2022, the daily T r varied from 0.4 to 3.7 mm d −1 at Changwu, and from 0.1 to 2.0 mm d −1 at Mizhi, with mean values of 2.1 mm d −1 and 0.8 mm d −1 , respectively. There was a significant difference in normalized T r between the two years at the Changwu site ( P < 0.05) ( Fig. 2 ). Cumulative T r was 220.5 and 307.8 mm during the growing seasons in 2021 and 2022 at Changwu, respectively, accounting for 80.6% and 81.4% of the cumulative precipitation and 47.3% and 46.4% of the cumulative ET p over the same period ( Fig. 3 ). At Mizhi, the cumulative T r was 175.5 and 127.6 mm during the growing seasons in 2021 and 2022, respectively, accounting for 97.1% and 20.6% of the precipitation and 24.8% and 17.0% of the cumulative ET p over the same period ( Fig. 3 ). 
The REW is an index that represents the soil water available to plants. Since R. pseudoacacia transpiration is dramatically affected by soil moisture when REW < 0.4 ( Jiao et al., 2019 ), soil water availability can be classified as soil drought when REW < 0.4. The normalized T r of R. pseudoacacia with normal growth was significantly higher under non-drought conditions than under drought conditions in 2021 and 2022 at the Changwu and Mizhi sites ( P < 0.001) ( Fig. 2 ). Linear, logarithmic and exponential saturation functions were used to fit the relationship between the normalized T r and meteorological factors (T a , VT and ET p ) under different soil moisture conditions across the measurement periods of 2021 and 2022 at both sites. We found that the relationships were better described by the linear and exponential saturation functions, which had higher R 2 values ( Figs. 4 and 5 ; logarithmic curves not shown). The normalized T r increased with increasing T a , VT and ET p at both sites under different soil moisture conditions. The coefficient of determination (R 2 ) and slope of normalized T r vs. VT or ET p under drought conditions were lower than under non-drought conditions at both sites, and the R 2 and slope of normalized T r vs. ET p were the greatest, followed by those of T r vs. VT. The R 2 and slope of normalized T r vs. VT or ET p at the Changwu site were higher than at the Mizhi site under soil drought conditions. In addition, the R 2 and slope of normalized T r vs. VT were higher after precipitation than before precipitation ( Fig. 6 ). In summary, the T r of R. pseudoacacia was more sensitive to the environmental variables at the Changwu site than at the Mizhi site under soil drought conditions, and was more sensitive under non-drought conditions than under drought conditions. Although the drought at the Mizhi site had eased in the latter part of the growing period in 2022, the sensitivity of the normalized T r of R. 
pseudoacacia to environmental variables was still at a low level. 3.3 Regulation of canopy stomatal conductance To explore the stomatal regulation of water loss for R. pseudoacacia in response to VPD as represented by G c at different sites and under different soil moisture conditions, daily G c was calculated for the measurement periods of 2021 and 2022 ( Fig. 7 ). To eliminate the effect of other factors, only datasets for rain-free days and VPD > 0.4 were included in this calculation. The daily G c at Changwu ranged from 0.1 to 5.8 mm s −1 , with mean values of 1.2 and 1.7 mm s −1 during the growing seasons of 2021 and 2022, respectively. At Mizhi, the daily G c ranged from 0.3 to 12.0 mm s −1 , with mean values of 1.7 and 1.9 mm s −1 for 2021 and 2022, respectively; notably, however, the daily G c of normal-growth R. pseudoacacia averaged 4.4 and 1.1 mm s −1 before and during the soil drought in 2021, respectively, which represents a 75% decline. Similarly, the mean G c of R. pseudoacacia with canopy dieback decreased by 57% during the drought and 56% after the drought in 2022 relative to pre-drought in 2021. The daily G c of R. pseudoacacia significantly declined with VPD, and VPD explained more than 62% of the variation in G c at both sites ( Fig. 8 ). After applying boundary line analysis, the slope of the stomatal response to lnVPD ( m ) was higher under non-drought conditions than under drought conditions at both sites (except for the Changwu site in 2021), and the G cref varied with soil moisture conditions. At Changwu, the ratio of m to G cref ( m / G cref ) was 0.39 and 0.87 under non-drought and drought conditions in 2021, respectively; corresponding values for 2022 were 0.74 and 0.88 ( Fig. 8 a, b). At Mizhi, m / G cref values were 0.65 and 0.38 under non-drought and drought conditions in 2021, respectively, and were higher after canopy dieback in 2022 ( Fig. 8 c, d). 
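The analysis above rests on the widely used Oren et al. (1999) form G c = G cref − m·ln(VPD), where G cref is the conductance at VPD = 1 kPa. A minimal sketch of fitting m and G cref to daily data (synthetic numbers, not the paper's measurements, and a plain least-squares fit rather than a true boundary-line extraction):

```python
import math
import random

def fit_stomatal_sensitivity(vpd_kpa, gc_mm_s):
    """Least-squares fit of Gc = Gcref - m * ln(VPD).

    Returns (m, Gcref); Gcref is the intercept at VPD = 1 kPa (ln VPD = 0).
    """
    x = [math.log(v) for v in vpd_kpa]
    n = len(x)
    mx = sum(x) / n
    my = sum(gc_mm_s) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, gc_mm_s)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    gcref = my - slope * mx   # value at ln(VPD) = 0, i.e. VPD = 1 kPa
    return -slope, gcref      # m is the (positive) decline per unit lnVPD

# Synthetic daily data generated with known m = 2.4 and Gcref = 4.0 (mm s-1).
random.seed(0)
vpd = [0.5 + 0.1 * i for i in range(25)]
gc = [4.0 - 2.4 * math.log(v) + random.gauss(0, 0.05) for v in vpd]

m, gcref = fit_stomatal_sensitivity(vpd, gc)
print(round(m, 2), round(gcref, 2), round(m / gcref, 2))
```

The recovered m/G cref ratio (about 0.6 here) is the quantity compared against the ~0.6 benchmark in the text.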
Notably, m / G cref values were greater than 0.6 at Changwu under drought conditions, whereas at Mizhi they were close to or less than 0.6. Overall, R. pseudoacacia demonstrated stomatal sensitivity that gradually decreased with increasing VPD and soil drought, but showed more strict stomatal regulation of transpiration in response to drought at Changwu and less strict stomatal regulation at Mizhi. 3.4 Water use strategy and regulation of whole-tree hydraulic conductance Seasonal changes in ψ pd , ψ md and K S-L for R. pseudoacacia during the measurement periods of 2021 and 2022 at Changwu and Mizhi are shown in Fig. 9 . During the growing seasons of 2021 and 2022, the monthly ψ pd of R. pseudoacacia varied from − 0.3 to − 1.55 MPa at Changwu and from − 0.32 to − 1.62 MPa at Mizhi ( Fig. 9 a, d), the monthly ψ md ranged from − 1.13 to − 2.21 MPa at Changwu and from − 1.70 to − 3.03 MPa at Mizhi ( Fig. 9 b, e), and the K S-L ranged from 2.96 to 112.41 g m −2 s −1 MPa −1 at Changwu and from 0.94 to 43.85 g m −2 s −1 MPa −1 at Mizhi ( Fig. 9 c, f). The variations in ψ pd , ψ md , and K S-L were largely consistent with changes in soil moisture at both sites, with higher values observed during moist months and lower values during dry months. Significant differences in ψ pd , ψ md , and K S-L were found between different months ( P < 0.05), and ψ md also differed between the two sites in 2021 ( P < 0.05). The linear fitting of ψ pd and ψ md for R. pseudoacacia at the Changwu and Mizhi sites is shown in Fig. 10 . The model explained more than 87% of the variability in ψ md . The slope (σ) of the relationship between ψ pd and ψ md for R. pseudoacacia was 0.67 and 0.79 during the growing seasons of 2021 and 2022 at Changwu; corresponding values at Mizhi were 0.82 and 0.89. The value of σ ranged between 0 and 1 at both sites, indicating that the water use strategy of R. pseudoacacia is partial isohydric regulation during the growing season. 
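In the Martinez-Vilalta et al. (2014) framework used here, midday water potential is regressed on predawn water potential, ψ md = Λ + σ·ψ pd ; σ near 0 indicates strict isohydry, σ = 1 strict anisohydry, and 0 < σ < 1 partial isohydry. A sketch with illustrative (not measured) values:

```python
def fit_hydroscape(psi_pd, psi_md):
    """Ordinary least squares of psi_md = lam + sigma * psi_pd (MPa)."""
    n = len(psi_pd)
    mx = sum(psi_pd) / n
    my = sum(psi_md) / n
    sigma = sum((x - mx) * (y - my) for x, y in zip(psi_pd, psi_md)) / sum(
        (x - mx) ** 2 for x in psi_pd
    )
    lam = my - sigma * mx
    return sigma, lam

def water_use_strategy(sigma):
    """Classify the regulation strategy from the slope sigma."""
    if sigma < 0.1:
        return "strict isohydric"
    if sigma < 1.0:
        return "partial isohydric"
    return "anisohydric"

# Illustrative seasonal pairs (predawn, midday) in MPa, not field data.
psi_pd = [-0.3, -0.6, -0.9, -1.2, -1.5]
psi_md = [-1.4, -1.6, -1.9, -2.1, -2.4]

sigma, lam = fit_hydroscape(psi_pd, psi_md)
print(round(sigma, 2), round(lam, 2), water_use_strategy(sigma))
# -> 0.83 -1.13 partial isohydric
```

Here σ ≈ 0.83 and Λ ≈ −1.13 MPa fall in the partial-isohydric range, like the site values reported in the text.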
This further suggests a faster decline in canopy transpiration than in plant hydraulic conductance in response to soil drought. The intercept of the relationship (Λ) was − 1.65 and − 1.38 at Mizhi, indicating a lower maximum transpiration rate per unit of hydraulic transport capacity for R. pseudoacacia compared to that at Changwu. During the growing season, the soil moisture conditions of the R. pseudoacacia plantations could affect the relationships of G c,md to both ψ md and K S-L ( Fig. 11 ). When soil water was relatively sufficient, the correlations between G c, md and ψ md and between G c, md and K S-L were not significant ( P > 0.05). However, G c, md was positively correlated with ψ md (R 2 =0.43, P < 0.05) and K S-L (R 2 =0.66, P < 0.05) during soil drought. These results indicate that R. pseudoacacia showed partial isohydric strategies in response to drought at both sites. Additionally, the decreases in leaf water potential and K S-L could also lead to leaf stomatal closure and reduced canopy conductance under drought conditions. 4 Discussion 4.1 Differences in transpiration and its response to hydrometeorological variables R. pseudoacacia, as a dominant afforestation species on the CLP, has experienced reduced growth and shown symptoms of early degradation due to its high water consumption. Transpiration and related water-use strategies are fundamental to understanding the physiological processes of plantations and play a vital role in their survival and growth, especially in the CLP regions, where water availability is greatly affected by the increased frequency and intensity of droughts ( Anderegg et al., 2019; Ji et al., 2020 ). While previous studies linked R. pseudoacacia tree dieback to deep soil drying ( Jia et al., 2017; Liang et al., 2018 ), information is still needed on how R. pseudoacacia trees may respond differently to drought. In the present study, the average daily T r and cumulative T r for R. 
pseudoacacia during the growing season were lower in the wet year than in the dry and normal years at the same site, and their variations fell within the ranges reported for R. pseudoacacia on the CLP ( Table S1 ) ( Wang et al., 2010; Chen et al., 2014; Zhang et al., 2015; Jiao et al., 2016a, 2016b; Ma et al., 2017; Lyu et al., 2020; Wu et al., 2021; Lyu et al., 2022 ). The seasonal variation patterns of T r were identical at the two sites and peaked from May to July, consistent with the findings of Lyu et al. (2022) for semiarid and subhumid sites on the CLP. We also found that the normalized T r for R. pseudoacacia during the two consecutive growing seasons was significantly higher at the Changwu site than at the Mizhi site. These results might be attributed to differences in the environmental conditions and physiological characteristics of the trees between the two sites and years. The environmental conditions can be divided into evaporative demand (T a , R s , VPD, ET p , etc.) and water supply (soil water, precipitation, etc.) factors ( Naithani et al., 2012; Grossiord et al., 2017 ). Tree physiological characteristics, which determine transpiration potential, and the leaf area index also contributed to the differences in transpiration between the two sites and years ( McJannet et al., 2007; Pataki et al., 2011; Zhao et al., 2011 ). In this study, the evaporative demand was higher at the Mizhi site, while the water supply conditions and growth status were better at the Changwu site. In addition, compared with non-drought conditions, the normalized T r of trees with normal growth decreased under drought conditions at both sites. This is similar to the results of Chen et al. (2023) , and the reason could be that relatively lower soil moisture increased the hydraulic resistance of the soil–plant system and ultimately decreased the plantations' transpiration rate ( Ghimire et al., 2018 ). Tree transpiration is primarily driven by the evaporative demand and available energy ( Di et al., 2019 ). 
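Potential evapotranspiration (ET p ) integrates VPD, R s , T a and wind speed. The excerpt does not show the paper's exact formulation, so the sketch below uses the standard FAO-56 Penman–Monteith reference evapotranspiration as a stand-in; the net-radiation fraction is a crude assumption noted in the comments:

```python
import math

def fao56_et0(t_mean_c, rs_mj_m2_d, u2_m_s, ea_kpa, rn_frac=0.77, g=0.0, p_kpa=101.3):
    """FAO-56 Penman-Monteith reference evapotranspiration (mm d-1).

    t_mean_c    -- daily mean air temperature (deg C)
    rs_mj_m2_d  -- incoming solar radiation (MJ m-2 d-1)
    u2_m_s      -- wind speed at 2 m (m s-1)
    ea_kpa      -- actual vapour pressure (kPa)
    rn_frac     -- crude Rn/Rs ratio (assumption; FAO-56 computes Rn from the
                   shortwave and longwave balances, omitted here for brevity)
    """
    es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))  # saturation vapour pressure
    delta = 4098.0 * es / (t_mean_c + 237.3) ** 2                  # slope of the es curve
    gamma = 0.665e-3 * p_kpa                                       # psychrometric constant
    rn = rn_frac * rs_mj_m2_d                                      # approximate net radiation
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_mean_c + 273.0)) * u2_m_s * (es - ea_kpa)
    return num / (delta + gamma * (1.0 + 0.34 * u2_m_s))

# Rough mid-summer values of the order reported for the sites (illustrative).
print(round(fao56_et0(t_mean_c=22.0, rs_mj_m2_d=19.0, u2_m_s=2.0, ea_kpa=1.5), 1))
```

With these inputs the estimate lands near 5 mm d −1 , the magnitude of the growing-season ET p values reported above.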
In this study, VT and ET p were verified as two good indicators reflecting the impacts of meteorological factors on transpiration, because VT integrates the impacts of R s and VPD on transpiration while ET p combines the effects of VPD, R s , T a , and WS on transpiration. These results are consistent with previous reports for R. pseudoacacia and other tree species on the CLP ( Du et al., 2011; Tie et al., 2017; Lyu et al., 2022 ). However, the contributions of meteorological factors to tree transpiration varied with soil moisture conditions. The impacts of VT and ET p on transpiration were lower under soil drought conditions than under non-drought conditions at both sites ( Figs. 4 and 5 ). These results indicated that the sensitivity of the transpiration process to environmental variables decreased with increasing soil water stress at daily temporal and regional spatial scales. Similar findings have been reported in R. pseudoacacia ( Du et al., 2011; Lyu et al., 2022 ), in Mongolian pine and Chinese pine plantations in the Keerqin sandy land ( Song et al., 2022 ) and in Haloxylon ammodendron plantations in Central Asian deserts ( Gu et al., 2017 ). Notably, in this study, the overall impact of environmental factors on transpiration under drought conditions at the Mizhi site was obviously lower than that under non-drought conditions. The reason could be that severe meteorological and soil drought occurred from June to September in 2021 and subsequently resulted in nearly 80% of the trees experiencing different degrees of canopy dieback in 2022 ( Fig. S1 ). Canopy dieback can affect the water use of trees by causing changes in the total leaf area and tree physiology, thus weakening the environmental controls of water use ( Adams et al., 2009 ). 4.2 Stomatal regulation of water loss in response to drought The negative relationship between G c and VPD at both sites ( Fig. 8 ) indicated a control of G c by evaporative demand. 
Many studies have shown stomatal closure with increasing evaporative demand, in terms of VPD, for R. pseudoacacia and other species ( Addington et al., 2004; Jiao et al., 2019; Lyu et al., 2022 ), consistent with our results. Increased stomatal regulation of water use in response to increased VPD plays a crucial role in conserving water and maintaining water status within limits, preventing catastrophic loss of xylem function ( Gao et al., 2015 ). We also found that the m and G cref of normal-growth R. pseudoacacia were higher under non-drought than under drought conditions ( Fig. 8 ), indicating soil water limitations on canopy conductance. This finding is consistent with studies showing that the sensitivity of G c responses to VPD decreases with increasing soil drought ( Novick et al., 2016; Jiao et al., 2019 ). In addition, the ratio of the parameter m to the reference canopy stomatal conductance ( m / G cref ) is widely used as an indicator of the strictness of stomatal regulation of transpiration. It has a value of ∼0.6 across a wide range of species and environmental conditions, whereas the ratio is lower than 0.6 in trees that exhibit less strict stomatal regulation ( Oren et al., 1999; Ewers et al., 2005; Naithani et al., 2012 ). Our study indicated that, from non-drought to drought conditions and from wet to normal years, R. pseudoacacia trees shifted from relatively less to more strict stomatal regulation at the Changwu site. Similar results were also found in poplar trees by Song et al. (2021) . In contrast, the ratio m / G cref for R. pseudoacacia was close to or less than 0.6 at Mizhi under drought conditions and gradually increased from drought to post-drought (recovery) periods, which was significantly different from the pattern at Changwu. This result indicated that a shift from relatively more to less strict stomatal regulation in R. pseudoacacia trees occurred from non-drought to drought conditions, as well as from the dry (2021) to the wet (2022) year. Therefore, the R. 
pseudoacacia trees showed flexible stomatal regulation of transpiration in response to drought at both sites. 4.3 Water use strategy of R. pseudoacacia in response to drought In this study, ψ md significantly differed between the two sites in 2021, whereas ψ pd did not in either year, suggesting that R. pseudoacacia continued to absorb soil water at midday by decreasing its leaf water potential at Mizhi in the dry year (2021). This is generally considered a risky strategy for alleviating the impact of drought conditions on photosynthesis by reducing the minimum leaf water potential, and it might lead to widespread canopy dieback and whole-plant mortality ( Davis et al., 2002; Miyazawa et al., 2018; Choat et al., 2019; He et al., 2020 ). Our results also showed that leaf water potential and K S-L were lower in months with dry soil moisture conditions ( Figs. S2, 9 ). ψ pd and ψ md differed among months ( P < 0.05) at the Mizhi site, with a significant decreasing trend in 2021. The leaf water potential (ψ pd and ψ md ) of R. pseudoacacia at the Mizhi site gradually decreased to around − 3.03 MPa as the drought progressed during the growing season of 2021; this value is lower than the minimum leaf water potential of R. pseudoacacia under drought stress conditions in other studies ( Li, 1991; He, 2020 ). The results align with those of previous studies and suggest that prolonged drought causes ψ md to fall below its usual levels and reduces the capacity of the hydraulic system ( Cochard et al., 1992; Franks et al., 2007; Garcia-Forner et al., 2016; Hoffmann et al., 2011 ). During the transition from non-drought to drought conditions in 2021 at the Mizhi site, ψ md decreased from − 1.85 to − 3.03 MPa, canopy transpiration and stomatal conductance decreased by 51% and 62%, respectively, and soil water storage decreased by 32.4 mm. In this study, the relationship between ψ pd and ψ md was used to examine water transport regulation in R. pseudoacacia trees. 
The results demonstrated that the response of the plant’s water potential gradient to decreasing soil water availability was strongly determined by the ratio between the sensitivity and the vulnerability of the plant hydraulic system, rather than by the sensitivity of the transpiration rate to drought. The slope (σ) of the linear relationship between ψ pd and ψ md of R. pseudoacacia was between 0 and 1 ( Fig. 10 ). Accordingly, the water regulation strategy of R. pseudoacacia in response to drought was partial isohydric regulation. However, σ was closer to 1 at Mizhi, implying that hydraulic transport limitation for R. pseudoacacia was more likely to occur rapidly than stomatal closure in response to drought at the Mizhi site. This strategy may appear disadvantageous because, unlike hydraulic conductivity loss in the xylem, stomatal conductance loss can be reversed easily; prolonged hydraulic failure can eventually lead to whole-plant mortality ( Tyree and Sperry, 1988; McDowell et al., 2008 ). Martinez-Vilalta et al. (2014) also found that the value of σ was closer to 1 in most species, reflecting a close coordination between stomatal and hydraulic responses to drought. The intercept of the relationship (Λ) between ψ pd and ψ md of R. pseudoacacia was − 1.65 and − 1.38 in 2021 and 2022 at Mizhi; corresponding values at Changwu were − 1.14 and − 1.02, indicating a lower maximum transpiration rate per unit of hydraulic transport capacity at Mizhi than at Changwu. Moreover, our study also showed that K S-L and leaf water potential were significantly correlated with canopy conductance as soil water stress increased ( Fig. 11 ). Similar results have been found in previous studies for urban P. tabulaeformis and Populus euphratica ( Si et al., 2008 ; Chen et al., 2023 ). 5 Conclusions To investigate the regulation of plant water use of R. 
pseudoacacia plantations in response to drought on the CLP, we conducted a comprehensive two-year field observation of canopy transpiration and canopy stomatal conductance dynamics at semihumid (Changwu) and semiarid (Mizhi) sites. The normalized canopy transpiration (T r ) and canopy stomatal conductance ( G c ) of R. pseudoacacia were significantly greater under non-drought conditions than under drought conditions. The T r of R. pseudoacacia was more sensitive to meteorological variables at the Changwu site under soil drought conditions, and the R. pseudoacacia at both sites showed reduced climate sensitivity of the canopy transpiration rate in response to increasing soil drought. The G c decreased significantly with increasing vapor pressure deficit and soil drought at both sites. Transpiration of R. pseudoacacia exhibited more strict stomatal regulation in response to drought at the Changwu site and less strict stomatal regulation at the Mizhi site. Furthermore, R. pseudoacacia showed partial isohydric strategies in response to drought at both sites, reflecting that stomatal closure occurs more rapidly than hydraulic conductivity loss. Overall, the T r and G c values of R. pseudoacacia and their sensitivity to climate were more susceptible to soil drought at the semihumid site, and R. pseudoacacia showed flexible stomatal regulation of transpiration and water use strategies in response to drought. This study is helpful for understanding the regulation of water use by R. pseudoacacia in response to drought, which is important for sustainable afforestation and water resources management on the Chinese Loess Plateau. CRediT authorship contribution statement Huang Mingbin: Writing – review & editing, Supervision, Project administration, Methodology, Funding acquisition, Conceptualization. Wu Xiaofei: Writing – review & editing, Investigation, Data curation. Guo Tianqi: Writing – review & editing, Investigation, Data curation. 
Yan Xiaoying: Writing – original draft, Software, Methodology, Formal analysis, Data curation, Conceptualization. Zhang Zhongdian: Writing – review & editing, Methodology, Funding acquisition, Formal analysis, Conceptualization. Zhao Xiaofang: Writing – review & editing, Methodology, Data curation. Declaration of Competing Interest We declare no conflict of interest regarding the manuscript “Regulation of plant water use in response to drought in Robinia pseudoacacia plantations on the Chinese Loess Plateau”. Acknowledgements This research was financially supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDB40020202 ) and the National Natural Science Foundation of China (No. 42107335 ). We thank the editors and reviewers for their insightful and constructive comments and suggestions on the work. Appendix A Supporting information Supplementary data associated with this article can be found in the online version at doi:10.1016/j.agwat.2023.108659 .
|
[
"ADAMS",
"ADDINGTON",
"ANDEREGG",
"BOISVENUE",
"BONAN",
"BOVARD",
"BREDA",
"BRODRIBB",
"BUSH",
"CAO",
"CHEN",
"CHEN",
"CHEN",
"CHEN",
"CHEN",
"CHOAT",
"CLEARWATER",
"COCHARD",
"COHEN",
"DAVIS",
"DI",
"DU",
"DU",
"ELLER",
"EWERS",
"EWERS",
"FANG",
"FENG",
"FISHER",
"FORD",
"FRANKS",
"FU",
"FUCHS",
"GAO",
"GARCIAFORNER",
"GHIMIRE",
"GRANIER",
"GRANIER",
"GREEN",
"GROSSIORD",
"GU",
"GUILLEN",
"HE",
"HE",
"HOFFMANN",
"IQBAL",
"JAMES",
"JI",
"JIA",
"JIAO",
"JIAO",
"JIAO",
"KATUL",
"KLEIN",
"KOSTNER",
"KUMAGAI",
"KUMAGAI",
"LI",
"LI",
"LI",
"LIANG",
"LU",
"LU",
"LUO",
"LYU",
"LYU",
"MA",
"MANZONI",
"MARTINEZVILALTA",
"MARTINSTPAUL",
"MCDOWELL",
"MCDOWELL",
"MCDOWELL",
"MCJANNET",
"MIYAZAWA",
"MUNOZVILLERS",
"NAITHANI",
"NOVICK",
"OREN",
"PANGLE",
"PATAKI",
"PAUDEL",
"PENMAN",
"PETERS",
"PIVOVAROFF",
"SHANGGUAN",
"SHI",
"SI",
"SONG",
"SONG",
"SPERRY",
"SPERRY",
"SUN",
"TANEDA",
"TIE",
"TYREE",
"UNGAR",
"WANG",
"WANG",
"WANG",
"WU",
"XIN",
"ZHANG",
"ZHANG",
"ZHANG",
"ZHANG",
"ZHAO",
"ZHOU"
] |
469ff01f4ca548e4a1adff79415aecc4_Germline mutation in POLR2A a heterogeneous multi-systemic developmental disorder characterized by t_10.1016_j.xhgg.2020.100014.xml
|
Germline mutation in POLR2A: a heterogeneous, multi-systemic developmental disorder characterized by transcriptional dysregulation
|
[
"Hansen, Adam W.",
"Arora, Payal",
"Khayat, Michael M.",
"Smith, Leah J.",
"Lewis, Andrea M.",
"Rossetti, Linda Z.",
"Jayaseelan, Joy",
"Cristian, Ingrid",
"Haynes, Devon",
"DiTroia, Stephanie",
"Meeks, Naomi",
"Delgado, Mauricio R.",
"Rosenfeld, Jill A.",
"Pais, Lynn",
"White, Susan M.",
"Meng, Qingchang",
"Pehlivan, Davut",
"Liu, Pengfei",
"Gingras, Marie-Claude",
"Wangler, Michael F.",
"Muzny, Donna M.",
"Lupski, James R.",
"Kaplan, Craig D.",
"Gibbs, Richard A."
] |
De novo germline variation in POLR2A was recently reported to associate with a neurodevelopmental disorder. We report twelve individuals harboring putatively pathogenic de novo or inherited variants in POLR2A, detail their phenotypes, and map all known variants to the domain structure of POLR2A and crystal structure of RNA polymerase II. Affected individuals were ascertained from a local data lake, pediatric genetics clinic, and an online community of families of affected individuals. These include six affected by de novo missense variants (including one previously reported individual), four clinical laboratory samples affected by missense variation with unknown inheritance—with yeast functional assays further supporting altered function—one affected by a de novo in-frame deletion, and one affected by a C-terminal frameshift variant inherited from a largely asymptomatic mother. Recurrently observed phenotypes include ataxia, joint hypermobility, short stature, skin abnormalities, congenital cardiac abnormalities, immune system abnormalities, hip dysplasia, and short Achilles tendons. We report a significantly higher occurrence of epilepsy (8/12, 66.7%) than previously reported (3/15, 20%) (p value = 0.014196; chi-square test) and a lower occurrence of hypotonia (8/12, 66.7%) than previously reported (14/15, 93.3%) (p value = 0.076309). POLR2A-related developmental disorders likely represent a spectrum of related, multi-systemic developmental disorders, driven by distinct mechanisms, converging at a single locus.
|
Introduction The human enzyme DNA-directed RNA polymerase II (EC 2.7.7.6) transcribes all nuclearly encoded messenger RNA (mRNA). It is a large enzyme composed of twelve subunits, the largest of which—the 220-kDa subunit A—is encoded by POLR2A (MIM: 180660 ). POLR2A contains essential domains of the RNA polymerase II enzyme, including the catalytic core and a C-terminal heptapeptide repeat, the differential phosphorylation of which is critical for regulating transcriptional dynamics. 1 Numerous structural and mutational studies in various systems have been conducted, revealing a spectrum of genetic variants differentially impacting distinct dimensions of transcription (i.e., initiation, elongation, etc.). 2 , 3 Indeed, the function of RNA polymerase II has been extensively studied for decades. 4 Despite its centrality within the central dogma of molecular biology and its extensive study over decades, POLR2A was not implicated in human disease until 2016, when Clark et al. reported multiple distinct, recurrent somatic mutations in the gene as causative for a clinically unique subset of meningiomas. 5 Very recently, the first report of pathogenic germline mutations in POLR2A was published, describing a phenotypically heterogeneous neurodevelopmental syndrome with hypotonia (MIM: 618603 ). 6 Here, we report additional clinical and molecular evidence strengthening the case for POLR2A dysfunction as a multi-systemic, phenotypically heterogeneous Mendelian disorder. Material and methods DNA sequencing and genotyping For individuals 1, 6, and 8–13, DNA capture and sequencing of exomes was carried out as previously described by Hansen et al. 7 at either the Baylor Genetics (BG) laboratories or at the Baylor College of Medicine Human Genome Sequencing Center (HGSC). Sequencing and analysis for individuals 2 (genome sequencing) and 5 (exome sequencing [ES]) were provided by the Broad Institute of MIT and Harvard Center for Mendelian Genomics (Broad CMG). 
ES and analysis for individuals 3, 4, and 7 were performed at other commercial clinical laboratories. Chromosomal microarray analysis (CMA) for individual 8 was performed at BG. CMA for individuals 1–8, 10, and 12 was performed at other commercial clinical laboratories. It is unknown whether CMA was performed for individuals 9 and 11. NGS analysis Initially, a local data lake containing ES data for approximately 20,000 individuals with suspected Mendelian disorders (Hadoop ARchitecture LakE of Exomes [HARLEE]) 7 was utilized to discover POLR2A as a candidate Mendelian disease-associating gene. Within this dataset, fastq files were aligned to hg19, and variants were called with Atlas2 (v1.4.3) and annotated with VEP. High-quality ultra-rare (MAF < 1/10,000) variants observed in individuals within HARLEE were prioritized. 7 This analysis resulted in the discovery of ultra-rare, potentially pathogenic POLR2A variants in individuals 1, 6, and 8–13. Individuals 2–5 and 7 were ascertained for this study after the initial published discovery of a POLR2A -related developmental disorder, 6 with analysis conducted by the Broad CMG or other commercial clinical laboratories. For all individuals, region-specific intolerance to missense variants is calculated with the Missense Tolerance Ratio (MTR) score, with scores < 1.0 indicating a lower-than-expected ratio of missense to synonymous variants in the ExAC dataset 8 for the 31-bp window surrounding an amino acid residue. Estimates of residue-level conservation were obtained from GERP++ 9 via the UCSC Genome Browser. 10 , 11 Phenotyping Phenotyping is described in the Supplemental material and methods , with all phenotypes summarized in Table 1 . Structural domains and alignment POLR2A structural domains were derived from the yeast RNA polymerase II (Pol II) crystal structure, 3 which shares a remarkably high level of conservation with POLR2A . 
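The MTR is the observed fraction of missense among missense-plus-synonymous variants in a window, divided by the expected fraction. The sketch below is a simplified illustration of that ratio; real MTR pipelines derive expected counts from sequence-context mutation models and work from annotated population datasets, which is omitted here:

```python
def mtr(obs_mis, obs_syn, exp_mis, exp_syn):
    """Missense Tolerance Ratio for one window.

    Ratio of the observed missense fraction to the expected missense fraction;
    values < 1.0 indicate depletion of missense variation (intolerance).
    """
    observed = obs_mis / (obs_mis + obs_syn)
    expected = exp_mis / (exp_mis + exp_syn)
    return observed / expected

def sliding_mtr(positions, window=31):
    """Compute MTR in a window centred on each position.

    positions -- list of per-position dicts of observed/expected counts
                 (toy structure for illustration).
    """
    half = window // 2
    scores = []
    for i in range(len(positions)):
        lo, hi = max(0, i - half), min(len(positions), i + half + 1)
        win = positions[lo:hi]
        scores.append(
            mtr(
                sum(p["obs_mis"] for p in win),
                sum(p["obs_syn"] for p in win),
                sum(p["exp_mis"] for p in win),
                sum(p["exp_syn"] for p in win),
            )
        )
    return scores

# A constrained window: few observed missense variants relative to expectation.
print(round(mtr(obs_mis=2, obs_syn=8, exp_mis=7, exp_syn=3), 2))  # 0.29
```

A score well below 1.0, as in this toy window, is the signal of regional missense intolerance referred to in the text.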
Human POLR2A domain coordinates were derived by alignment as performed with MAFFT FFT-NS-2 (v7.305b). 12 Functional evaluation of variants in yeast Thirteen ultra-rare variants in POLR2A were identified in clinical samples at the time the yeast experiments were initiated. Yeast studies were conducted in the yeast ortholog of POLR2A , RPO21 (generally referred to as RPB1 ), and are summarized in Table 2 . Detailed experimental methods are described in the Supplemental material and methods . Ethics statement Data for individuals 1–8 were collected after written informed consent in conjunction with the Baylor Hopkins Center for Mendelian Genomics (CMG) (H-29697) study with approval by the institutional review board at Baylor College of Medicine. Other clinical samples (individuals 9–12) were from the Baylor College of Medicine clinical testing laboratories, now incorporated as BG; these data were studied in aggregate for the purpose of improving the diagnostic assay, under protocol H-41191. The procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national), and proper informed consent was obtained. Results Utilizing the HARLEE data lake, 7 a total of seven clinical exomes were originally identified with ultra-rare (MAF ≤ 1/10,000) suspected pathogenic variants in POLR2A . Attempts were made to contact all individuals and recruit them to a research protocol (see Material and methods ). After the initial report of de novo variation in POLR2A causing a Mendelian disorder, 6 five additional individuals were subsequently identified through the genetics clinic at Texas Children’s Hospital and a social media-based support group for families of individuals diagnosed with pathogenic variants in POLR2A ( Table 1 ), including one previously published individual for whom amended and additional phenotypic information is provided (individual 7; Haijes et al., individual 15). 
The family of individual 7 reports that she is positive for a few phenotypes previously reported 6 as negative: feeding difficulty/failure to thrive, decreased endurance, and decreased fetal movement. Phenotypic data from clinical samples detected in HARLEE for individuals without research consent are reported in aggregate, in accordance with institutional review board (IRB)-approved protocols (see Ethics statement ). Clinical ES or genome sequencing revealed no other genetic diagnosis for any of these individuals. CMA reportedly did not reveal any findings for individuals 1–8 and 12. It is unknown whether CMA was performed for individuals 9 and 11. CMA for individual 10 reportedly identified a variant of uncertain significance (VUS) gain in Xp22.13. Thus, in total, twelve individuals with ultra-rare, putatively pathogenic variants in POLR2A are reported. Pathogenicity for all of these variants is supported by the extreme degree of constraint for missense (missense intolerance Z score = 8.59) and loss-of-function (pLI = 1.0) variants observed in healthy individuals from the ExAC dataset. 8 All variants occur at highly conserved residues (as indicated by a GERP++ score ≥ 2.0), 9 in regions further constrained for missense variation in the ExAC dataset ( Table 1 ). Of these twelve individuals, there are ten affected by missense variants (including one previously published individual and two individuals affected by the same variant). We also report one individual affected by an in-frame deletion and one family affected by a C-terminal frameshift variant. 10 Variants are distributed throughout the length of the protein product, with no obvious association between severity of phenotype and affected protein domain ( Figure 1 ). 
We report one variant located within the Clamp core domain, two variants within the Clamp head, one variant within the Dock, three variants within the Cleft, one variant within the Trigger loop, one variant within the Jaw (observed in two individuals), and one frameshift variant in the C-terminal domain (CTD), which is likely to escape nonsense-mediated decay as it occurs within the last exon of the gene. 13 Missense variants Of the ten individuals harboring missense variants, four non-research-consented individuals have phenotypes reported in aggregate (individuals 9–12) rather than at an individual level. Six of these ten missense variants were confirmed to be de novo . The remaining four were observed in clinical samples for which parental ES was not performed, but their pathogenicity is further supported by functional evidence and phenotypic similarity. Commonly reported phenotypes among these individuals include developmental delay (10/10), intellectual disability (7/7), seizures (≥7/10), hypotonia (≥7/10), abnormal movements (≥7/10), ataxia (≥6/10), autism spectrum disorder (ASD) (≥6/7), failure to thrive/feeding difficulty (≥6/10), joint hypermobility (≥6/10), abnormal brain MRI (≥5/10), incontinence (≥5/10), skin abnormalities including keratosis pilaris and easy scarring (≥5/10), visual impairment (≥4/10), short stature (≥4/10), difficulty sleeping (≥4/10), skeletal abnormalities (≥4/10), recurrent upper respiratory infections (≥4/10), and cool distal extremities (≥4/10). (Diagnoses of intellectual disability and autism spectrum disorder cannot be ruled out for three clinical samples, as phenotyping for these individuals is limited to what was included on the ES requisition, ordered for these individuals at an age too young to typically diagnose intellectual disability or ASD.) Individuals 4 and 5 are of note, sharing an identical variant with a previously reported individual: c.3752A>G (p.Asn1251Ser; Haijes et al. 
). Individual 14 was reported as a 6-year-old girl with hypotonia, strabismus, frog position in infancy, decreased endurance, feeding difficulties, recurrent respiratory tract infections, disturbed sleeping, gastro-esophageal reflux, failure to thrive, microcephaly, brachyplagiocephaly, decreased fetal movements, aggressive behavior, pectus excavatum, walking at 5.5 years of age, and mega cisterna magna. 6 A detailed clinical description of individuals 4 and 5 is included in the supplement of this manuscript ( Supplemental note ). There is considerable variability in age of walking across these three individuals, reported as 3.5 (individual 5), 4.5 (individual 4), and 5.5 (Haijes et al., individual 14) years of age. Other notable phenotypic differences include the presence of a cardiac abnormality (atrial septal defect) in individual 4 and the presence of recurrent respiratory infections in individual 4 and Haijes et al. 6 individual 14, but not in individual 5. Facial dysmorphology is relatively similar for individuals 4 and 5 ( Figure 2 ). Other variants We report one potentially pathogenic frameshift variant, g.7417023_7417024del (GenBank: NC_000017.10 ) (c.5440_5441del [NM_000937.4]; p.Gln1814Valfs99ter), observed in individual 8, with a remarkably milder presentation than the individuals affected by missense variants. This variant occurs in the last exon of the gene (29/29) and leads to a premature termination near the C-terminal end of the protein, after amino acid residue 1912/1970, and an alteration of 99 amino acids. This alteration would result in truncation of the CTD by 20 full heptapeptide repeats and 1 partial repeat out of 52. Truncation of CTD repeats can confer phenotypes in model organisms. Furthermore, given its position in the gene, it is predicted to escape nonsense-mediated decay, with a fully expressed, yet truncated, protein product.
Individual 8 presented at 8 years of age with hypertonia/spasticity of the right extremities, recent worsening headache, memory problems and personality changes, hemiplegia, and delayed speech and motor milestones. MRI of the brain revealed left hippocampal atrophy, and an electroencephalogram (EEG) revealed a focus of spike activity in the left central region. Clinical ES initially failed to detect any pathogenic variants consistent with the phenotypic presentation. At the most recent follow-up, individual 8 was reported to have developed seizures at age 13 years, recurrent urinary tract infections, and a history of two Achilles tendon release surgeries. Daily severe headaches were reported, as well as difficulty sleeping. Aggregate genocentric reanalysis through HARLEE 14–20 revealed individual 8 as positive for the ultra-rare C-terminal frameshift variant in POLR2A . 7 Sanger sequencing of proband and maternal saliva samples revealed the variant to be maternally inherited. The mother of individual 8 reports no significant related medical history other than delayed speech and mild learning difficulty. Maternal family history is also positive for a son who was born at 35 weeks and had hypoplastic left heart, brain anomaly, and failure to thrive. He had three open heart surgeries and died at 7 months of age. He was reported to have a chromosome 6p duplication with an unknown POLR2A genotype. Harboring an in-frame deletion of two amino acids, individual 1 presented with the most severe phenotype of all evaluated individuals. Clinical ES was ordered at 3 months of age, with reported phenotypes of immature lungs, dilated cardiomyopathy, failure to thrive, hypotonia, developmental delay, dysmorphic features, and abnormal brain MRI findings including polymicrogyria, ventriculomegaly, hydrocephalus, and hypomyelination. ES failed to detect any known pathogenic variants consistent with the phenotypic presentation. Individual 1 reportedly died during infancy.
As above, genocentric reanalysis identified an ultra-rare variant in POLR2A , chr17:g.7401503GACCTTC>G (NM_000937.4) (c.1314_1319del; p.His439_Leu440del). The average GERP score across the six deleted nucleotides is 4.64, indicative of high evolutionary conservation. Notably, this same variant, when present as a developmental somatic mutation, has been established as causal for a subset of meningiomas. 5 Meningioma was not reported in individual 1. Sanger sequencing of parental saliva samples failed to detect the variant. Sanger sequencing in individual 1 confirmed the presence of the heterozygous variant. Functional studies Functional assays were conducted in yeast to further evaluate the pathogenicity of observed POLR2A variants in clinical and research samples ( Figure 3 ). As noted above, the large subunit of Pol II (encoded by POLR2A in human, RPB1 in yeast) is highly conserved in sequence and structure. We previously established a number of plate phenotypes highly predictive of transcription defects due to specific alterations to Pol II catalytic activity in yeast. 21 , 22 Residues analogous to some identified in individuals had been identified previously as being mutated in genetic screens for yeast transcription mutants: rpb1 Pro24Ser was identified as rpb1-9 (analogous residue to POLR2A p.Pro28), while rpb1 Gly1388Val was identified as sua8-4 (analogous to POLR2A p.Gly1418). 15 Here we employed these tests to interrogate conserved residues impacted by missense variants observed in humans for growth defects in yeast. The yeast strains utilized were the same as in Haijes et al., 6 as their strains were derived from the Kaplan lab. 23 Mutant plasmids encoding variants in conserved residues identified in a subset of individuals were introduced into yeast as the sole copy of RPB1 and phenotyped on a number of growth media. We observed conditional growth defects as well as phenotypes related to altered transcription for a subset of mutants ( Table 2 ).
Conditional defects such as temperature sensitivity or formamide sensitivity are consistent with protein folding or assembly defects exacerbated by heat or solvent, and these were observed for yeast rpb1 Pro24Arg, Glu104Leu, and Arg134Trp (analogous to p.Pro28Arg, p.Arg108Leu, and p.Arg140Trp, respectively). Other subsets of alleles show suppression of the gal10Δ56 transcriptional reporter (Asp423delIle424Δ, Asp423Δ, Ile424Δ, Asp1069Val, Gly1388Arg, and Gly1388Val) or constitutive expression of the imd2 promoter::HIS3 transcriptional reporter due to altered transcription start selection (Thr1272Ala) (see Table 2 for corresponding human variants). 21 , 22 , 24 Each of these phenotypes has been linked to altered Pol II transcription, usually due to a decrease in Pol II catalytic function. 21 , 22 , 24 , 25 These phenotypic effects are relatively minor and would be consistent with subtle alterations to Pol II function. Discussion Herein, we confirm the recent discovery of association between pathogenic germline variation in POLR2A and a phenotypically heterogeneous neurodevelopmental disorder. 6 We report the transmission of a potentially pathogenic POLR2A variant within a family: individual 8, inheriting a p.Gln1814Valfs99ter variant from a mother with a remarkably mild presentation of delayed speech and mild learning difficulties. We observe several previously unreported phenotypes (as compared to Haijes et al. 6 ) in individuals with POLR2A -related disorders, including ataxia (observed in 7/12, or 58.3% of individuals), joint hypermobility (6/12, 50%), short stature (5/12, 41.7%), skin abnormalities including easy scarring and keratosis pilaris (5/12, 41.7%), recurrent febrile illness of unknown etiology (4/12, 33.3%), congenital cardiac abnormalities (3/12, 25%), immune system abnormalities (3/12, 25%), hip dysplasia (2/12, 16.7%), and short Achilles tendons (2/12, 16.7%).
We also report a significantly higher proportion of individuals with epilepsy (8/12, 66.7%) than previously reported (3/15, 20%) (p value = 0.014196; chi-square test) and a somewhat lower proportion of individuals with hypotonia (8/12, 66.7%) than previously reported (14/15, 93.3%) (p value = 0.076309). We describe the facial dysmorphology of a subset of affected individuals, which is generally mild and nonspecific across individuals with different variants but remarkably similar for the two reported individuals sharing the same variant (individuals 4 and 5) ( Figure 2 ). In this cohort, previously unreported neuroradiological anomalies include polymicrogyria (2/12, 16.7%) and various benign, congenital anomalies, which cannot yet be ruled out as unrelated to POLR2A dysfunction, each occurring in a single individual: Rathke cleft cyst, hemangioma, and a small, enhancing developmental venous anomaly (DVA). We also report one individual (individual 1) with a germline variant identical to a previously reported meningioma-causing somatic mutation (p.His439_Leu440del). 5 As individual 1 died during infancy, the extent of correlation between germline inheritance of p.His439_Leu440del (or other pathogenic germline variants) and risk of developing meningioma remains unclear. Due to the centrality of POLR2A in transcriptional networks and the wide range of ways in which its function is known to be regulated, it can be reasonably inferred that a spectrum of possible pathogenic genetic variants will present with differential phenotypic presentation and severity. Future efforts should focus on elucidating the molecular mechanisms of pathogenicity, common or distinct, across the spectrum of known pathogenic POLR2A variants. Phenotypes of tested mutants in yeast in most instances were relatively weak, though a subset is strongly predicted to have protein structural or stability defects.
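The chi-square comparison quoted above (epilepsy in 8/12 individuals here vs. 3/15 previously reported) can be reproduced from the 2x2 contingency table using only the standard library; the sketch below applies Pearson's chi-square without continuity correction, which yields the quoted p value.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p value (df = 1, no continuity
    correction) for the 2x2 table [[a, b], [c, d]]; the p value uses
    erfc as the one-degree-of-freedom chi-square survival function."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return stat, math.erfc(math.sqrt(stat / 2))

# Epilepsy: 8 of 12 individuals in this cohort vs 3 of 15 previously reported
stat, p = chi2_2x2(8, 4, 3, 12)
print(round(p, 4))  # 0.0142, matching the quoted p value of 0.014196
```

The same function applied to hypotonia (8/12 vs 14/15) gives the non-significant result the text reports.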
Known catalytic mutants identified in yeast cause widespread transcriptional defects when introduced into human cells and likely would not be viable in an organism. For example, a known slow-elongating variant introduced into mouse embryonic stem cells (Polr2a Arg749His, analogous to Arg726His in yeast) could not be transmitted through the germline and caused early embryonic lethality. 26–28 Mouse embryonic stem cells containing Arg749His showed defects upon neuronal differentiation, likely deriving from the observed altered elongation rate, gene expression, and alternative splicing changes. Of interest, long genes, which are enriched among neuronally expressed genes, might be predicted to be especially sensitive to altered Pol II elongation or cotranscriptional splicing defects. 29 , 30 Haijes et al. delineate two potential molecular mechanisms of disease: haploinsufficiency (with a relatively mild presentation) and a dominant-negative effect caused by aberrant Pol II elongation. We emphasize that despite the relatively mild presentation of all individuals harboring truncating and frameshift variants reported to date (p.Gln700∗, p.Gln735∗, p.Pro1767fs, and p.Gln1814fs), 6 the potentially pathogenic CTD frameshift variants (p.Pro1767fs and p.Gln1814fs) most likely escape nonsense-mediated decay and thus exhibit a dominant-negative mechanism of pathogenicity. Therefore, while haploinsufficiency associated with p.Gln700∗ and p.Gln735∗ almost certainly constitutes a distinct mechanism of pathogenicity, expressed mutant POLR2A products can cause the full range of phenotypic severity observed in POLR2A -related disorders. Taken together, these molecular and phenotypic data suggest that these pathogenic variants constitute a spectrum of transcriptional dysfunction, with phenotypes likely explained by a combination of specific POLR2A variation in conjunction with the genetic burden across a potential variant- or domain-specific network of interacting partners.
Such a model could explain both a degree of phenotypic convergence (e.g., similar facial dysmorphism in individuals 4 and 5) and variable expressivity (e.g., microcephaly, cardiac and immune abnormalities present in individual 4 but not in individual 5) or incomplete penetrance (e.g., the mother of individual 8 exhibiting only sub-clinical phenotypes) for a given disease-associating variant. To assess this model, global transcriptional profiling (via RNA-sequencing [RNA-seq], GRO-seq, etc.) could be evaluated across biologically relevant cell types or tissues using patient-derived induced pluripotent stem cell (iPSC) lines. The function of individual pathogenic variants, in the appropriate genetic background, could be assessed by transcriptional profile comparison of patient-derived cells against a split of the same cell line with a mutationally induced wild-type POLR2A . The function of different pathogenic variants could then be compared by normalizing their impact against their respective isogenic controls. Acknowledgments This work was supported in part by grants UM1 HG008898 from the National Human Genome Research Institute (NHGRI) to the Baylor College of Medicine Center for Common Disease Genetics and UM1 HG006542 from the National Heart, Lung, and Blood Institute (NHLBI) and NHGRI to the Baylor Hopkins Center for Mendelian Genomics. C.D.K. was supported by grants R01 GM097260 and R01 GM120450 from the National Institute of General Medical Sciences (NIGMS). A.W.H. was supported in part by NIH T32 GM08307-26 , The Cullen Foundation , and the Baylor College of Medicine President’s Circle . D.P. was supported by a Clinical Research Training Scholarship in Neuromuscular Disease partnered by the American Academy of Neurology (AAN), American Brain Foundation (ABF) and Muscle Study Group (MSG), and by the International Rett Syndrome Foundation (IRSF grant #3701-1 ). S.M.W. is supported by the Victorian Government’s Operational Infrastructure Support Program . S.D.
was supported by the National Human Genome Research Institute , the National Eye Institute , and the National Heart, Lung, and Blood Institute grant UM1 HG008900 and in part by National Human Genome Research Institute grant R01 HG009141 . We thank Ali Jalali and Amy L. McGuire for the valuable discussions. Last, we thank the individuals participating in research and their families for their assistance and significant contributions to this research. Declaration of interests J.R.L. has stock ownership in 23andMe, is a paid consultant for Regeneron Pharmaceuticals, and is a co-inventor on multiple US and European patents related to molecular diagnostics for inherited neuropathies, eye diseases, and bacterial genomic fingerprinting. The Department of Molecular and Human Genetics at Baylor College of Medicine derives revenue from clinical genetic testing offered in the Baylor Genetics Laboratory. Data and Code Availability POLR2A variants are available on ClinVar: SUB7404119, accession pending review. Supplemental Information Supplemental Information can be found online at https://doi.org/10.1016/j.xhgg.2020.100014 . Supplemental information Document 1. Supplemental note, supplemental material and methods, and Tables S1 and S2 Document 2. Article plus supplemental information Web resources Baylor Genetics Laboratory, https://baylorgenetics.com ClinVar, https://www.ncbi.nlm.nih.gov/clinvar/ Face2Gene tool, https://www.face2gene.com OMIM, https://www.omim.org The Human Phenotype Ontology, https://hpo.jax.org/app/
|
[
"JERONIMO",
"HARLEN",
"CRAMER",
"CRAMER",
"CLARK",
"HAIJES",
"HANSEN",
"TRAYNELIS",
"LEK",
"DAVYDOV",
"KENT",
"KATOH",
"COBANAKDEMIR",
"NONET",
"BARTOLOMEI",
"SCAFE",
"MEISELS",
"LITINGTUNG",
"CHAPMAN",
"GIBBS",
"KAPLAN",
"MALIK",
"BERROTERAN",
"KAPLAN",
"QIU",
"FONG",
"FONG",
"SALDI",
"MASLON",
"ZYLKA"
] |
ef988765f3ba465e979fcb2accbf9fe8_Prevalence of Toxocara spp eggs in soil of public areas in Iran A systematic review and meta-analysi_10.1016_j.ajme.2017.06.001.xml
|
Prevalence of Toxocara spp. eggs in soil of public areas in Iran: A systematic review and meta-analysis
|
[
"Maleki, Bahman",
"Khorshidi, Ali",
"Gorgipour, Mohammad",
"Mirzapour, Aliyar",
"Majidiani, Hamidreza",
"Foroutan, Masoud"
] |
Toxocariasis is a zoonotic and widespread infection which manifests as a spectrum of syndromes in humans, such as visceral, neural, ocular, covert, and asymptomatic forms. Herein we aimed to design a systematic review and meta-analysis to determine the prevalence of Toxocara spp. eggs in soil depositories in Iran. English (PubMed, Scopus, Google Scholar, Web of Science, Science Direct, EBSCO, and Ovid) and Persian (Scientific Information Database and Magiran) databases were explored. This review resulted in a total of 14 publications meeting the inclusion criteria during January 2000–November 2016. Altogether, 3031 soil samples were examined, among which 470 were positive for Toxocara spp. The weighted overall prevalence of Toxocara spp. in soil samples was 16% (95% CI = 11–21%), and Tehran and Qazvin provinces had the highest and lowest prevalence rates, respectively. Meta-regression analysis showed that the correlations between the prevalence of Toxocara eggs in soil and sample size (P = 0.45) and year of study (P = 0.42) were not statistically significant. Further studies are highly recommended to enlighten different aspects of toxocariasis in Iran.
|
1 Introduction The enigmatic ascarid roundworms, Toxocara canis ( T. canis ) and Toxocara cati ( T. cati ), are envisaged as one of the striking neglected tropical diseases, being able to ignite serious complications such as visceral larva migrans (VLM) syndrome and toxocariasis. Feline and canine feces act as the significant depot of unembryonated eggs, which become larvated under optimum soil and environmental conditions. 1,2 Humans are considered paratenic hosts, and infection occurs via ingestion of undercooked meat of infected paratenic hosts (chickens, pigs and ruminants), polluted water, contaminated soil (playgrounds, parks, gardens, lake beaches and sandpits) and close contact with pet animals. 3,4 Ingested eggs penetrate the intestinal mucosa, disseminate in the human body through the blood stream and encyst in several tissues. 4–10 The four major manifestations of toxocariasis are as follows: (1) VLM, frequently taking place in young children, 5 is evinced by Toxocara larvae wandering in body organs including the liver, lungs and brain, provoking symptoms such as hepatitis, pneumonitis, meningo-encephalitis, headache, abdominal cramps, eosinophilia, and behavioral and cognitive perturbations; (2) the so-called ocular larva migrans (OLM) is permanent loss of sight due to retinal damage and detachment, which is typical in older children; (3) long-term exposure to infection in children may give rise to a hidden, hardly diagnosed form called covert toxocariasis, which emerges as asthma-like symptoms or eosinophilia with sleep and intellectual disorders; and (4) common toxocariasis, usually in adults, with rash, pruritus, dyspnea and abdominal pain. One of the most noted outcomes of infection is dysfunction of cognitive abilities in youngsters, where infected individuals show decreased ability in reading, math operations and block design.
1 On the other hand, toxocariasis is believed to be a possible cause of blindness and a potential cause of asthma, and it has been linked to seizures and epilepsy. 11 Furthermore, a rare but potentially life-threatening outcome of toxocariasis is cardiac involvement, which evokes inflammation of heart tissues, tamponade and heart failure. Infection with Toxocara species has a global distribution and is regarded as one of the most frequent helminthiases in humans, according to seroprevalence reports. 12–15 Histopathological examinations as well as several medical imaging techniques, e.g. computed tomography (CT), ultrasound and magnetic resonance imaging (MRI), have been employed to discern the injuries of creeping parasites in the human body. 1,8 Although serological tests such as enzyme-linked immunosorbent assay (ELISA) with Toxocara excretory-secretory antigens and western blotting are the regular methods of diagnosis for toxocariasis, 16–19 they are not very specific, as cross reactions may occur. 20–22 As far as we know, there has been no systematic and quantitative analysis of the available data on the prevalence of Toxocara species in soil depots in Iran. Therefore, we designed a systematic review and meta-analysis study in order to shed light on the prevalence of these common ascarids in Iran. 2 Methods 2.1 Search strategy To clarify the prevalence of Toxocara spp. eggs in soil in Iran, we planned a systematic review and meta-analysis based on online literature screening of English (PubMed, Scopus, Google Scholar, Web of Science, Science Direct, EBSCO, and Ovid) and Persian (Scientific Information Database and Magiran) databases for papers published from January 2000 to November 2016. We applied medical subject heading (MeSH) terms as follows: “ Toxocara spp.”, “Iran”, “Epidemiology” and “Prevalence”, alone or in combination using “OR” and/or “AND”.
The reference lists of the selected full-text papers were also meticulously checked manually to find articles not retrieved by the database searching. 2.2 Study selection and data extraction According to the inclusion criteria, cross-sectional studies based on parasitological and molecular techniques that estimated the prevalence of Toxocara spp. eggs in soil samples were included. The eligibility of all retrieved papers was assessed by three reviewers (MF, MG, and BM). Discrepancies were resolved by discussion and consensus. Afterwards, the data of interest were gathered using a pre-designed data extraction form covering province, sample size, positive cases, method of examination, main findings and year of publication. The current review was performed based on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guideline. 23 2.3 Meta-analysis The meta-analysis procedure was performed as previously described. 24–29 3 Results Of the 518 studies retrieved from the online literature, 14 papers were eligible for this systematic review and meta-analysis based on the inclusion criteria, as depicted in Fig 1 . The results and details of each qualified study are summarized in Table 1 . Egger’s regression test was applied to detect publication bias, which was statistically significant ( P = 0.001) ( Fig. 2 ). In total, 3031 soil samples were examined for Toxocara between January 2000 and November 2016. The random-effects model revealed that the weighted overall prevalence of Toxocara spp. in soil samples was 16% (95% CI = 11–21%). According to the meta-regression results, the correlations between the prevalence of Toxocara spp. eggs in soil and sample size ( P = 0.45) and year of study ( P = 0.42) were not statistically significant ( Table 2 ). The forest plot is illustrated in Fig. 3 .
4 Discussion Despite the promotion of health and hygiene in today's societies, the risk of transmission and incidence of parasitic infections persists. Toxocariasis is a soil-transmitted helminthiasis that can lead to serious sequelae. 1,2 The present meta-analysis aimed to estimate the prevalence of Toxocara spp. in soil samples in Iran. After a thorough literature review based on the inclusion criteria, 14 papers were finally selected, indicating that the weighted overall prevalence of Toxocara spp. in soil samples was 16% (95% CI = 11–21%). Some investigations in North and South America have estimated the prevalence of Toxocara ova in soil samples at 0.3–39% and 0.3–79.4%, respectively. 30–32 Furthermore, the prevalence rates in Europe and Asia lie within 3.2–64% and 5.7–95%, respectively. 33–36 Toxocariasis is probably mainly associated with T. canis and, to a lesser extent, with T. cati . Several biological factors are implicated in the persistent prevalence of this helminthiasis in the final hosts, such as vertical transmission, a diverse multiple-host system, and the high resistance of the eggs in various environments. 37 It has been demonstrated that embryonation of T. canis eggs frequently occurs during warm seasons, whereas in tropical countries it may take place throughout the year. 38 Additionally, soil type, pH and vegetation density can play a major role in the survival of Toxocara ova. Besides, some human risk factors such as geophagia and/or pica, mostly in low-income countries, highlight the importance of soil as a main source of infection in the propagation of toxocariasis. 39–41 Children are usually more prone to accidentally ingesting Toxocara spp. eggs due to putting various objects in their mouths, their proximity and emotional attachment to dogs, the likelihood of geophagia, eating earthworms, etc. 38,42–45 Toxocariasis is considered a public health issue. Based on epidemiologic data, T.
canis is found in many habitats, from tropical regions to sub-Arctic lands. In comparison to developed countries, the prevalence of toxocariasis is higher in underdeveloped, tropical nations such as Swaziland, Nigeria, Nepal, Indonesia, Brazil and Peru. 46,47 Given the presence of Toxocara species, their permanent persistence in dogs and cats, and their potentially pathogenic nature in humans, it is recommended to examine pet feces regularly and to apply anti-helminthic medication programs. 38 Furthermore, fencing playgrounds and other open grounds in municipal areas to prevent the entry of definitive hosts, keeping children from playing with soil in public places, and using processed, sanitary soil supplies for children would decrease the possibility of infection transmission. Also, particular attention must be paid to educating the public, especially pet owners, who should be familiarized with the origin of infection, transmission pathways, disease symptoms and control measures. On the other hand, general physicians and medical experts should consider toxocariasis as a probable differential diagnosis. Raising public awareness is a helpful means of achieving early detection and avoiding subsequent complications. The current review had some limitations: (1) most studies on soil samples did not mention detailed soil characteristics or accurate climatic parameters in the sampling area; (2) there is a gap in terms of molecular techniques to determine Toxocara species in soil, since most studies performed only parasitological examination; and (3) there is a lack of literature from many parts of the country. These limitations may have a significant impact on the epidemiologic picture of toxocariasis in Iran.
Authors’ contribution BM, MG, and MF conceived the study; BM and MF designed the study protocol; BM, MG, and MF searched the literature and extracted the data; AK analyzed and interpreted the data; HM wrote the manuscript; HM, AK, AM, BM and MF critically revised the manuscript. All authors read and approved the final manuscript. Compliance with ethical standards Conflicts of interest The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The authors received no financial support for the research, authorship, and/or publication of this article. Ethical approval As this review did not involve any human or animal subjects, ethical approval was not required. Acknowledgment The authors would like to thank all staff of Department of Parasitology of Tarbiat Modares University, Tehran, Iran. We are very grateful to Mr Mousa Vatanmakanian for his helpful consultations.
|
[
"LEE",
"OVERGAAUW",
"PARSONS",
"KHADEMVATAN",
"STRUBE",
"TAIRA",
"SALEM",
"STURCHLER",
"NAGAKURA",
"KHADEMVATAN",
"WALSH",
"KUENZLI",
"BARRY",
"PINELLI",
"MUNOZGUZMAN",
"BALDISSEROTTO",
"RUTTINGER",
"PARSONS",
"DUPAS",
"MAGNAVAL",
"SMITH",
"DESAVIGNY",
"MOHERDLIBERATIATETZLAFFJALTMANDGGROUPP",
"KHALKHALI",
"KHADEMVATAN",
"FOROUTAN",
"MAJIDIANI",
"FOROUTANRAD",
"FOROUTANRAD",
"WOODHALL",
"MARQUES",
"MILANO",
"AVCIOGLU",
"PERECMATYSIAK",
"MOHDZAIN",
"WIWANITKIT",
"GLICKMAN",
"MACPHERSON",
"TREJO",
"AZAM",
"GAMBOA",
"LEE",
"WON",
"OVERGAAUW",
"DUNSMORE",
"JENKINS",
"FAN",
"MOTAZEDIAN",
"TAVASSOLI",
"ZIBAEI",
"GAREDAGHI",
"TAVALLA",
"SARAEI",
"YAKHCHALI",
"MARAGHI",
"BERENJI",
"GHOMASHLOOYAN",
"MALEKI",
"HEZARJARIBI",
"GHASHGHAEI"
] |
2fbb725b81714d54a5b1148d2669eafd_Erratum to Effects of blindfolding and tail bending of Egyptian water buffaloes on their behavioural_10.1016_j.vas.2020.100105.xml
|
Erratum to “Effects of blindfolding and tail bending of Egyptian water buffaloes on their behavioural reactivity and physiological responses to pain induction” [5C (June 2018) 38–43]
|
[
"Mohamed, R.A.",
"Abou-Ismail, U.A.",
"Shukry, M.",
"Elmoslemany, A.",
"Abdel-Maged, M."
] | null |
The publisher regrets a production error that caused the omission of the conflict of interest statement in this paper. The authors declared that they had no conflicts of interest regarding this paper. We have amended our processes to ensure that such omissions are not repeated. The publisher would like to apologize for any inconvenience caused.
|
[] |
a1c129005d984aeabdedd7b1b34ef306_Food-trade-associated COVID-19 outbreak from a contaminated wholesale food supermarket in Beijing_10.1016_j.jobb.2021.04.002.xml
|
Food-trade-associated COVID-19 outbreak from a contaminated wholesale food supermarket in Beijing
|
[
"Lu, Shan",
"Wang, Weijia",
"Cheng, Yanpeng",
"Yang, Caixin",
"Jiao, Yifan",
"Xu, Mingchao",
"Bai, Yibo",
"Yang, Jing",
"Song, Hongbin",
"Wang, Ligui",
"Wang, Jiaojiao",
"Rong, Bing",
"Xu, Jianguo"
] |
The re-emerging outbreak of COVID-19 in Beijing, China, in the summer of 2020 originated from a SARS-CoV-2-contaminated wholesale food supermarket. We postulated that the Xinfadi market outbreak was linked to food-trade activities. Our Susceptible-Infectious-Recovered coupled agent-based modelling (SIR-ABM) analysis of the diffusion of SARS-CoV-2 suggested that the trade-distancing strategy effectively reduces the reproduction number (R0). The retail shop closure strategy reduced the number of visitors to the market by nearly half. In addition, the buy-local policy option reduced infections by more than 70% in total. Therefore, retail closures and buy-local policies could serve as significantly effective strategies with the potential to reduce the size of the outbreak and prevent probable outbreaks in the future.
|
1 Introduction When the Corona Virus Disease of 2019 (COVID-19) outbreak was noticed for the first time at the end of 2019, the majority of cases were linked to the Huanan seafood wholesale market of Wuhan in the Hubei province of China. This market is mainly involved in the sale of seafood, vegetables, fruits, poultry, snakes, birds, frogs, hedgehogs, and other wild animals. On June 11, 2020, another outbreak, with 335 confirmed cases, emerged in Beijing and was found to be linked with the Xinfadi wholesale food market, where poultry, chicken, mutton, seafood, fruits, and vegetables were on sale. Further studies involving whole genome sequence analysis of the Xinfadi strain isolated from the patients revealed that this strain was different from the one that caused the Wuhan outbreak, being grouped into Branch 1 of the L-lineage circulating in Europe. 1–4 It was also revealed that SARS-CoV-2 was detected in both food processing and environmental samples in the Xinfadi wholesale food market, including a cutting board used to slice imported salmon ( https://www.caixinglobal.com/2020–07-08/101577190.html ). 5,6 Recently, SARS-CoV-2 has been detected on frozen food packages imported from other countries. 7 These data suggest that wholesale food markets contaminated with frozen food have played a significant role in the transmission of SARS-CoV-2, with modern food distribution and supply practices accelerating the spread of the virus. 8 2 Materials and methods 2.1 Internet-based investigation of COVID-19 outbreak We mined information on the Xinfadi outbreak of COVID-19 in Beijing available on the internet, mainly from the detailed daily situation reports released by the Beijing Center for Disease Control and Prevention ( https://www.bjcdc.org/ ). A total of 335 cases were reported from June 11 to July 12, 2020. All confirmed cases were divided into cohorts of sellers, buyers, and contacts.
The data on COVID-19 cases from the different groups (sellers, buyers, and contacts) were georeferenced and aggregated into 500 m-spaced hexagon grids using GPS location data ( Fig. 2 A and B). All other geodata and base maps, road networks, and urban points of interest (POI, including locations of all supermarket stores) were acquired online from OpenStreetMap (OSM) ( https://www.openstreetmap.org ). Population distributions and Xinfadi market-related trade activity data in early June 2020 were derived from the Tencent Location Big Data service ( https://heat.qq.com ) [9] and Dianping ( http://www.dianping.com ), respectively, using Python crawling scripts. In the population datasets, the city was divided into a grid of cells of approximately 5 × 5 km and then downscaled to 500 m resolution hexagonal grids, with an estimated population value assigned to each. By considering the distance between the grid cells and stores along road networks, we assigned each cell to a 'local' store; this process generated over 1000 subpopulations used to build trade-mobility layers reflecting trade and shopping patterns for further spatial modeling analysis. The trade-mobility data revealed significant variations in the number of buyers per market ( Fig. 2 C). Geostatistical analyses and COVID-19 disease dynamics simulations were completed in QGIS ( https://www.qgis.org/en/site/ ) with the Geoda spatial correlation tools ( http://geodacenter.github.io/ ) [10] and the NetLogo software ( http://ccl.northwestern.edu/netlogo/index.shtml ) for the agent-based model (ABM). The spatial database was compiled by utilizing OSM data layers of residential areas, business areas, markets, roads, and district boundaries in Beijing City [11] ( Fig. 2 C), together with the population density ( Fig. 2 D). The spatial network model was built on the current traffic road network, using distance (time) for service area analysis ( Fig. 2 C).
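The cell-to-store assignment step described above can be sketched in a few lines. This is a minimal illustration with hypothetical data: it uses straight-line distance as a stand-in for the road-network travel distance the authors actually used, and all names and coordinates are invented for the example.

```python
from math import hypot

def assign_local_stores(cells, stores):
    """Assign each grid cell to its nearest store.

    `cells` and `stores` map an id to an (x, y) coordinate. Euclidean
    distance is used here as a proxy for the road-network distance
    described in the text; returns {cell_id: store_id}.
    """
    assignment = {}
    for cid, (cx, cy) in cells.items():
        assignment[cid] = min(
            stores,
            key=lambda sid: hypot(stores[sid][0] - cx, stores[sid][1] - cy),
        )
    return assignment

# Toy example: three 500 m grid cells and two candidate stores.
cells = {"c1": (0.0, 0.0), "c2": (0.9, 0.1), "c3": (5.0, 5.0)}
stores = {"xinfadi": (1.0, 0.0), "local": (5.0, 4.0)}
print(assign_local_stores(cells, stores))
# → {'c1': 'xinfadi', 'c2': 'xinfadi', 'c3': 'local'}
```

Repeating this over all hexagon cells yields the >1000 subpopulation trade-mobility layers mentioned above.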
Daily confirmed cases in the cohorts (buyers, sellers, and contacts) were summarized for each week after the first confirmed case in every grid to establish a Geographic Information System (GIS) based disease characteristic data layer, including the location and density of confirmed cases. To examine the spatial association of COVID-19, Moran's I statistic was computed for each week and for the different groups ( Fig. 3 C and D). Moran's I is calculated as [12]:

I = (n / S_0) · [∑_{i=1}^{n} ∑_{j=1}^{n} w_{ij} (x_i − x̄)(x_j − x̄)] / ∑_{i=1}^{n} (x_i − x̄)²   (1)

where w_{ij} is the weight between observations i and j, and S_0 is the sum of all w_{ij}:

S_0 = ∑_{i=1}^{n} ∑_{j=1}^{n} w_{ij}

The value range of Moran's I is [−1, 1] ( Fig. 4 A). 3 Susceptible, Infectious, and Recovered (SIR) coupled Agent-Based Modelling (SIR-ABM) Our SIR-coupled ABM model (SIR-ABM) introduced Seller (Se) and Buyer (Bu) subgroup agents within the traditional Susceptible (S) compartment. The states of human agents change under certain conditions over time. It is assumed that the total population is fixed [13]:

N = S(t) + I(t) + R(t)

It follows that

0 = dN/dt = dS/dt + dI/dt + dR/dt, ∀ t ≥ 0   (2)

The SIR-ABM model integrates three layers: real-world data on the city population, real-world data on the mobility of this population linked with trade data ( Fig. 3 A and B), and an individual-based stochastic mathematical model of the infection dynamics. For population mobility, road travel networks with origin–destination matrices of trading patterns were used to ensure comparability between and within cells. The disease is transmitted between adjacent grids when people trade (shop) across the grid cells. A wide range of non-medical interventions, such as restrictions on retail shops inside the wholesale food market, market closures, and buying local ( Fig.
4 B) via adding case progress status variables related to market trade data, were then modeled and studied in terms of the effectiveness of the contact-tracing regime. 4 Results 4.1 The transmission of SARS-CoV-2 among the sellers and buyers from the Xinfadi market and the Beijing outbreak of COVID-19 Up until July 12, 2020, a total of 335 confirmed COVID-19 cases linked with the Xinfadi market were reported by the Beijing CDC. Of these, 261 cases had a history of direct exposure to the Xinfadi wholesale food market and were divided into two cohorts, sellers and buyers, with 177 and 83 cases, respectively. The sellers' cohort included all employees of the market, such as managers, vendors, cleaners, and all others who worked in the market. The buyers' cohort included customers who visited the market (n = 26), such as buyers for restaurants (n = 8), for other food markets (n = 2), for their own families (n = 14), and for enterprises (n = 3). These 26 of the 83 infected buyers transmitted the disease, leading to 63 new infections, an approximately 3.2-fold increase in the total number of confirmed cases. 4.2 Transmission of SARS-CoV-2 among the buyers in the Xinfadi market and the COVID-19 outbreak in Beijing Our internet-based investigation revealed eight primary COVID-19 cases among infected staff from seven restaurants in Beijing and one restaurant in Tianjin. These confirmed primary cases then led to 24 secondary and 2 tertiary transmissions. Additionally, seven of the eight affected restaurants experienced secondary transmission. Further investigation revealed that eight buyers for restaurants in Beijing were diagnosed with COVID-19. They were distributed in three districts: Daxing (n = 3), Haidian (n = 2), and Fengtai (n = 3) ( Fig. 1 ). Two cooks in the barbeque restaurant were virologically diagnosed and had no history of exposure to the Xinfadi market.
However, the manager of the barbeque restaurant had an exposure history to the Xinfadi market but no evidence of infection ( Fig. 1 ). Of note, a dishwasher in a western food restaurant at C Hotel in Tianjin city was diagnosed on June 17, 2020. He denied any history of visiting Beijing. In addition, a chef in the same restaurant tested positive for IgM against SARS-CoV-2 on June 19. The chef had visited Beijing frequently in the preceding two weeks but denied visiting the Xinfadi market. Phylogenetic analysis grouped the complete SARS-CoV-2 genome sequence obtained from the infected dishwasher with the Xinfadi strains, which had not previously circulated in this region, further implying that this case was linked to the Xinfadi market. The infected restaurant staff were cooks, food dispensers, or servers. The items purchased from the Xinfadi market for restaurants included meat, seafood, vegetables, fruits, and others. The restaurants in Beijing were relatively small, with more than 50 seats. All of these restaurants had been open for business before the first employee was diagnosed with the infection. Remarkably, no customer infection from these restaurants was reported. Additionally, two food markets were confirmed to be affected, with buyers who had purchased items from the Xinfadi wholesale food market. A buyer from the Yuquandong food market in Haidian District was diagnosed with COVID-19 and led to five second-generation and one third-generation transmissions, including one of his family members diagnosed on June 15, 2020. Four vendors in adjacent stalls, about two meters away, also contracted SARS-CoV-2 and were diagnosed between June 14 and 25, 2020.
By sharing the same public toilet in the building where the infected vendor rented and lived, a staff member from a small restaurant in a nearby food court was also infected, and then further transmitted the disease, leading to four additional cases ( Fig. 1 ). A buyer from a food market in Xicheng district, who had purchased items from the Xinfadi market, was diagnosed with COVID-19 on June 15, 2020. However, no secondary transmission was detected, and all 62 close contacts of the buyer tested negative for SARS-CoV-2. Two buyers from an enterprise of food products were diagnosed with SARS-CoV-2 on June 15 and 17, 2020, leading to 11 secondary and three tertiary transmissions ( Fig. 1 ). A buyer from a food research institution tested positive for SARS-CoV-2 on June 12, 2020, leading to five secondary transmissions, including two cases in Beijing and three cases in Liaoning province ( Fig. 1 ). Fourteen buyers for their respective families were infected and diagnosed with SARS-CoV-2 from June 12 to 24, 2020, leading to 15 secondary and 3 tertiary transmissions. Notably, 13 of the 14 infected buyers transmitted the virus to their family members ( Fig. 1 ). One infected buyer had returned to his home town in Hebei province, resulting in a secondary transmission; one of his family members was also infected ( Fig. 1 ). 4.3 Transmission of SARS-CoV-2 among the sellers in the Xinfadi market The retrieved data revealed that 11 of the 177 (6.2%) infected sellers in the Xinfadi wholesale food market had caused secondary transmissions. Three of the secondary-transmission cases were in Beijing, leading to the infection of four family members. It was also revealed that a seller who had immigrated from Sichuan province infected his wife, who was diagnosed after returning to their home city. Meanwhile, a seller from Zhejiang province returned to his hometown and caused no further transmissions.
A total of 21 infections in Hebei province were associated with the Xinfadi market. Two infected sellers caused two secondary and one tertiary transmission. Seven infected sellers caused nine secondary transmissions in Hebei province, all of which involved contact with the primary cases after the latter returned from Xinfadi ( Fig. 1 ). However, there was no information to establish who infected whom. 4.4 Food-trade-associated SARS-CoV-2 transmission analysis Consumers (buyers) who had visited the Xinfadi market and shopped in other places were identified using crawled data from Dianping.com. The actual customer and consumption data with derived store addresses helped to build spatial connections to evaluate the relevance of the Xinfadi market to consumer activities in other regions. The top-10 districts/regions with Xinfadi trade-related stores, based on consumption records, are summarized in Table 1 [14]. A complete list of stores and other relevant business and spatial data was shared in a dedicated GitLab project site ( https://gitlab.com/map4china/xinfadi-COVID.git ). It was observed that the Xinfadi market attracted customers from across a large region, and most trade-related stores were spatially distributed within the Fifth Ring Road and in the south of Beijing. Among them, approximately 2000 stores were concentrated in the Fengtai and Chaoyang districts, accounting for 46% of the total number of stores in Beijing. The trading stores' coverage was relatively uniformly distributed in the city center. The largest population served by those stores was located in the Xicheng, Dongcheng, Haidian, and Fengtai districts, and in several neighborhoods of the Chaoyang district. The map shown in Fig. 2 C illustrates the drive times to the Xinfadi market from the connected stores ( Table 2 ), which provides a useful method for determining trade connections based on travel time and road networks.
It uses distances along actual streets and highways, combined with their respective travel speeds, to calculate travel times for food shopping. The map also displays the geographic distribution of other trade-based stores linked to the Xinfadi market. By tapping into such trade-based store network/location data in the ABM, our model tracked the transmission of the infection and estimated the number of people who may have been exposed. 4.5 Spatial-temporal analysis of SARS-CoV-2 transmission Our spatio-temporal analysis generated maps of the spatial cumulative case distribution in Beijing from June 11 to July 12, 2020 ( Fig. 2 ). The maps revealed a few COVID-19 transmission clusters in two neighborhoods of the Fengtai and Daxing districts ( Fig. 2 A, 2B), with a much larger buyer bounding area ( Fig. 2 C). The highest number of seller transmission hubs was located in the Xinfadi neighborhood, while the buyer transmission hubs extended to cover more than three different districts. The analysis of the exposure population density for the affected grid cells revealed that Fengtai district had the highest number of cases, whereas the northern districts reported fewer or no cases ( Fig. 2 D). It also revealed that the areas with a high incidence of COVID-19 were concentrated across neighborhoods in the southwest of Beijing's Fifth Ring Road, the western section of the Fourth Ring of southwestern Beijing, and the western portion of Fuxing Road. The spread of COVID-19 within the Sixth Ring Road was centered on the intersection of the South Fourth Ring Road and the Beijing-Kaifeng Expressway and extended along the northwest-southeast direction. Following this descriptive account of the spatial–temporal distribution of COVID-19 transmission, a global/local spatial autocorrelation analysis was conducted with the epidemiological data to interpret the quantitative distribution characteristics of the spatial aggregation.
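The drive-time calculation described above, summing segment lengths divided by their travel speeds along the road network, amounts to a few lines of arithmetic. The route below is hypothetical, for illustration only.

```python
def drive_time_minutes(segments):
    """Travel time in minutes along a route given as (length_km, speed_kmh)
    road segments, as in the drive-time / service-area analysis above."""
    return sum(60.0 * length / speed for length, speed in segments)

# Hypothetical route from a store to the Xinfadi market:
# 2 km of city streets at 30 km/h, then 8 km of expressway at 60 km/h.
route = [(2.0, 30.0), (8.0, 60.0)]
print(drive_time_minutes(route))  # → 12.0 minutes
```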
The bivariate Moran's I of 0.38 ( Fig. 4 A) for the numbers of buyer and contact cases indicates a strong positive spatial correlation between SARS-CoV-2 transmission in the two groups. In addition, the global Moran's I of new COVID-19 cases each week reveals spatial clustering mainly in the first and third weeks, while the new cases in the second and fourth weeks show a relatively unstable, random distribution. A transmission risk analysis further revealed that the high-risk (hotspot) areas of COVID-19 infection, located in the upper-right (HH) quadrant, are mainly concentrated in the southwest (South Third Ring Road) region of Beijing. The risk of COVID-19 transmission in the suburban areas (LH and HL quadrants) in the northeast of Beijing is comparatively low. High-low clustering refers to the transition from a high-risk transmission area to a low-risk transmission area, and low-high clustering refers to the transition from a low-risk transmission area to a high-risk transmission area. Our analyses reveal that some cases near the Southwest Fourth Ring Road and Southwest Fifth Ring Road in Beijing do not belong to this category. The SIR model output ( Fig. 4 A) also supports the results from the space-time statistics. Our modelling analyses reveal that the outbreak originated in the south by June 2020 and then expanded to the west-central and southern districts of Beijing after June 2020. In July 2020, the transmission extended to the surrounding region. It is well known that the higher the density and degree of urban gathering, the more severe the spread of an epidemic. We implemented the mitigation strategies to simulate the transmission of COVID-19 between human agents based on the SIR-ABM analysis ( Fig. 4 B). In the SIR-ABM analysis, two COVID-19 mitigation strategies were applied and investigated.
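The global Moran's I statistic used in these weekly clustering analyses follows equation (1) of the Methods and can be computed directly. Below is a minimal sketch with toy data: four cells forming two like-valued neighbour pairs, a case of perfect positive spatial autocorrelation (I = 1).

```python
def morans_i(x, w):
    """Global Moran's I per equation (1) of the Methods:
    I = (n/S0) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2,
    where `x` is a list of values per grid cell and `w` a full n x n
    spatial weight matrix with S0 the sum of all weights.
    """
    n = len(x)
    xbar = sum(x) / n
    dev = [xi - xbar for xi in x]
    s0 = sum(sum(row) for row in w)
    num = sum(w[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * num / den

# Two like-valued pairs of cells that are also spatial neighbours:
x = [1, 1, -1, -1]
w = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]
print(morans_i(x, w))  # → 1.0 (perfect positive spatial autocorrelation)
```

With weekly case counts per hexagon cell as `x` and contiguity weights as `w`, the same function reproduces the kind of weekly clustering statistic reported above; in practice the authors used Geoda for this.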
Key epidemic control parameters, such as R0, were set differently across model scenarios. The trade-distancing mitigation strategies in our model effectively reduce R0. To model the dynamic process of outbreaks, our ABM model was initialized with collected historical case numbers for the first four weeks and then continued to run for another four weeks to present different adaptation scenarios. The main causes of the COVID-19 outbreak are people's movements for trade and their interactions with each other during trade events. Thus, one of the strategies that can help control the COVID-19 outbreak is retail closures (using the travel-goal switch, as shown in Fig. 4 B) inside the wholesale food markets. The ABM was run in two modes: with retail shops open and with them completely shut down. In the latter case, the number of visitors to the food market dropped by nearly half. The other mitigation measure introduced a buy-local policy that guides consumers to visit nearby markets for their food supply. The results of the simulation indicated that the buy-local option for the Xinfadi market is capable of reducing the number of infected people by 60% each week on average, and by more than 70% in total, from June 21 to July 20, 2020. Overall, the results suggest that trade-related travel is the main factor responsible for the transmission of COVID-19. Thus, the closure of retail outlets as well as the buy-local policy can serve as potential strategies to radically reduce the number of infected people. 5 Discussion Both the Wuhan and Beijing outbreaks were linked to a contaminated wholesale food market selling seafood. At the initial stage of the Wuhan outbreak, most cases had a history of exposure to the Huanan seafood wholesale market. Of the first 41 confirmed cases, 27 (66%) had been exposed to the Huanan seafood market [1,6].
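The two shopping modes compared in the ABM above, everyone converging on one central market versus buying locally in their home cell, can be caricatured in a short stochastic SIR agent model. This is a deliberately simplified sketch, not the authors' NetLogo model: agents, cells, and all parameters are invented, and infection pressure within a cell is reduced to a single per-step probability.

```python
import random

def run_sir_abm(n_agents=300, n_cells=9, steps=60, buy_local=False,
                p_infect=0.35, p_recover=0.1, seed=7):
    """Minimal SIR agent sketch of the retail-closure / buy-local idea:
    susceptible agents either all travel to one central market cell
    (buy_local=False) or shop only in their home cell (buy_local=True).
    Infection can occur only between agents co-located in a cell that
    currently contains at least one infectious agent.
    """
    rng = random.Random(seed)
    home = [rng.randrange(n_cells) for _ in range(n_agents)]
    state = ["S"] * n_agents
    state[0] = "I"  # the index case
    market = 0      # the central wholesale-market cell
    for _ in range(steps):
        loc = home if buy_local else [market] * n_agents  # today's shopping cell
        infected_cells = {loc[i] for i in range(n_agents) if state[i] == "I"}
        for i in range(n_agents):
            if state[i] == "S" and loc[i] in infected_cells and rng.random() < p_infect:
                state[i] = "I"
            elif state[i] == "I" and rng.random() < p_recover:
                state[i] = "R"
    return {s: state.count(s) for s in "SIR"}

central = run_sir_abm(buy_local=False)
local = run_sir_abm(buy_local=True)
print(central, local)  # buy-local confines spread to the index case's cell
```

Even this toy version reproduces the qualitative result reported above: confining shopping trips to local cells leaves far more agents uninfected than routing everyone through a single central market.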
Among the first 425 confirmed cases with onset before January 1, 2020, 55% were linked to the Huanan seafood wholesale market, as compared with 8.6% of the subsequent cases [7], leading to 68,000 cases in total in Hubei province. For the Xinfadi market outbreak, a total of 335 confirmed cases were reported from June 11 to July 8, 2020 [1,5]. Considering its size, trade volume, and density of visitors, the influence of the Xinfadi market was very high: it covers an area of 120,000 m², 21 times larger than the Huanan seafood market in Wuhan. The Xinfadi market has 4500 employees, comprising management personnel and tenants, and approximately 2000 fixed booths. Each day, over 200,000 visitors were estimated to visit the Xinfadi market between May 30 and June 12, 2020, when the first case of COVID-19 was identified and the market was closed to stop the rapid transmission of the virus [15]. The Huanan seafood wholesale market is the largest aquatic product wholesale market in central China, integrating seafood, frozen fresh food, aquatic products, and dry goods, with a size of approximately 50,000 m² [15] and more than 1000 stalls. The susceptible-exposed-infectious-recovered (SEIR) dynamics model analysis suggests that the outbreak probably started between May 22 and May 28, 2020. The cumulative number of COVID-19 cases would have reached 65,090 (95% CI: 39,068–105,037) by July 1, 2020. Since the population size and density in Beijing are much higher than those in Wuhan city, and the Xinfadi wholesale food market is much larger than the Huanan seafood wholesale market, the Xinfadi outbreak could have been much bigger if prevention and control measures had not been immediately and effectively implemented [15,5]. According to current information, the contaminated seafood market was responsible for the re-emergence of the COVID-19 outbreak in Beijing.
The virus was isolated and detected from environmental samples (a chopping board and a floor drain) from the Xinfadi wholesale market [7,16]. The environmental samples from the market also tested positive for SARS-CoV-2, including a cutting board in a booth handling imported salmon [6]. However, how the Xinfadi wholesale market was contaminated by the virus remains unclear. Recently, SARS-CoV-2 has frequently tested positive in imported seafood samples in several cities in China. These facts suggest that SARS-CoV-2 is associated with the food processing and distribution system [7,17]. We report here for the first time that SARS-CoV-2 could be spread and transmitted through modern food distribution networks, as in the Xinfadi outbreak in Beijing. Of a total of 335 confirmed cases reported in this Xinfadi outbreak, 177 (52.8%) were employees of the market, termed sellers, and 83 (24.8%) were customers who shopped there, termed buyers for the purpose of this study. Twenty-six of the 83 (31.3%) infected buyers were responsible for the transmission of 63 new infections (75.9%). Eleven of the 177 (6.2%) infected sellers caused secondary transmissions, comprising four infections in Beijing, one infection in Sichuan, and 21 infections in Hebei province. The highest number of seller transmission hubs was located in the Xinfadi neighborhood, where the sellers worked and lived ( Fig. 2 D). A much larger buyer bounding area ( Fig. 2 C) was observed in the two neighborhoods in the Fengtai and Daxing districts ( Fig. 2 A and B). Our analyses further revealed that the Xinfadi market attracted customers from across a large region, and most trade-related stores were spatially distributed within the Fifth Ring Road and in the south of Beijing ( Fig. 2 C). Moreover, the buyer transmission hubs extended to cover more than three districts ( Fig. 2 D).
It was found that disease transmission was associated with food-trade activities but not with population density in the Xinfadi market outbreak. The Dongcheng and Chaoyang districts have the highest population density; however, it was the Fengtai district that had the highest number of cases. The high-incidence area of COVID-19 made the Xinfadi wholesale food market a hot spot, with further transmission along the urban rapid transit line. The main causes of the Xinfadi market outbreak were the movements of people for the purpose of shopping and their interactions with each other during shopping events. Therefore, one of the strategies that can help control a food-associated outbreak is the closure of retail outlets ( Fig. 4 B) inside the Xinfadi wholesale food market. Our model suggests that if the retail outlets were completely shut down, the number of visitors to the food market would drop by nearly half. When the buy-local policy was implemented, which guides consumers to visit nearby markets for their food supply, the number of infected people could be reduced by 60% each week on average and by more than 70% in total from June 21 to July 20, 2020. According to these results, trade-related travel is the main factor in spreading COVID-19, and retail closures as well as a buy-local policy can serve as important strategies that significantly reduce the number of infected people. The Xinfadi market outbreak of COVID-19 presented many uncertainties when the first few cases were confirmed. When the virus was detected in sealed packages of salmon and other seafood in cold storage, we were prompted to consider the possible link with the Huanan seafood market in the Wuhan outbreak.
It seems that, had fast, science-based, and strict public health actions been implemented at the start of the Wuhan outbreak, the massive public infection might have been prevented, as was observed in the case of the Xinfadi market outbreak. CRediT authorship contribution statement Shan Lu: Data curation, Writing - original draft. Weijia Wang: Software, Visualization. Yanpeng Cheng: Data curation, Investigation. Caixin Yang: Data curation, Investigation. Yifan Jiao: Data curation, Investigation. Mingchao Xu: Data curation, Investigation. Yibo Bai: Data curation, Investigation. Jing Yang: Data curation. Hongbin Song: Writing - original draft. Ligui Wang: Software, Validation. Jiaojiao Wang: Visualization. Bing Rong: Software, Validation. Jianguo Xu: Supervision, Writing - review & editing. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this paper. Acknowledgements OpenStreetMap contributors possess the copyrights for the map data, which is available from https://www.openstreetmap.org. Shan Lu is a fellow of the National Institute for Communicable Disease Control and Prevention, Chinese Center for Disease Control and Prevention. She is currently engaged in the monitoring of infectious diseases and outbreak responses.
|
[
"CHEN",
"ZHU",
"LI",
"SHAIRA",
"MORAN",
"CHEN",
"WEI",
"JALAVA"
] |
4e0ccde04a5d4a4db4785200b06ae36a_A retrospective longitudinal study and comprehensive review of adult patients with glycogen storage _10.1016_j.ymgmr.2021.100821.xml
|
A retrospective longitudinal study and comprehensive review of adult patients with glycogen storage disease type III
|
[
"Hijazi, Ghada",
"Paschall, Anna",
"Young, Sarah P.",
"Smith, Brian",
"Case, Laura E.",
"Boggs, Tracy",
"Amarasekara, Sathya",
"Austin, Stephanie L.",
"Pendyal, Surekha",
"El-Gharbawy, Areeg",
"Deak, Kristen L.",
"Muir, Andrew J.",
"Kishnani, Priya S."
] |
Introduction
A deficiency of glycogen debrancher enzyme in patients with glycogen storage disease type III (GSD III) manifests with hepatic, cardiac, and muscle involvement in the most common subtype (type a), or with only hepatic involvement in patients with GSD IIIb.
Objective and methods
To describe longitudinal biochemical, radiological, muscle strength and ambulation, liver histopathological findings, and clinical outcomes in adults (≥18 years) with glycogen storage disease type III, by a retrospective review of medical records.
Results
Twenty-one adults with GSD IIIa (14 F & 7 M) and four with GSD IIIb (1 F & 3 M) were included in this natural history study. At the most recent visit, the median (range) age and follow-up time were 36 (19–68) and 16 years (0–41), respectively. For the entire cohort: 40% had documented hypoglycemic episodes in adulthood; hepatomegaly and cirrhosis were the most common radiological findings; and 28% developed decompensated liver disease and portal hypertension, the latter being more prevalent in older patients. In the GSD IIIa group, muscle weakness was a major feature, noted in 89% of the GSD IIIa cohort, a third of whom depended on a wheelchair or an assistive walking device. Older individuals tended to show more severe muscle weakness and mobility limitations, compared with younger adults. Asymptomatic left ventricular hypertrophy (LVH) was the most common cardiac manifestation, present in 43%. Symptomatic cardiomyopathy and reduced ejection fraction was evident in 10%. Finally, a urinary biomarker of glycogen storage (Glc4) was significantly associated with AST, ALT and CK.
Conclusion
GSD III is a multisystem disorder in which a multidisciplinary approach with regular clinical, biochemical, radiological and functional (physical therapy assessment) follow-up is required. Despite dietary modification, hepatic and myopathic disease progression is evident in adults, with muscle weakness as the major cause of morbidity. Consequently, definitive therapies that address the underlying cause of the disease to correct both liver and muscle are needed.
|
1 Introduction GSD III (OMIM 232400 ) is caused by a deficiency of the glycogen debrancher enzyme (GDE; OMIM 610860 ), an enzyme with two independent catalytic activities: amylo-1,6-glucosidase (EC 3.2.1.33) and 4-alpha-glucanotransferase (EC 2.4.1.25) [1] . Together with glycogen phosphorylase, GDE degrades glycogen to release glucose and glucose-1-phosphate for use as a source of energy [1] . A deficiency of GDE leads to an accumulation of abnormally structured glycogen, called limit dextrin, which is characterized by short outer chains. This accumulation occurs in different tissues, and especially in the liver, muscle and heart. GSD IIIa is the most common subtype, accounting for 85% of cases, and is characterized by a lack of GDE activity in the liver, muscle and heart. The second subtype, accounting for the remaining 15% of cases, is GSD IIIb, with GDE deficiency confined to the liver [2–5] . The predominant biochemical features of GSD III are hypoglycemia with or without ketosis, hyperlipidemia and elevated liver transaminases [6] . Due to intact gluconeogenesis in these patients, hypoglycemic episodes are usually not as severe as seen in GSD I. Hepatomegaly and growth retardation are present during infancy and early childhood. Hepatic fibrosis can occur at an early age, and is associated with a decrease in liver size and transaminases over time, indicating the progression of liver disease [7] . In addition, hepatic cirrhosis, hepatocellular adenoma and carcinoma are well-recognized late, long-term complications [6,8,9] . In patients with GSD IIIa, the degree of muscle involvement varies significantly [6] . Exercise induced muscle pain and exercise intolerance are common complaints. Proximal and distal muscle weakness, which can be associated with atrophy of the affected muscle groups, is a cause of significant morbidity, limiting movement and daily activities. 
Although myopathy is reported as more prominent in the 3rd-4th decade of life [6], gross motor delay, muscle weakness and hypotonia are now recognized as part of the clinical spectrum in children with GSD IIIa [6,10,11]. Left ventricular hypertrophy is the most common cardiac manifestation of GSD IIIa, and is often asymptomatic. Although it may manifest in the first decade of life, or even in early infancy, it becomes more prevalent at older ages [6,8,12,13,14]. Symptomatic cardiomyopathy accompanied by heart failure has also been reported [14,15,16]; as a result, the most severe phenotype may need heart transplantation [17]. In addition, GSD IIIa patients are at an increased risk of different types of arrhythmia, such as atrial and ventricular fibrillation [15,18,19]. A number of published case reports revealed an increased risk of sudden death in GSD IIIa patients, which can be secondary to either severe cardiomyopathy or arrhythmia [17,19,20]. Diabetes mellitus type 2 and osteopenia/osteoporosis are among the most common endocrine disorders reported in GSD III patients [21–23]. Furthermore, females with GSD III are at an increased risk of polycystic ovary disease. Fertility does not seem to be affected, as there are reports of successful pregnancies in these women [24,25]. A high protein diet (3–4 g/kg/day in children and 20–30% of total calories in adults) with complex carbohydrates (<50% of total calories) is the main treatment for GSD III [6]. Uncooked cornstarch or its extended-release form (Glycosade®) can be used to prolong fasting tolerance with steady levels of blood glucose [6,26]. A high protein diet promotes gluconeogenesis, and improves blood glucose control, growth parameters, and myopathic symptoms in patients [27,28,29]. Furthermore, in patients with cardiomyopathy and myopathy, there may be potential benefits in using medium chain triglycerides and/or ketogenic supplements, with or without a high protein diet [30,31,32,33].
However, despite achieving normoglycemia by dietary therapy, long-term complications still occur. There are limited studies on the long-term outcomes of adults with GSD III [8,34]. The natural history of adults with GSD III documented in the literature is mainly composed of case reports and studies that have emphasized observation of a single body system, or documented findings after a short follow-up period [9,13–20,21,22,24,25]. Herein, we report the clinical manifestations in combination with the biochemical, radiological, muscle strength and ambulation, and liver histopathological findings in 25 adult patients with GSD III, and review the current management guidelines. We characterize the multi-systemic phenotype and disease course over a period of up to 40 years, to understand the disease progression and its implications for current and future therapy development. 2 Methods 2.1 Subjects & study design Twenty-five adults (10 M, 15 F, aged >18 years) with GSD III were included in our longitudinal, retrospective natural history study. A diagnosis of GSD III was confirmed by the presence of two pathogenic variants in the AGL gene and/or a deficiency of GDE on liver or muscle biopsy. All patients were seen at Duke University Health System at least once. The study was approved by the Duke University Institutional Review Board (IRB). Informed consent or a decedent waiver was obtained from all patients (IRB Pro00013699 and Pro00047556). Clinical case descriptions of subjects 13, 22, 24, 25, and 39 were published previously [9,17,35]. 2.2 Data collection We reviewed patient charts from January 1979 to September 2020 for the main findings in the history and physical examination, taking into consideration the chronological order of the clinical visits. 2.2.1 Clinical data General information about sex, age (at diagnosis and at most recent visit), presentation, ethnicity and modality of diagnosis (molecular, liver or muscle biopsy) was collected.
Anthropometric measurements (height and weight) were recorded, and body mass index (BMI) was calculated and tracked over time. Based on the extent of hepatic disease, patients were categorized, with the input of a hepatologist experienced in GSDs (A. M.), into the following three groups: 1) no detection of cirrhosis on imaging and/or liver biopsy, 2) compensated cirrhosis, and 3) decompensated cirrhosis/portal hypertension. The Model for End-Stage Liver Disease-Sodium (MELD-Na) score was calculated for patients with advanced fibrosis/cirrhosis or portal hypertension. The MELD-Na score is used to determine the severity and prognosis of chronic liver disease, as well as to prioritize patients for liver transplantation; it is derived from serum total bilirubin, the international normalized ratio (INR), serum creatinine, and serum sodium. The MELD-Na score ranges from 6 to 40, with higher scores correlating with a higher risk of liver disease-related 3-month mortality [36] . Patterns of proximal, distal, and generalized muscle weakness were identified based on clinical symptoms and examination, including muscle strength testing. In addition, information was collected on muscle pain/cramping and exercise intolerance. Cardiac involvement was monitored, and the presence of related symptoms, such as palpitations, chest pain, shortness of breath, and symptoms secondary to heart failure (e.g., orthopnea), was recorded. Charts were also reviewed for other organ system involvement, including renal (creatinine, BUN, GFR, and urinalysis, for evidence of nephropathy or renal stones), endocrine (prevalence of osteopenia/osteoporosis, type 2 diabetes mellitus, and polycystic ovary disease), neurological (incidence of headaches, migraines, seizures, and results of imaging studies), skin (lipomas), and psychiatric (depression, anxiety, and attention deficit hyperactivity disorder (ADHD)) findings. Finally, developmental, drug, and family history for all patients were recorded.
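The MELD-Na computation described above can be sketched as follows. This is the commonly cited OPTN/UNOS formula, not something stated in this paper: the coefficients, the 1.0 lower bound and 4.0 mg/dL creatinine cap, the MELD > 11 condition for the sodium adjustment, and the 125–137 mmol/L sodium clamp all come from that external formula, and a validated clinical calculator should be used in practice.

```python
import math

def meld_na(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float,
            sodium_mmol_l: float) -> int:
    """MELD-Na score (6-40) from bilirubin, INR, creatinine, and sodium.

    Sketch of the OPTN/UNOS formula (an assumption; not from the paper).
    """
    # Lab values below 1.0 are raised to 1.0; creatinine is capped at 4.0 mg/dL.
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = min(max(creatinine_mg_dl, 1.0), 4.0)
    meld = (3.78 * math.log(bili) + 11.2 * math.log(inr)
            + 9.57 * math.log(crea) + 6.43)
    # The sodium adjustment applies only when MELD > 11;
    # sodium is clamped to the 125-137 mmol/L window.
    if meld > 11:
        na = min(max(sodium_mmol_l, 125.0), 137.0)
        meld += 1.32 * (137 - na) - 0.033 * meld * (137 - na)
    # Final score is rounded and bounded to the 6-40 range cited in the text.
    return min(max(round(meld), 6), 40)
```

For example, entirely normal labs (bilirubin 1.0 mg/dL, INR 1.0, creatinine 1.0 mg/dL, sodium 137 mmol/L) give the floor score of 6, consistent with the 6–40 range cited above.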
2.2.2 Dietary history

Data related to dietary compliance were collected from the dietary records, including the intake of protein, carbohydrate (CHO), and fat, reported as g/kg and/or as a percentage of total energy intake. A high protein diet was defined as daily ingested protein equivalent to 20–30% of total calories, while a low CHO diet was defined as total CHO consumed (from both diet and cornstarch doses) equivalent to <50% of total calories with limited simple sugars, per the published guidelines [6] . In addition, cornstarch dose and frequency, vitamin D, and other supplementations such as multivitamins and Beneprotein® were recorded. Dietary data were reviewed by our metabolic dietitian (S. P.).

2.2.3 Biochemical tests

Trends and correlations of biochemical markers were reviewed and included in the data analysis of this cohort. These included aspartate aminotransferase (AST), alanine aminotransferase (ALT), gamma glutamyl transferase (GGT), urinary glucose tetrasaccharide (Glcα1-6Glcα1-4Glcα1-4Glc, Glc4, also referred to as Hex4), creatine phosphokinase (CPK), lipid profile (triglycerides, total cholesterol, high- and low-density lipoproteins (HDL, LDL)), liver function tests (albumin, prothrombin time (PT)), platelet count, bilirubin, glucose, alpha-fetoprotein (AFP), and carcinoembryonic antigen (CEA) levels. Because of the reduced fasting tolerance in GSD III, lipid profiles were routinely performed after 3–4 h of fasting.

2.2.4 Radiologic studies

Results of abdominal ultrasound (US), computerized tomography (CT), and magnetic resonance imaging (MRI) were recorded longitudinally for changes in liver size and echogenicity, and for evidence of fibrosis, cirrhosis, or hepatocellular adenoma/carcinoma. The imaging modality used differed across patients; contributory factors included variability in insurance coverage and in the availability of techniques at the hospitals performing the imaging.
Hepatomegaly was defined on imaging as a liver size exceeding 16 cm measured at the midclavicular line, as stated by Kratzer et al. [37] . Cardiac findings were described based on transthoracic echocardiography (ECHO) results.

2.2.5 Muscle strength and ambulation

Data on functional mobility, ambulatory status, and use of assistive devices and home modifications were recorded. Manual muscle testing was performed by physical therapists with extensive experience in metabolic and neuromuscular disorders. Muscle strength was measured using a modified Medical Research Council (mMRC) scale, which ranges from 0 (no contraction) to 5 (full strength). Strength testing was performed at the shoulders (flexion, abduction), elbows (flexion, extension), hips (flexion, abduction, adduction, extension), knees (flexion, extension), and ankles (dorsiflexion, plantarflexion). Distal upper extremity strength was assessed by measuring hand grip and lateral pinch strength using Jamar hydraulic dynamometers (Sammons Preston, Bolingbrook, IL, USA) as previously described [38] .

2.2.6 Liver histopathology

In our cohort, liver biopsies were performed when clinically indicated and were reviewed by an experienced pathologist as part of clinical care. The Batts–Ludwig system was used to describe the stages of liver fibrosis/cirrhosis, as described previously [39] : Stage 1 represents portal fibrosis, Stage 2 periportal fibrosis, Stage 3 bridging fibrosis, and Stage 4 cirrhosis.

2.3 Statistical analysis

Continuous variables were described by mean and standard deviation (SD) and by median (range). Categorical data were expressed as proportions. The relationships between continuous measures were examined using generalized estimating equations (GEE) to account for multiple observations per patient. STATA 15.0 (College Station, TX) was used for the analysis.
3 Results

3.1 Participants

In total, 21 adults with GSD IIIa (14 F, 7 M) and 4 adults with GSD IIIb (1 F, 3 M) met the inclusion criteria for our longitudinal study. The median age at the most recent visit was 36.0 years (range: 18.5–67.7). The median follow-up time for 24 patients was 16.4 years (range: 1.9–41.1); one patient (ID 46) had a single clinic visit record. Table 1 describes the characteristics of the 25 patients, who originated from 23 families. Patients ID 13, 47, and 48, and patients ID 18 and 19, were siblings. Nineteen patients (15 GSD IIIa, 4 GSD IIIb) were non-Hispanic Caucasians, three patients were Hispanic Caucasians, one patient was Asian, and one was African American. Ethnicity could not be identified from the records of two patients (ID 46 and ID 49). BMI (kg/m²), categorized using the WHO international guidelines, was reported for the most recent clinic visit. Ten patients (40%) had a normal BMI (18.5–24.9), 5 patients (20%) were overweight (BMI ≥ 25 but <30), and 10 patients (40%) were obese (BMI ≥ 30). The overall mean (SD) of individual patient BMI means was 28.6 ± 7.4 ( n = 127 total BMI measurements). A high protein, low CHO diet was prescribed for all patients; the high protein diet was initiated at a later age in the older patients. Periods of non-compliance with the diet after the age of 18 were noted in ten patients (7 GSD IIIa, 3 GSD IIIb) (ID 8, 11, 13, 21, 24, 25, 29, 33, 38, 39). Thirteen patients (11 GSD IIIa, 1 GSD IIIb) used uncooked cornstarch. Additional cohort characteristics are summarized in Table S1.

3.2 Molecular analysis

Two pathogenic variants in AGL were detected in 22 patients (88%); four patients had homozygous variants, while 18 were compound heterozygous. Deficiency of the GDE enzyme in muscle or liver biopsy was used to confirm the diagnosis in one patient in whom only a heterozygous inactivating variant was identified (ID 49); two other patients did not have genetic testing performed.
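The WHO BMI categorization used above can be expressed as a small helper. The function name is illustrative, and the underweight cutoff (<18.5) is the conventional complement of the cited ranges rather than a category reported in the paper.

```python
def bmi_category(weight_kg: float, height_m: float) -> str:
    """Classify BMI using the WHO adult cutoffs cited in the text."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"   # conventional WHO cutoff, not used in the paper
    if bmi < 25:
        return "normal"        # 18.5-24.9
    if bmi < 30:
        return "overweight"    # >= 25 but < 30
    return "obese"             # >= 30
```

For instance, a 70 kg adult at 1.75 m has a BMI of about 22.9 and falls in the normal range, while 95 kg at the same height (BMI about 31.0) is classified as obese.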
Four novel pathogenic variants were identified (c.1471_1482del (p.Val491_Arg494del), c.4259+5G>A, c.2039G>T, and c.4365del (p.Ile1455fs)). The most commonly observed inactivating variants were nonsense variants (35.6%), followed by splice site intronic, missense, and frameshift deletion or duplication variants (Fig. S1). Table S2 presents the different inactivating variants, their predicted effects, and their locations throughout the AGL gene in 23 patients.

3.3 Genotype-phenotype correlations

All GSD IIIb patients in our cohort had one copy of a GSD IIIb-associated variant, c.16C>T (p.Gln6Ter) or c.18_19del (p.Gln6fs) in exon 2, in combination with a nonsense or missense variant in exon 16, 6, or 20 on the second allele. Severe phenotypes in patients with nonsense variants, and intrafamilial variability, are highlighted in selected cases: i) Homozygosity for c.3965del (p.Val1322fs) in exon 30 was associated with early-onset myopathy and sudden unexpected death at the age of 36 in one GSD IIIa patient (ID 25). ii) Esophageal varices, portal venous thrombosis, leukopenia, and thrombocytopenia, as complications of advanced cirrhosis at the age of 23 years, were observed in a patient (ID 24) who was compound heterozygous for the variants c.118C>T (p.Gln40Ter) and c.2309-1G>A in exon 3 and intron 17. iii) Compound heterozygosity for the variants c.100C>T (p.Arg34Ter) and c.2590C>T (p.Arg864Ter) in exons 3 and 20, respectively, was associated with intrafamilial variability among three siblings. The youngest of these siblings (ID 13) experienced severe cardiomyopathy, liver cirrhosis and portal hypertension, secondary renal failure at the age of 40 years, and progressive severe myopathy. In the two older siblings (ID 47 and 48), cirrhosis was not detected, and mild concentric LVH with normal ejection fraction was described on echocardiography at the ages of 61 and 57 years, respectively.
One had generalized myopathy with full independent mobility; no corresponding information was available for the second sibling. Genetic testing was not done for one of our patients (ID 2), who died suddenly at the age of 20 years.

3.4 Hepatic findings

Episodes of hypoglycemia (symptomatic and asymptomatic) were reported in 10 patients (40%) after the age of 18 years, with a median age of 33.8 years (range 18.6–64.9) ( Table 1 ). The number of episodes was reported in 5 patients, and ranged from 1 to 5 episodes over a period of 6 months to one year. Median blood glucose during episodes was 60 mg/dL (range 40–69 mg/dL) (desirable blood glucose >70 mg/dL). The most commonly reported hypoglycemic symptoms were irritability, jitteriness, sweating, headache, dizziness, and loss of concentration. One patient presented with a hypoglycemic seizure at the age of 21.5 years. Hypoglycemic episodes were triggered by dietary non-compliance and exercise in most patients, and were related to post-operative management in one patient. We assessed the liver imaging findings of 22 patients (18 GSD IIIa, 4 GSD IIIb) in our cohort ( Table 1 ). Hepatomegaly was found in 11 patients (50%), with a median age of 21.9 years (range 18.3–40.3) at the most recent imaging. Hepatic steatosis was detected in 6 patients (27%), of whom 4 were obese and 2 were overweight. Additionally, cirrhosis was detected in 9 patients (40%), with a median age of 40.1 years (range 21.5–54.9) at the time of imaging. Hepatocellular carcinoma (HCC) was diagnosed in 3 patients (14%), at a median age of 65 years (range 54.9–67.0). Two patients with HCC (ID 22 and ID 39) were previously reported in detail [9] . Seven patients (5 GSD IIIa, 2 GSD IIIb) (28%) developed portal hypertension with ascites or hypersplenism, secondary to either cirrhosis alone in 5 of 7 patients (71%) or cirrhosis in combination with HCC in 2 patients (29%).
The median age at diagnosis of decompensated liver disease with portal hypertension (HTN) was 39 years (range 21.5–64.8). MELD-Na scores were calculated for 8 patients with cirrhosis: five had decompensated cirrhosis/portal HTN, while the other three had compensated cirrhosis. The median MELD-Na score was 8.5 (range 7–15). Two patients in our cohort underwent liver transplantation for decompensated liver disease; the first received the transplant at 24 years of age, while the second underwent combined heart-liver-kidney transplantation at the age of 40 years. Both patients have been previously discussed [17,35] . In our cohort, liver tissue was available for review from 5 patients who had liver biopsies and from 2 explanted livers post-transplant, at a median age of 39 years (range 24–67). Changes consistent with fibrosis and/or cirrhosis were seen in six patients: five had cirrhosis (Stage 4 on the Batts–Ludwig score), while one had portal fibrosis (Stage 1). The overall prevalence of cirrhosis detected by imaging and/or biopsy was 44% (8 GSD IIIa, 3 GSD IIIb).

3.5 Muscle strength and ambulation findings

Sixteen of 19 GSD IIIa patients (84%) complained of weakness in different muscle groups, with difficulty in performing functional activities such as buttoning, handwriting, opening jars, picking up objects, pulling out drawers, and walking. Exercise-induced muscle pain, stiffness, and fatigue were reported in 12 of 14 patients (86%) with available data. Eighteen GSD IIIa patients participated in muscle strength assessment, and muscle weakness was found in 16 of them (89%). Generalized muscle weakness, including the proximal and distal muscles of the upper and lower extremities and the small muscles of the hand, was reported in 12 of 16 patients (75%). Four of 16 patients (25%) had weakness involving the lower limbs and the small muscles of the hand while maintaining normal strength in their proximal upper limbs.
Muscle weakness limited walking in 6 of 16 patients (37.5%): three of these patients were wheelchair dependent, while the other three required either assistive devices (cane or walker) or home modifications (ramp) in order to walk safely. Older patients tended to show more severe muscle weakness and walking limitations compared with younger adults ( Table 1 ).

3.6 Cardiac findings

Cardiovascular symptoms were retrieved for 14 GSD IIIa patients; half of these patients (7/14) had symptoms. Spontaneous or drug-induced palpitations (3/14) and chest pain at rest or with exertion, with or without shortness of breath (4/14), were reported. In addition, symptoms secondary to heart failure, such as orthopnea, were described in one patient (ID 13). Echocardiographic results were retrieved for 21 GSD IIIa patients ( Table 1 ). LVH was diagnosed in 9/21 GSD IIIa patients (42.9%): seven patients (77.8%) showed concentric LVH, asymmetric LVH was detected in one patient (11.1%), and one patient showed concentric LVH in combination with asymmetric septal hypertrophy. One of the four patients with GSD IIIb (ID 11) had mild concentric LVH with normal ejection fraction, most likely secondary to longstanding hypertension. Two of 21 GSD IIIa patients (ID 13, 42) (9.5%) presented with symptomatic cardiomyopathy and a reduced ejection fraction, which was treated by heart transplantation at the ages of 27 and 40 years, respectively. The hepatic, muscular, and cardiac findings are summarized in Table 1 .

3.7 Biochemical findings

3.7.1 Liver biochemistry

There were 167 readings for each of AST and ALT. Median values and ranges were 81 U/L (28–598) (reference range 15–41 U/L) for AST and 71 U/L (22–249) (reference range 17–63 U/L) for ALT. Neither AST nor ALT was significantly correlated with age (AST: regression coefficient 0.30, p = 0.55, 95% CI −0.68 to 1.28; ALT: regression coefficient −0.15, p = 0.65, 95% CI −0.82 to 0.51) (Fig.
S2a and S2b, respectively). ALT and AST were significantly correlated with each other (regression coefficient 1.04, p < 0.001, 95% CI 0.84 to 1.24) (Fig. S2c). Patients with cirrhosis had ALT and AST levels that were on average 24.5 and 29.3 IU higher than those of patients without cirrhosis; however, this finding was not statistically significant (p = 0.10 and 0.15, respectively).

3.7.2 Muscle biochemistry in GSD IIIa patients

There were 101 readings for CK in GSD IIIa patients. The median CK value and range were 796 U/L (66–5592) (reference range 55–170 U/L). CK was not significantly correlated with age (regression coefficient −2.3, p = 0.81, 95% CI −21.6 to 16.9) (Fig. S3a). CK was significantly correlated with urinary Glc4 (regression coefficient 0.00, p < 0.001, 95% CI 0.00 to 0.012) (Fig. S3b). There was no statistically significant correlation between CK and AST (regression coefficient 2.02, p = 0.13, 95% CI −0.60 to 4.65) (Fig. S3c). Patients with severe muscle disease had CK levels on average 75.5 IU higher than the rest of the GSD IIIa patients, though this finding was not statistically significant (p = 0.87).

3.7.3 Urinary Glc4

There were 67 readings for our cohort patients (GSD IIIa and b), with a median value of 6 mmol/mol creatinine (range 0.67–49) (reference range: <3). Urinary Glc4 was not significantly correlated with age in this adult cohort (regression coefficient 0.18, p = 0.42, 95% CI −0.27 to 0.64) (Fig. S4a). Urinary Glc4 was significantly correlated with AST (regression coefficient 0.22, p < 0.001, 95% CI 0.15 to 0.29) and ALT (regression coefficient 0.34, p < 0.001, 95% CI 0.22 to 0.47) (Fig. S4b and S4c, respectively), and with CK in GSD IIIa patients (see above).

3.7.4 Lipid profile

Twenty-four patients (20 GSD IIIa, 4 GSD IIIb) had lipid profile data available for assessment.
Hypercholesterolemia (total cholesterol >200 mg/dL) was observed in 11 patients (46%) (9 GSD IIIa, 2 GSD IIIb), with a median total cholesterol of 218 mg/dL (range 200–312). Ten patients (42%) (8 GSD IIIa, 2 GSD IIIb) had elevated triglyceride levels (>150 mg/dL), with a median triglyceride concentration of 190 mg/dL (range 150–372). The total cholesterol and triglyceride median values and ranges for all patients were 185 mg/dL (52–312) (reference range <200) and 138 mg/dL (46–372) (reference range <150), respectively. Median values and ranges for LDL and HDL cholesterol were 113.5 mg/dL (52–297) (reference range <100) and 48.5 mg/dL (22–93) (reference range >60), respectively. There was no statistically significant correlation between age and total cholesterol (regression coefficient 0.01, p = 0.96, 95% CI −0.67 to 0.70), triglycerides (regression coefficient 0.27, p = 0.50, 95% CI −0.52 to 1.07), LDL cholesterol (regression coefficient 0.17, p = 0.64, 95% CI −0.57 to 0.91), or HDL cholesterol (regression coefficient −0.11, p = 0.45, 95% CI −0.42 to 0.18) (Fig. S5a, S5b, S5c, and S5d, respectively). Cirrhotic patients had cholesterol and triglyceride levels that were on average 9.5 and 33.9 IU higher than those of non-cirrhotic patients; however, this finding was not statistically significant (p = 0.31 and 0.07, respectively).

3.7.5 Other biochemical findings

Hyperuricemia was detected in three patients (2 GSD IIIa, 1 GSD IIIb) and was treated with allopurinol. Protein intake ranged from 20 to 25% of total energy intake in these patients. Biochemical findings are summarized in Table S3.

3.8 Osteopenia/osteoporosis

Bone mineral densitometry was performed in 11 patients. Based on the WHO guidelines for the diagnosis of osteopenia and osteoporosis, eight patients (6 GSD IIIa, 2 GSD IIIb) (5 F, 3 M) showed osteopenia and two GSD IIIa patients (1 F, 1 M) had osteoporosis.
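The lipid reference ranges quoted above can be collected into a small helper. The function name and flag labels are illustrative, and treating the boundary values themselves as normal (strict inequalities) is an assumption; the paper states only the ranges.

```python
def flag_lipids(total_chol: float, triglycerides: float,
                ldl: float, hdl: float) -> list[str]:
    """Flag lipid values (mg/dL) against the reference ranges in the text:
    total cholesterol < 200, triglycerides < 150, LDL < 100, HDL > 60."""
    flags = []
    if total_chol > 200:
        flags.append("hypercholesterolemia")
    if triglycerides > 150:
        flags.append("hypertriglyceridemia")
    if ldl > 100:
        flags.append("elevated LDL")
    if hdl < 60:
        flags.append("low HDL")
    return flags
```

Applied to the cohort medians reported above (218, 190, 113.5, and 48.5 mg/dL among the abnormal subgroups), all four flags are raised, while a panel within all four reference ranges returns no flags.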
As obesity can predispose patients to osteopenia/osteoporosis, we examined the BMI in these patients. Of the 10 patients with osteopenia or osteoporosis, 5 were obese, one was overweight, and the 4 others had normal weight. All GSD IIIa patients with reduced bone density had myopathy: seven had generalized myopathy and one had myopathy involving the muscles of the lower limbs and the small muscles of the hand. Wheelchairs or assistive devices such as canes or walkers were used for mobility by two patients with osteoporosis and one patient with osteopenia. Treatment with bisphosphonates was prescribed for the two patients with osteoporosis, as well as for one patient with osteopenia and a history of a rib fracture after a fall. Vitamin D insufficiency (25-hydroxyvitamin D 20 to 29.9 ng/mL) was observed in 10 of 15 patients, while vitamin D deficiency (25-hydroxyvitamin D <20 ng/mL) was diagnosed in 6 of 15 patients at different time points during follow-up.

3.9 Endocrine findings

Data were retrieved on fourteen females (13 GSD IIIa, 1 GSD IIIb). Six GSD IIIa patients complained of menstrual problems such as metrorrhagia, menorrhagia, and irregular cycles. Three GSD IIIa patients (21%) were diagnosed with polycystic ovary syndrome (PCOS) and were treated with birth control medications. Although the main presenting symptom was irregular menstrual cycles, hirsutism was present in one patient and an increased BMI was noticed in two of the three patients. Two patients were diagnosed in their early twenties, and the third patient was 16 years old at the time of diagnosis. Bilateral ovarian cysts on pelvic imaging were seen in one asymptomatic GSD IIIb patient who had no menstrual problems or PCOS. Three patients (2 GSD IIIa, 1 GSD IIIb) (3 M) (ID 22, 38, 47) developed type 2 diabetes mellitus and were managed with diet, insulin, or oral hypoglycemic medications.
A high protein diet was used in the two patients who had GSD IIIa. Cornstarch was not prescribed for the 3 diabetic patients, all of whom were obese (BMI ≥ 30). The age at diagnosis of type 2 diabetes was known for two patients (47 and 61 years).

3.10 Renal

Three patients (ID 11, 13, 38) (1 GSD IIIa, 2 GSD IIIb) presented with recurrent kidney stones, which required either surgical excision or lithotripsy. The stones were secondary to hypocitraturia in one patient (ID 11), to hyperuricemia in another, and of unknown cause in the third. None of our cohort patients had evidence of renal tubular acidosis, proteinuria, or hematuria. One patient (ID 13) had chronic renal failure secondary to heart failure and cardiomyopathy and underwent combined heart-liver-kidney transplantation at the age of 40.3 years (see section 3.4 ).

3.11 Skin

Eight patients (5 GSD IIIa, 3 GSD IIIb) (1 F, 7 M) developed skin lipomas, ranging in number from 4 to over 35. Interestingly, three of these patients (ID 13, 47, 48) were siblings with GSD IIIa. The lipomas were scattered over the upper and lower extremities, abdomen, and back in all patients. They were more common in the older patients, with a median age at diagnosis of 47.3 years (range 35.5–57.4). Surgical excision was performed in all patients, with histopathological confirmation of the diagnosis in five.

3.12 Headache/migraine

Data on the prevalence of headaches and migraines were collected for 20 patients. Five patients (4 GSD IIIa, 1 GSD IIIb) had migraines, while 3 other GSD IIIa patients complained of headaches. Three patients with migraines had brain imaging with normal results. Furthermore, one patient with frequent headaches had a finding of bilateral dense basal ganglia calcifications unrelated to GSD IIIa.
Prophylactic medications were needed by all patients with migraines; one patient had severe daily migraines that required additional therapeutic options, such as Botox injections and cervical epidural injections of Phenergan.

3.13 Psychiatric disturbances

Of the twenty patients whose medical records included data on psychiatric disturbances, seven (5 GSD IIIa, 2 GSD IIIb) (4 F, 3 M) had abnormal findings: three were diagnosed with combined anxiety, depression, and ADHD; one had anxiety and depression; two had depression only; and one had ADHD. A family history of psychiatric illness was found in one patient. All patients with psychiatric problems received medications to control their symptoms.

3.14 Development and education level

Eighteen patients had data on their career and education level. Thirteen patients finished high school, and eleven of them pursued college. Twelve patients were employed in diverse types of jobs requiring different skills, such as communication and executive skills.

3.15 Mortality

Five of the 25 patients, all with GSD IIIa, died during the course of follow-up. The median age at death was 36 years (range 20–68). Death was due to liver disease progression in 3 of the 5 patients: advanced HCC was the cause of death in two (ID 22, 39), while one patient (ID 24) died of disseminated infection after receiving a liver transplant, as described previously in detail [17] . The two other patients died unexpectedly at the ages of 20 and 36 years. The autopsy report of the younger patient (ID 2) showed no clear cause of death, but pointed towards sudden death from a rhythm disturbance induced by marked hypertrophic cardiomyopathy. The autopsy of the older patient (ID 25) showed significant glycogen accumulation in the conduction system of the heart, which can predispose to a fatal arrhythmia.
A full description of this patient, including the autopsy results, was published previously [17] . The clinical, radiological, functional, and liver histopathological findings in our cohort are summarized in Table S4.

4 Discussion

GSD III is a rare disease with limited detailed information in the literature on the clinical, biochemical, radiological, functional, and histopathological aspects of the disease course in adults. Given the recent advances in medical care, the life expectancy of GSD III patients has improved, with new manifestations of the disease appearing in adults. This study describes the comprehensive follow-up of twenty-five adults with GSD III who were monitored for up to 41 years at our center, adding to the growing literature in the field. Of note, complications in the older patients in this cohort represent sequelae of the natural history of the disease and could be related to delayed initiation of current practices, such as starting a high protein diet early in the disease course. Nevertheless, our study highlights the progression and severity of the disease in adults, the clinical features that require close monitoring, and the need for definitive treatments. The major clinical findings are as follows:

4.1 Adult patients with GSD III are at risk of hypoglycemia

In past reports, hypoglycemia was considered a problem primarily of early infancy and childhood [40] . Our study showed that adult patients are also at risk, with 40% of our cohort experiencing episodes of hypoglycemia. Dietary counseling aimed at improving dietary compliance, glucose monitoring, and nutritional needs during and after exercise may help prevent hypoglycemia and improve metabolic liver disease control. Furthermore, special attention is needed to pre-, peri-, and postoperative management, with close monitoring of blood glucose levels, as emphasized previously in the consensus guidelines for individuals with GSD III [6] .
4.2 Cirrhosis and HCC are long-term complications of GSD III

The incidence of cirrhosis and HCC in our study (44% and 14%, respectively) was higher than previously published (hepatic cirrhosis, adenomas, and/or HCC in 11% of patients; Senter et al. [8] ). Hepatic disease in GSD III has been considered to progress slowly from fibrosis to stable cirrhosis. As such, adults with GSD III appear to have quiescent disease from the perspective of liver enzymes and normal synthetic function, and patients often do not qualify for a liver transplant; hence, the significance of liver manifestations may be overlooked. Yet, as shown previously in our report of liver manifestations in a pediatric GSD III population [7] and in a canine model of GSD III [41] , the liver disease can progress to decompensated cirrhosis. Thus, patients with GSD III require routine assessment for the development of cirrhosis and portal hypertension. Furthermore, surveillance imaging for hepatocellular carcinoma is required, as recommended by the consensus guidelines [6] . The potential development of cirrhosis may play an important role in the timing and the inclusion/exclusion criteria of future therapeutic options in these patients. Follow-up for more than 18 years of one of our GSD IIIa patients who underwent combined heart-liver-kidney transplantation showed that liver and heart transplantation corrects the hepatic and cardiac phenotypes, respectively, but does not prevent the progression of muscle disease [42,43] . Liver transplantation can be curative in the GSD IIIb subtype [44] ; nevertheless, it remains problematic for patients with GSD III because of late listing, the paucity of organs, the effects of immunosuppressants on the heart, and, in GSD IIIa, myopathy that continues to progress after liver transplantation.
4.3 Myopathy is a significant cause of morbidity in adults with GSD IIIa

GSD IIIa myopathy appears to progress with age, with onset increasingly reported in childhood [45] . Generalized muscle weakness was the most common form of myopathy in our cohort, observed in a majority of patients. The small muscles of the hands and the proximal and distal lower limb musculature were affected more commonly than the proximal upper limb muscles. Progressive impairments in ambulation were also noted: individuals with profound weakness were unable to walk and required full-time use of wheelchairs to negotiate their home and community environments. Muscle testing by physical therapy (PT) was sensitive in detecting early signs of muscle weakness in our patients; consequently, PT assessment is recommended every 6 months, or even more frequently, as stated in the consensus GSD III guidelines [6] .

4.4 Adults with GSD III are at risk for sudden death

It is well known that accumulation of glycogen in different parts of the conduction system may predispose patients with glycogen storage diseases, including GSD IIIa, to arrhythmia [15] . In accordance with previous papers and case reports [15,17,18,19,20] , 2 patients in our study likely experienced sudden death due to arrhythmias, suggesting an increased risk of arrhythmia in these patients. Thus, regular monitoring by a cardiologist and the use of more detailed screening tests, such as 24-h Holter monitoring in addition to electrocardiography, should be considered in the routine care of patients with GSD III, as stated in the consensus guidelines by Kishnani et al. [6] . With a growing body of evidence of cardiac and muscle involvement in these patients, treatments targeting liver disease alone are not sufficient in GSD IIIa.
4.5 Endocrine disorders (osteopenia, osteoporosis, PCOS, and DM), renal stones, and lipomas are comorbidities that may occur in adults with GSD III

As previously reported by Melis et al. [23] and Cabrera-Abreu et al. [46] , our study shows an increased risk of osteopenia and osteoporosis in these patients. Low bone mineral density (BMD) was more common in GSD IIIa than in GSD IIIb patients in our cohort. Furthermore, all patients with GSD IIIa and osteoporosis/osteopenia showed signs of myopathy. It was previously reported that muscle weakness and decreased mobility in patients with Pompe disease may be predictors of bone mineral density [47] . Similarly, altered bone-muscle interaction (decreased or absent weight bearing and decreased strength of muscle pull on bone), vitamin D insufficiency or deficiency, decreased mobility, and overweight/obesity are all considered factors affecting bone health in neuromuscular diseases [48] . Nevertheless, the pathogenesis of reduced BMD in GSD III patients is currently not well understood. Our cohort demonstrates a higher rate of PCOS (21%) than the estimated prevalence in reproductive-aged women in the United States (6.6%) [49] , although none of our female patients showed radiological features of PCOS before puberty, as described by Lee et al. [24] . Three patients in this cohort had type 2 DM and were on different forms of therapy (oral hypoglycemic agents, insulin, or both) based on their individual glucose levels and insulin needs. There is limited information in the literature regarding an association between GSD III and type 2 DM [21,22,50] . Some studies have shown that liver cirrhosis and chronic liver disease are associated with the development of type 2 DM [51] ; however, other factors may be contributory, such as obesity, altered liver metabolism, and a family history of DM.
Based on the above, we recommend routine BMD measurement, pelvic ultrasound, and glucose tolerance testing when clinically indicated, together with regular monitoring of serum vitamin D concentrations and related metabolites, for the early detection, diagnosis, and treatment of these comorbidities. In addition, regular follow-up by a metabolic dietitian is an essential component of the multidisciplinary care for these patients, as mentioned in the GSD III management guidelines [6]. In agreement with our study findings, kidney disease was absent in a majority of the patients described by Talente et al. [34], but renal stones were observed in a few. Hyperuricemia was also reported in one of their patients, while two other patients received allopurinol for the same reason. Hyperuricemia in GSD III patients may occur secondary to a high-protein diet, and regular monitoring of serum uric acid levels is advised. Furthermore, the accelerated breakdown of muscle purine nucleotides in myopathic disorders, such as GSD IIIa, may also be related to hyperuricemia [52]. To the best of our knowledge, we are the first to report the finding of lipomas in GSD III patients. Yi et al. previously described glycogen accumulation in adipocytes of one GSD IIIa dog and proposed that this finding could be due to an imbalance between glycogen synthesis and breakdown, secondary to GDE deficiency in the adipose tissue of affected dogs [53]. It is also known that glycogen can be converted to fat in adipose tissue and has an important role in regulating glucose and lipid metabolism during the fasted-to-fed transition [54]. Despite these theories, we should keep in mind that lipoma formation can be triggered by various factors, such as obesity and diabetes mellitus. 4.6 Neuropsychiatric problems may occur in adults with GSD III, but further studies are needed Neuropsychiatric and cognitive profiles in GSD IIIa patients were previously described in a small cohort [55]. 
In concordance with Michon et al., four GSD IIIa and two GSD IIIb patients were diagnosed with depression, anxiety, or both. Additionally, a seventh patient was diagnosed with ADHD. Michon et al. also showed impairment of cognitive efficiency, executive functions, and emotional skills, with sparing of memory, in their patients. Formal psychiatric testing was not performed in our cohort; however, a review of our patients' development revealed that more than 50% attended college and pursued various careers that required different abilities, such as communication and executive skills. Further studies with formal neuropsychiatric assessment are needed. 4.7 Biomarker trends in GSD III Routine serum biomarkers of liver and muscle disease may be challenging to interpret in patients with GSD III. Halaby et al. described a decrease in AST and ALT trends over time in pediatric patients with GSD IIIa and IIIb [7]. In our adult cohort, AST and ALT concentrations trended lower than those observed in the pediatric cohort. The finding of lower serum transaminases in the adult cohort is in agreement with trends observed in a canine model of GSD IIIa, in which these enzymes increased during the first 2 to 3 years of life and then gradually decreased [41]. This finding supports the previously proposed conclusion that the progression of liver disease (hepatic fibrosis and cirrhosis) is associated with a decrease in liver aminotransferases. In contrast, the trends of CK in the two cohorts were similar, which suggests ongoing muscle injury in both groups of patients [7]. In pediatric studies, the urinary Glc4 biomarker was positively correlated with liver transaminases, but not CK [7,56]. Furthermore, Heiner-Fokkema et al. described an association between urinary Glc4 and serum CK levels, and clinical signs of myopathy, in nine adult patients [56]. 
These observations were explained by the possibility that urinary Glc4 in pediatric patients largely reflects glycogen accumulation in the liver. In adult patients, an ongoing increase in urinary Glc4 excretion is most likely related to progressive muscle disease as well as ongoing liver disease. This is supported by the correlation between urinary Glc4 and ALT, AST, and CK levels, as noted above. The prevalence of hypercholesterolemia and hypertriglyceridemia in our patients was 46% and 42%, respectively. In most patients, the findings of hypercholesterolemia and hypertriglyceridemia were not constant, which may reflect non-compliance with diet and poor metabolic control. The evidence in the literature regarding the clinical significance of hyperlipidemia in GSD III patients remains controversial. For instance, normal lipid profiles and vascular endothelial function (assessed by brachial artery responsivity) described by Hershkovitz et al. in a small group of patients with GSD III suggested that there is no association of GSD III with hyperlipidemia or with a functional measure of vascular reactivity [57]. Alternatively, hyperlipidemia was implicated in a case report of a 24-year-old male with GSD IIIb. This patient had a history of persistently elevated lipids and presented with cardiac arrest secondary to ventricular fibrillation. He was found to have an 80% mid-left anterior descending artery (LAD) stenosis without occlusion on coronary angiography [58]. We are not aware of any patient in our cohort who developed complications secondary to hyperlipidemia. However, since many adult patients with GSD III are lost to follow-up, further studies are needed to explore the clinical impact of hyperlipidemia in patients with GSD III. Finally, we explored genotype-phenotype correlation. 
As far as we know, the two nonsense mutations (c.100C>T (p.Arg34Ter) in exon 4 and c.2590C>T (p.Arg864Ter) in exon 20) were each described separately, in homozygosity, with variable clinical phenotypes [59,60,61]. In our cohort, the combination of these two mutations was associated with a severe phenotype (hepatic cirrhosis, symptomatic cardiomyopathy, severe myopathy) in one patient, and with a less severe phenotype in the other two younger siblings (no cirrhosis, asymptomatic concentric LVH, myopathy with full independent mobility). This observation supports the phenotypic intrafamilial heterogeneity of GSD III. Other factors may contribute to the genotype-phenotype correlation but have not been studied in these patients. While the results from our study show some similarities with previously published data [8,23,34], there are several differences, including aspects of the disease not previously studied. These differences may be caused by variability between the cohorts studied, such as age range, genetic background, dietary management, or other environmental factors. Furthermore, the longitudinal data captured in our study facilitate a better understanding of the long-term complications that occur in GSD III, drawing attention to the course and severity of disease progression among adult patients. This is in contrast to cross-sectional studies that lack longitudinal assessments [8]. The main limitations of this study are the small cohort size, the variability in the length of time that patients were followed, the variability in the age at which a diagnosis of GSD III was made, and delays in treatment initiation, especially for the oldest patients in this cohort. Data were also missing for some of the patients in the study, including incomplete information on dietary therapy and compliance. 
Although the frequency of the reported complications may be affected by these limitations, as well as by a bias for more severely affected patients to be seen at our center, we do not believe that they significantly affect the overall results and recommendations. As mentioned earlier, older patients in the study either were diagnosed at a later age or were not treated early in the course of the disease; thus, they may show more severe complications, probably representing the natural history of the disease. 5 Conclusion GSD III is a multisystemic disorder in which a multi-disciplinary approach with regular clinical, biochemical, radiological, and physical therapy follow-up (including strength and functional motor testing) is required. Early recognition of complications of the disease with close monitoring may improve outcomes and quality of life. Although liver disease (fibrosis/cirrhosis) in these patients is considered a significant cause of morbidity and mortality, myopathy is also a major problem in adults and should not be overlooked. Interventions such as physical therapy and diet modification may delay the progression of muscle weakness, but these interventions are not curative. A growing body of evidence suggests an increased risk of sudden death in adults with GSD III, and close monitoring by a cardiologist is needed. Urinary Glc4 is a promising biomarker in GSD III, as it correlates with serum transaminase and CK levels in adult patients, and merits further study. Despite dietary modification, complications can still occur. More definitive therapies, such as gene therapy and small molecule therapies that address both the liver and muscle aspects of the disease, are needed for patients with GSD III. Acknowledgements The authors would like to express their gratitude to the patient participants who volunteered for the Duke GSD III Natural History study. Without them, this study would not be possible. 
The authors would also like to thank the study sponsors: Valerion Therapeutics, LLC, for partial support of the study; the Workman family for partial funding of the study; and Ultragenyx Pharmaceutical for partial grant support for the preparation of this manuscript. Research reported in this publication was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number TL1 TR002555. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.ymgmr.2021.100821 .
|
[
"Yang-Feng",
"Bao",
"Van Hoof",
"Ding",
"Aoyama",
"Kishnani",
"Halaby",
"Sentner",
"Demo",
"Young",
"Mogahed",
"Vertilus",
"Carvalho",
"Moses",
"Chong-Nguyen",
"Olson",
"Austin",
"Akazawa",
"Tada",
"Miller",
"Oki",
"Ismail",
"Melis",
"Lee",
"Ramachandran",
"Bhattacharya",
"Dagli",
"Slonim",
"Sentner",
"Francini-Pesenti",
"Olgac",
"Valayannopoulos",
"Rossi",
"Talente",
"Cochrane",
"Biggins",
"Kratzer",
"Khan",
"Batts",
"Dagli",
"Brooks",
"Davis",
"Zobeiri",
"Haagsma",
"Paschall",
"Cabrera-Abreu",
"Van den Berg",
"Iolascon",
"Azziz",
"Vantyghem",
"Lee",
"Mineo",
"Yi",
"Markan",
"Michon",
"Heiner-Fokkema",
"Hershkovitz",
"LaBarbera",
"Lucchiari",
"Lucchiari",
"Okubo"
] |
dfab30e3309740988b50f0f6993783f8_Metabolic changes and stress damage induced by ammonia exposure in juvenile Eriocheir sinensis_10.1016_j.ecoenv.2021.112608.xml
|
Metabolic changes and stress damage induced by ammonia exposure in juvenile Eriocheir sinensis
|
[
"Wang, Tianyu",
"Yang, Chen",
"Zhang, Shuang",
"Rong, Liyan",
"Yang, Xiaofei",
"Wu, Zhaoxia",
"Sun, Wentao"
] |
The application of nitrogen fertilizers in the rice-crab co-culture system may expose juvenile Eriocheir sinensis to high ammonia concentrations within a short period of time, potentially causing death. Currently, the molecular mechanism underlying ammonia toxicity in juvenile Eriocheir sinensis remains poorly understood. This study compared the effects of 24 h exposure to different total ammonia-N concentrations (0, 10.47, and 41.87 mg/L) on antioxidant enzyme activities and tandem mass tag (TMT)-based proteomics in the hepatopancreas of juvenile Eriocheir sinensis. During the experiment, water temperature and pH were maintained at 20.4 ± 1.4 °C and 7.69 ± 0.46, respectively. Proteomic data demonstrated that Eriocheir sinensis used different metabolic regulatory mechanisms to adapt to varying ammonia conditions. The tricarboxylic acid (TCA) cycle, glycogen degradation, and oxidative phosphorylation showed marginally upregulated trends under low ammonia exposure. High ammonia stress caused downregulation of the TCA cycle and provided energy by enhancing oxidative phosphorylation, fatty acid beta oxidation, gluconeogenesis, and glycogen degradation. The detoxification of ammonia into urea and glutamine was suppressed under high ammonia stress. Finally, ammonia exposure induced oxidative stress and caused protein damage. Antioxidant enzyme activity analysis further revealed that exposure to high concentrations of ammonia may induce more severe oxidative stress. This study provides a global perspective on the mechanisms underlying ammonia exposure-induced metabolic changes and stress damage in juvenile Eriocheir sinensis.
|
1 Introduction The Chinese mitten crab, Eriocheir sinensis, is high in nutritional value and possesses a distinctive flavor; as such, it is one of the most economically important aquatic species and foods in China (Chen and Zhang, 2007). The Liaohe River basin in northeast China is rich in Chinese mitten crabs and is one of the three major crab-producing areas in China (Xu et al., 2019). Over the last century, rice-crab (E. sinensis) co-culture has gradually developed into an important ecological agricultural model in the Liaohe River basin. Rice-crab co-culture organically combines planting and aquaculture, generating mutual benefits between rice and crabs: the foraging activities of crabs improve soil health and promote rice growth (Yan et al., 2014), while rice provides a good habitat and diversified bait for crabs. Nitrogen is an essential nutrient for rice cultivation, and the application of nitrogen fertilizers may improve the yield and quality of rice (Tang et al., 2019). Juvenile crabs are generally placed in rice fields from the end of May to the beginning of June, which is also a critical period for fertilization. Nitrogen fertilizer quickly transforms into ammonium and nitrate (NH4+-N and NO3-N) after application to the rice field. In our previous study, we measured the trend in ionic ammonia (NH4+) concentration in the water of a rice-crab co-culture system under different nitrogen fertilizer ratios. The results showed that the NH4+ concentration rapidly reached its maximum three days after fertilization and then decreased and stabilized. The NH4+ concentration ranged from 0.3 mg/L to 7.6 mg/L, and when it was maintained above 6 mg/L, it was likely to affect crab growth (Fan et al., 2010). Therefore, inappropriate amounts of nitrogen fertilizer may cause ammonia stress in juvenile crabs within a short period of time. 
Ammonia is one of the most common pollutants that can induce stress responses in animals (Wicks and Randall, 2002). Many studies have shown that excessive ammonia adversely affects animals: it can cause tissue damage (Weihrauch et al., 2009; Jing et al., 2020), perturb the antioxidant system (Jia et al., 2017; Wang et al., 2019b), alter energy metabolism pathways (Wang et al., 2019a; Tang et al., 2020), and induce endoplasmic reticulum stress and apoptosis (Liang et al., 2016; Wang et al., 2020). In addition, the effects of acute ammonia stress on aquatic biota vary among tissues. Compared with other tissues, the hepatopancreas of crustaceans is highly sensitive to ammonia stress (Chen and Chen, 2000). The hepatopancreas plays an important role in metabolism, digestion, and detoxification (Wang et al., 2014; Yu et al., 2019). As such, understanding the response mechanisms of the hepatopancreas of juvenile E. sinensis to ammonia stress may make it possible to reduce mortality and the prevalence of infectious diseases in this species. Proteomic analysis, based on high-throughput biotechnologies, has been widely used to identify key proteins that play important roles in physiological and metabolic processes under different conditions or stresses within a particular tissue or organism. Proteomic analysis indicated that Paralichthys olivaceus responds to Cd stress by regulating the morphology, energy metabolism, stress resistance, and apoptosis of gill mitochondria (Lu et al., 2020). Tandem mass tag (TMT) analysis has shown that Enterocytozoon hepatopenaei infection can trigger energy metabolism disorders in shrimp (Ning et al., 2019). Using proteomic analysis, many proteins related to immune defense, protein synthesis and transport, and stress tolerance were identified as being involved in the defense of Litopenaeus vannamei against acute ammonia toxicity (Lu et al., 2018). 
Results from proteomic analysis may therefore help us better understand the response mechanisms of aquatic biota to stress. TMT-based proteomics is characterized by high reproducibility, sensitivity, and sample multiplexing capability; it allows peptides from different samples to be quantified by relative abundance with greater ease and accuracy than other methods (Andrew et al., 2003). In this study, we first estimated acute ammonia toxicity in juvenile E. sinensis by measuring the 96-h median lethal concentration (LC50). Juvenile E. sinensis were then exposed for 24 h to a low ammonia concentration (10.47 mg/L, close to the actual concentration of ammonia in the water of the rice-crab co-culture system after nitrogen fertilizer application) or a high concentration (41.87 mg/L, close to 1/2 of the LC50 value of ammonia), after which TMT proteomics and antioxidant enzyme activity analyses were applied to investigate key proteins and pathways responsible for the molecular responses to ammonia stress. This study aimed to elucidate the detoxification mechanisms of juvenile E. sinensis during ammonia exposure. Additionally, it provides a theoretical basis to assist farmers in improving crab production and profitability while ensuring environmental sustainability. 2 Materials and methods 2.1 Crabs and acclimation Juvenile Chinese mitten crabs were obtained from a local farm in Panjin, Liaoning Province, China, and acclimated for 1 week. During the acclimation period, all crabs were cultured in aerated municipal water under a 12 h light and 12 h dark cycle. Crabs were fed pellet feed at around 2% of total crab biomass twice a day, and half of the water in the tanks was replaced with clean freshwater each day. Following acclimation, crabs (body length, 24.5 ± 6.3 mm; wet mass, 8.19 ± 0.78 g) were randomly divided into acute toxicity test groups. 
2.2 Sublethal effect of ammonia Static bioassay tests were performed to evaluate ammonia toxicity in juvenile E. sinensis following the protocol described by Hong et al. (2007), with slight modifications. A stock solution of ammonia (10 g/L) was prepared with ammonium chloride (NH4Cl) and subsequently diluted to the desired ammonia concentrations. The ammonia concentrations described in this study refer to total ammonia-nitrogen (NH4+ + NH3). The concentration of non-ionized ammonia (NH3) was calculated according to the equation derived by Emerson et al. (1975): [NH3] = [NH4+ + NH3]/(10^(pKa−pH) + 1), where pKa = 0.09018 + 2729.92/T (T is the temperature in Kelvin, T = 273 + t °C). After pre-test trials, the 96-h LC50 value was determined using five ammonia concentrations of 80.00, 100.59, 126.49, 159.05, and 200.00 mg/L (or 2.35, 2.95, 3.71, 4.67, and 5.87 mg/L as NH3, respectively), designed with equal logarithmic intervals. There were three tanks in each treatment group and 10 individuals in each tank. During the experiment, water temperature and pH were maintained at 22.8 ± 0.2 °C and 7.74 ± 0.13, respectively. Crab mortality was recorded at 24, 48, 72, and 96 h after ammonia treatment, and dead crabs were promptly removed from the tanks. The 96-h LC50 value and 95% confidence intervals (C.I.) for ammonia were calculated using the probit analysis method (SPSS 22, Inc., Chicago, IL, USA), as described by Reish and Oshida (1986). 2.3 Ammonia exposure According to the changes in ionized ammonia concentration in the water of the rice-crab co-culture system under different fertilization modes measured by Fan et al. (2010), and the 96-h LC50 of ammonia for juvenile E. sinensis, two ammonia concentrations were set: 10.47 mg/L (low-concentration ammonia-N treatment, LT) and 41.87 mg/L (high-concentration ammonia-N treatment, HT). 
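The Emerson et al. (1975) relationship described above can be applied directly. The following minimal Python sketch (ours, not part of the original study's software) computes the un-ionized NH3 fraction from pH and temperature; with the study's exposure conditions (pH 7.69, 20.4 °C) it reproduces the NH3 values reported for the treatment groups.

```python
def nh3_fraction(ph: float, temp_c: float) -> float:
    """Fraction of total ammonia-N present as un-ionized NH3
    (Emerson et al., 1975): pKa = 0.09018 + 2729.92/T, T in Kelvin."""
    pka = 0.09018 + 2729.92 / (273.0 + temp_c)
    return 1.0 / (10.0 ** (pka - ph) + 1.0)

def unionized_ammonia(tan_mg_l: float, ph: float, temp_c: float) -> float:
    """Convert a total ammonia-N (TAN) concentration (mg/L) to mg/L of NH3."""
    return tan_mg_l * nh3_fraction(ph, temp_c)

# HT group: 43.84 mg/L TAN at pH 7.69 and 20.4 degC
# gives ~0.85 mg/L NH3, matching the reported 0.85 +/- 0.01 mg/L.
ht_nh3 = unionized_ammonia(43.84, 7.69, 20.4)
```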
The control (CK) group was treated with municipal water only, and the treatment groups were treated with NH4Cl solution until the tanks reached the pre-determined ammonia concentrations. Ninety crabs were randomly allocated to the CK and ammonia treatment groups. Each group consisted of three replicate tanks, each containing 10 individual crabs. During the experiment, the water was replaced every 12 h with fresh water containing the same ammonia concentration. Water temperature and pH were maintained at 20.4 ± 1.4 °C and 7.69 ± 0.46, respectively. The actual ammonia concentrations, determined every 6 h using the hypobromite oxidation method (GB 17378.4, 2007), were 0.12 ± 0.01, 13.29 ± 0.62, and 43.84 ± 0.64 mg/L in the CK, LT, and HT groups (or 0, 0.26 ± 0.01, and 0.85 ± 0.01 mg/L as NH3, respectively). After the crabs had been in their respective treatment tanks for 24 h, the hepatopancreases of all individuals from each treatment were separately dissected, frozen in liquid nitrogen, and stored at −80 °C for further analysis. 2.4 TMT proteomics analysis 2.4.1 Protein extraction and TMT labeling After grinding with liquid nitrogen, 200 mg of each tissue sample was dissolved in lysis buffer (4% sodium dodecyl sulfate, 100 mM Tris-HCl, 1 mM dithiothreitol, pH 7.6) and sonicated three times on ice using an ultrasonic cell disruptor (Biosafer, Nanjing, China). The supernatant was collected by centrifugation at 13,000 × g for 20 min, and the protein concentration was quantified using a Pierce™ bicinchoninic acid protein assay kit (Thermo Fisher Scientific Inc., USA). Protein digestion was performed using the filter-aided sample preparation (FASP) method (Wiśniewski et al., 2009). Briefly, 300 μg of protein from each sample was dissolved in 200 μL UA buffer (8 M urea, 150 mM Tris-HCl, pH 8.0) and centrifuged at 14,000 × g for 30 min. The supernatants were alkylated with 100 μL of 50 mM iodoacetamide for 30 min in the dark. 
The filtrates were washed three times with 100 μL UA buffer and 100 μL of 100 mM DS buffer. Finally, the proteins were digested with 52 μL trypsin buffer (6 μg trypsin in 40 μL of 100 mM DS buffer) at 37 °C for 18 h, and the filtrates were collected to quantify the peptide content based on optical density at 280 nm. After digestion, 100 μg of each sample was labeled using the TMT10plex™ Isobaric Mass Tagging Kit (Thermo Scientific, USA) according to the manufacturer's protocol; ten samples (three biological replicates for each treatment group and the internal standard) were labeled with TMT tags, multiplexed, and vacuum dried. TMT-labeled samples were then pooled and fractionated on a Dionex UltiMate 3000 high-performance liquid chromatography (HPLC) system using a Gemini NX-C18 column (Phenomenex, 00F-4453-E0) (Batth et al., 2014). A total of 40 fractions were collected along the gradient, compiled into 10 pools, and prepared for liquid chromatography-mass spectrometry (LC-MS) analysis. 2.4.2 Liquid chromatography with tandem mass spectrometry analysis Fractions were dissolved in 0.1% formic acid (solvent A) and directly loaded onto a Thermo Scientific analytical column (75 µm × 25 cm, 5 µm, 100 Å, C18). The gradient increased solvent B (0.1% formic acid in 100% acetonitrile) from 5% to 28% in 40 min and from 28% to 90% in 2 min, and then was maintained at 90% for the final 18 min. The peptides were subjected to a nano-spray-ionization (NSI) source followed by tandem mass spectrometry (MS/MS) in an Orbitrap Elite™ (Thermo Scientific, USA) coupled online to the UPLC for 60 min. The parameters for the full MS scan were: positive detection mode; parent ion scanning range of m/z 350–2000; first-order MS resolution of 60,000 at mass/charge (m/z) 200; automatic gain control (AGC) target of 1e6; first-level maximum IT of 10 ms; 1 scan range; and a dynamic exclusion of 30.0 s. 
Peptide secondary MS spectra were obtained by sequentially selecting target peptides of precursor m/z for each full scan, based on the inclusion list for the second-order MS (MS2) scan. The parameters in MS2 were as follows: resolution of 15,000 at m/z 100; 1 microscan; AGC target of 5e4; level 2 maximum IT of 100 ms; underfill ratio of 0.1%; and a normalized collision energy of 35 eV. 2.4.3 Database search The resulting MS/MS data were processed using Proteome Discoverer 2.1 (Thermo Scientific, USA). Tandem mass spectra were searched against a protein database constructed from the deduced peptide sequences of the E. sinensis transcriptome. Trypsin/P was specified as the cleavage enzyme, allowing up to two missed cleavages. The mass error was set to 20 ppm for precursor ions and 0.1 Da for fragment ions. Carbamidomethylation of Cys was specified as a fixed modification, and oxidation of Met was specified as a variable modification. For protein quantification, TMT-10-plex was selected in Mascot, and the false discovery rate (FDR) was adjusted to <0.01% (Sandberg et al., 2012). Only proteins with a fold change higher than the cutoff of 1.2 and a p-value <0.05 were considered differentially expressed proteins (DEPs). 2.4.4 Bioinformatics analysis Proteins were classified by gene ontology (GO) annotation into three categories: biological process, cellular component, and molecular function. For each category, a two-tailed Fisher's exact test was used to test the enrichment of DEPs against all identified proteins. GO terms with a corrected p-value < 0.05 were considered significant. GO classification, annotation, and enrichment analysis were derived from the DAVID 6.7 and QuickGO databases (Binns et al., 2009; Huang et al., 2009). 
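The two-tailed Fisher's exact test used for enrichment reduces to a hypergeometric calculation on a 2 × 2 table (DEP vs. non-DEP, inside vs. outside a category). A minimal pure-Python sketch of that test follows; the category counts in the example are hypothetical and for illustration only (the background of 319 DEPs among 2358 quantified proteins is taken from the Results).

```python
from math import comb

def fisher_exact_two_tailed(k: int, n: int, K: int, N: int) -> float:
    """Two-sided Fisher's exact p-value for observing k DEPs among the
    n proteins annotated to a category, given K DEPs among all N
    identified proteins (hypergeometric null)."""
    def pmf(x: int) -> float:
        return comb(K, x) * comb(N - K, n - x) / comb(N, n)
    p_obs = pmf(k)
    lo, hi = max(0, n - (N - K)), min(n, K)
    # Two-sided: sum the probabilities of all tables at least as extreme
    # (i.e., at most as probable) as the observed one.
    return sum(pmf(x) for x in range(lo, hi + 1)
               if pmf(x) <= p_obs * (1 + 1e-9))

# Hypothetical category: 12 of its 30 annotated proteins are DEPs,
# against a background of 319 DEPs among 2358 quantified proteins.
p = fisher_exact_two_tailed(12, 30, 319, 2358)
```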
The Kyoto Encyclopedia of Genes and Genomes (KEGG) database was used to identify enriched pathways, again using a two-tailed Fisher's exact test to test the enrichment of DEPs against all identified proteins (Kanehisa and Goto, 2000). Pathways with a corrected p-value < 0.05 were considered significant. 2.5 Determination of antioxidant enzyme activities Superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GSH-Px) activities were measured using the appropriate detection kits (A001–3–2, A007–2–1, and A005–1–2; Nanjing Jiancheng Bioengineering Institute, Nanjing, China). The hepatopancreas samples were homogenized in nine volumes of ice-cold normal saline (1:9, w/v) and centrifuged at 3000 rpm for 15 min at 4 °C. Enzymatic activity in the supernatant was measured according to the manufacturer's instructions. 2.6 Protein validation using parallel reaction monitoring (PRM) PRM was used to verify the TMT-based quantitative proteomics results (Peterson et al., 2012). Briefly, 2 µg of peptide from each sample was used for LC-PRM/MS analysis. After sample loading, chromatographic separation was conducted using a Thermo Scientific EASY-nLC nano-HPLC system with two buffers: solution A was a 0.1% formic acid aqueous solution, and solution B was a mixture of 0.1% formic acid, 84% acetonitrile, and water; the column was equilibrated with 95% solution A. The sample was injected into a trap column (home-made, 100 µm × 2 mm, 5 µm C18) and subjected to gradient separation through an analytical column (Thermo Scientific EASY column, 75 µm × 12 mm, 1.9 µm C18) at a flow rate of 250 nL/min. The liquid phase separation gradient was: 0–42 min, linear increase of solvent B from 5% to 23%; 42–50 min, linear increase from 23% to 40%; and 52–60 min, increase to and hold at 100%. 
The peptides were separated and subjected to targeted PRM/MS using a Q-Exactive mass spectrometer (Thermo Scientific, USA) for 60 min. The parameters for PRM/MS were: positive detection mode; parent ion scanning range of m/z 300–1800; first-order MS resolution of 60,000 at m/z 200; AGC target of 3e6; and a first-level maximum IT of 200 ms. Peptide secondary MS spectra were obtained by sequentially selecting target peptides of precursor m/z for each full scan, based on the inclusion list for the second-order MS (MS2) scan. The parameters used were: resolution of 30,000 at m/z 200; AGC target of 3e6; level 2 maximum IT of 120 ms; HCD MS2 activation; isolation window of 1.6 Th; and a normalized collision energy of 27 eV. Five proteins were randomly selected from the global proteomic analysis: protein disulfide-isomerase A3 (ERp60), cytochrome P450 6k1 (Cyp6a2), heat shock cognate 70 kDa protein (Hsc70–4), protein spaetzle 5 (Spz), and hemocyanin A chain (PPO1). Skyline 3.5 (Skyline Software Systems, Inc., USA) was used to generate an initial PRM transition pair list for the five candidate DEPs. 2.7 Quantitative real-time polymerase chain reaction analysis (qRT-PCR) To detect changes in mRNA levels, nine DEPs related to metabolism and stress damage were further evaluated by qRT-PCR using an ABI 7500 real-time system (Applied Biosystems, Foster City, CA, USA). Primers were designed from the Illumina sequencing data using NCBI Primer-BLAST (Table S1). β-actin was used as an internal control. Quantitative real-time PCR was performed in a total volume of 20 μL, containing 10 μL of 2 × SYBR real-time PCR premixture, 0.4 μL of each primer (10 μM), 1 μL of cDNA, and RNase-free dH2O to a final volume of 20 μL. The cycling parameters were: 95 °C for 5 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 30 s. Finally, a melting curve analysis was conducted to verify the specificity of the amplified product. 
All samples were analyzed in triplicate. Gene expression data were analyzed using the comparative CT (2^(−ΔΔCT)) method (Livak and Schmittgen, 2001). 2.8 Statistical analyses All data are presented as the mean ± standard deviation of three replicates. A multiple comparison (Duncan) test was conducted to compare significant differences among treatments using SPSS 22 (SPSS Inc., Chicago, IL, USA). A p-value < 0.05 was considered statistically significant. 3 Results 3.1 LC50 value The 96-h LC50 value of ammonia for juvenile E. sinensis was 90.92 mg/L (C.I. 87.17–94.27 mg/L), or 2.32 mg/L (C.I. 2.23–2.41 mg/L) as NH3, at pH 7.74 and a water temperature of 22.8 °C (Table S2). 3.2 Proteomic alterations affected by ammonia exposure 3.2.1 Protein profiling All spectra obtained through tandem mass spectrometry were processed using Mascot software (Matrix Science, UK); a total of 152,415 spectra were detected, including 18,934 unique spectra (Table S3). A total of 2427 proteins were identified, of which 2358 contained quantitative information. Based on a benchmark of a 1.2-fold increase or a 0.83-fold decrease in protein expression to denote physiologically significant changes, 37 (75) and 156 (163) significantly upregulated (downregulated) proteins were identified in the LT and HT groups, respectively, compared to the CK group (Fig. 1). Among them, 31 DEPs were identified in both the CK vs. LT and CK vs. HT comparisons, representing proteins that responded to ammonia stress at both concentrations. Fig. 2 presents hierarchical clustering heatmaps of the DEPs, comparing the global hepatopancreas proteomes of the LT, HT, and CK groups. Cluster analysis also showed that similar samples were close in distance and preferentially sorted together. 
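The DEP criteria described above (≥1.2-fold increase or ≤0.83-fold decrease, p < 0.05) amount to a simple filter over per-protein fold changes and p-values. The sketch below is an illustrative Python implementation, not the study's actual pipeline; the example protein names and values are made up for demonstration.

```python
def classify_deps(proteins, up_cut=1.2, down_cut=0.83, alpha=0.05):
    """proteins: iterable of (name, fold_change, p_value) tuples,
    where fold_change is treatment/control.
    Returns (upregulated, downregulated) lists of protein names."""
    up = [name for name, fc, p in proteins if fc >= up_cut and p < alpha]
    down = [name for name, fc, p in proteins if fc <= down_cut and p < alpha]
    return up, down

# Hypothetical HT vs. CK ratios (illustrative values only):
example = [
    ("SdhB",    1.35, 0.010),  # passes the upregulation cutoff
    ("GS",      0.70, 0.004),  # passes the downregulation cutoff
    ("Hsc70-4", 0.90, 0.030),  # fold change inside the 0.83-1.2 window
    ("PPO1",    1.50, 0.200),  # fold change passes but not significant
]
up, down = classify_deps(example)
```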
3.2.2 GO and KEGG pathway enrichment analysis of DEPs

GO enrichment analysis was conducted to investigate the functions of the DEPs in the two treatment groups compared to the control group ( Fig. 3 ). For the CK vs. LT comparison, the biological process analysis showed that components involved in the organonitrogen compound biosynthetic process, cytoplasmic translation, and the spermidine metabolic process were significantly enriched. From a molecular function perspective, monophenol monooxygenase activity, catechol oxidase activity, and L-DOPA monooxygenase activity were significantly enriched. Among cellular components, the box C/D snoRNP complex, cytosol, and nucleolus were all significantly enriched. For the CK vs. HT comparison, cytoplasmic translation, the oxidation-reduction process, and protein folding were all significantly enriched in the biological process analysis. GO analysis divided the cellular components into 461 constituents, among which the cytoplasm, cytoplasmic part, and intracellular part were all significantly enriched ( p < 0.05). The major molecular functions of oxidoreductase activity, structural constituents of ribosomes, and oxidoreductase activity acting on nicotinamide adenine dinucleotide phosphate (NAD(P)H) were significantly enriched. KEGG enrichment was used to analyze the DEPs in the hepatopancreas of crabs under ammonia stress ( Fig. S1 ). Only the messenger ribonucleic acid (mRNA) surveillance pathway was significantly enriched in the CK vs. LT comparison ( Fig. S1A ); however, five significantly affected pathways were identified in the CK vs. HT comparison ( Fig. S1B ): oxidative phosphorylation; arginine biosynthesis; alanine, aspartate, and glutamate metabolism; endocytosis; and metabolic pathways.

3.2.3 TMT quantification

Table 1 lists representative DEPs; a greater focus was placed on proteins reported to play a role in amino acid metabolism, energy metabolism, and stress damage in crabs.
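Enrichment calls like those above typically rest on a one-sided hypergeometric (Fisher) test of DEP counts against the quantified background. A minimal stdlib sketch with hypothetical counts (not the study's actual per-term numbers):

```python
from math import comb

def go_enrichment_p(k, n_deps, n_term, n_background):
    """One-sided hypergeometric p-value for GO-term enrichment: the
    probability of observing >= k term-annotated proteins among n_deps
    DEPs, when n_term of the n_background quantified proteins carry
    the annotation."""
    total = comb(n_background, n_deps)
    upper = min(n_term, n_deps)
    hits = sum(comb(n_term, i) * comb(n_background - n_term, n_deps - i)
               for i in range(k, upper + 1))
    return hits / total

# Hypothetical counts: 12 of the 156 HT-group DEPs annotated to a term
# covering 60 of the 2358 quantified background proteins
p = go_enrichment_p(12, 156, 60, 2358)
print(p < 0.05)  # True: ~4 annotated DEPs expected by chance, 12 observed
```

In practice tools also apply multiple-testing correction (e.g. Benjamini-Hochberg) across all tested terms, since hundreds of GO terms are evaluated at once.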
In the LT group, five representative DEPs were downregulated: glutathione peroxidase 2 (PHGPx), heat shock cognate 70 kDa protein (Hsc70-4), heat shock protein 40 (HSP40), Ras-related GTP-binding protein D (RagC-D), and GTP-binding protein 128up. These DEPs are related to oxidative stress (PHGPx) and protein damage (Hsc70-4, HSP40, RagC-D, and GTP-binding protein 128up). Additionally, four DEPs were upregulated: reduced nicotinamide adenine dinucleotide (NADH) dehydrogenase [ubiquinone] 1 alpha subcomplex subunit 13 (ND-B16.6), V-type proton ATPase 16 kDa proteolipid subunit (Vha16-1), succinate dehydrogenase [ubiquinone] iron-sulfur subunit (SdhB), and superoxide dismutase [Cu-Zn] (SOD). These proteins are involved in the tricarboxylic acid (TCA) cycle (SdhB), oxidative phosphorylation (SdhB, ND-B16.6, and Vha16-1), and oxidative stress (SOD). For the HT group, 16 significantly downregulated proteins were identified, related to the TCA cycle (2-oxoglutarate dehydrogenase E2 component, malate dehydrogenase, and succinyl-CoA ligase subunit alpha), lipid metabolism (acetyl-CoA carboxylase), oxidative phosphorylation (cytochrome b-c1 complex subunit 7, cytochrome b-c1 complex subunit Rieske, and cytochrome c oxidase subunit 6A), amino acid metabolism (argininosuccinate synthase and glutamine synthetase), oxidative stress (thioredoxin reductase 1, glutathione S-transferase 1, and glutathione S-transferase Mu 1), and protein damage (protein disulfide isomerase A3, thioredoxin domain-containing protein 5 homolog, Ras-related protein Rab-5B, and Ras-related protein Rab-11A).
Seven significantly upregulated proteins were identified, related to amino acid metabolism (alanine transaminase), carbon metabolism (glycogen debranching enzyme), lipid metabolism (thiolase), oxidative phosphorylation (NADH dehydrogenase [ubiquinone] 1 alpha subcomplex subunit 13 and V-type proton ATPase 16 kDa proteolipid subunit), and protein damage (heat shock protein 90 homolog and heat shock protein 22).

3.3 Antioxidant enzyme responses

Fig. 4 summarizes the antioxidant activities in the hepatopancreas of E. sinensis following ammonia exposure. In the LT group, crabs had significantly higher SOD activity ( p < 0.05) than the CK and HT groups. However, CAT activity in the HT group was significantly lower ( p < 0.05) than that in the CK and LT groups. GSH-PX activity in the two ammonia treatment groups was significantly lower ( p < 0.05) than that in the CK group.

3.4 Parallel reaction monitoring results

To validate the proteomics data, one protein that was upregulated after ammonia exposure (PPO1) and four that were downregulated after ammonia exposure (ERp60, Cyp6a2, Hsc70-4, and Spz) were selected for PRM analysis. The validated proteins showed expression trends similar to those in the proteomics data, suggesting that the proteomics data were reliable ( Fig. 5 ).

3.5 Relative mRNA expression

The mRNA levels of nine proteins related to oxidative phosphorylation (NADH dehydrogenase [ubiquinone] 1 alpha subcomplex subunit 13), amino acid metabolism (alanine transaminase), lipid metabolism (thiolase), the TCA cycle (2-oxoglutarate dehydrogenase E2 component, malate dehydrogenase, and succinate dehydrogenase [ubiquinone] iron-sulfur subunit), carbon metabolism (glycogen debranching enzyme), oxidative stress (superoxide dismutase [Cu-Zn]), and stress damage (heat shock protein 40) were measured to assess the correlation between the transcription and translation levels.
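Comparing transcription and translation levels, as done here, amounts to correlating per-gene fold-changes from qRT-PCR and TMT quantification. A small sketch using Pearson's r on mostly hypothetical fold-change values (only the 1.63- and 2.85-fold protein figures echo the text; all other numbers are illustrative placeholders):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Per-gene fold-changes, protein (TMT) vs. mRNA (qRT-PCR). Only the
# 1.63- and 2.85-fold protein values come from the text; the remaining
# numbers are illustrative placeholders, not study data.
protein_fc = [1.63, 2.85, 1.4, 0.7, 0.6, 0.75, 1.3, 1.5, 0.8]
mrna_fc = [1.9, 2.5, 1.2, 1.3, 0.5, 0.7, 1.4, 1.6, 0.7]
r = pearson_r(protein_fc, mrna_fc)
print(r > 0.7)  # strongly positive: transcript and protein trends agree
```

A discordant gene (here index 3, mimicking the 2-oxoglutarate dehydrogenase E2 exception noted in the text) lowers r without erasing the overall concordance.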
The qRT-PCR data showed that the mRNA expression tendencies of most genes were similar to those in the proteomic analysis, except for the 2-oxoglutarate dehydrogenase E2 component ( Fig. 6 ).

4 Discussion

Juvenile E. sinensis is highly susceptible to acute ammonia stress induced by the application of nitrogen fertilizers in a rice-crab co-culture system. The 96-h LC50 of ammonia (90.92 mg/L) for juvenile E. sinensis found in our experiment was higher than the previously reported value for this species (26.21 mg/L; Roseboom and Richey, 1977 ), which may be owing to differences in wet mass and environmental temperature and pH. Based on actual agricultural production conditions and the 96-h LC50 of ammonia, we used TMT proteomics technology to explore the molecular mechanism underlying ammonia stress in juvenile E. sinensis ; there were 112 DEPs and 319 DEPs in the CK vs. LT and CK vs. HT comparisons, respectively. Proteomic data demonstrated that ammonia stress mainly caused changes in proteins associated with metabolism and stress damage. The following sections discuss the biological relevance of these DEPs.

4.1 Metabolic changes induced by ammonia exposure

Hepatotoxicity is typically accompanied by metabolic disturbances ( Dong et al., 2020 ). Proteomic analysis showed that the ammonia conditions differentially affected metabolism in the hepatopancreas of juvenile E. sinensis . In the CK vs. HT comparison, the majority of DEPs related to the TCA cycle were downregulated, including the 2-oxoglutarate dehydrogenase E2 component, succinyl-CoA ligase subunit alpha (Scsalpha), and malate dehydrogenase (Mdh2). These changes may cause a reduction in TCA cycle flux under high concentrations of ammonia stress, as has been described in Sparus aurata , where metabolic disruption induced by the downregulation of TCA cycle-related genes was observed under low-temperature exposure ( Mininni et al., 2014 ).
The TCA cycle is important in providing energy and serves as a metabolic hub for carbohydrates, lipids, and amino acids in aerobic organisms ( Ibarz et al., 2007 ). Alanine transaminase was significantly upregulated in the CK vs. HT comparison, potentially enhancing gluconeogenesis, as it catalyzes the transfer of the amino group from alanine to α-ketoglutarate ( Ndrepepa et al., 2019 ). We also found that the glycogen debranching enzyme displayed increased abundance after exposure to high concentrations of ammonia, indicating accelerated glycogen degradation. Previous studies have concluded that glycogen degradation can increase glucose levels in aquatic biota under environmental stress ( Tintos et al., 2007; León-Vaz et al., 2021 ). Taken together, we speculate that gluconeogenesis and the degradation of glycogen may be regulatory mechanisms that maintain intracellular glucose homeostasis in juvenile E. sinensis under high ammonia stress. In our study, some lipid metabolism-associated proteins were significantly altered after exposure to high concentrations of ammonia. Mdh2 also plays an important role in the citrate shuttle, catalyzing the conversion of malate to oxaloacetate to support the transport of acetyl-CoA from the mitochondria to the cytoplasm ( Zara and Gnoni, 1995 ). Mdh2 was significantly downregulated in the CK vs. HT comparison, suggesting that the transfer rate of acetyl-CoA was reduced. At the same time, acetyl-CoA carboxylase, the rate-limiting enzyme for fatty acid synthesis that catalyzes the conversion of acetyl-CoA to malonyl-CoA ( Grahame Hardie, 1989 ), was significantly downregulated in the CK vs. HT comparison. These data indicate that high ammonia stress inhibits fatty acid synthesis. However, the significant upregulation of thiolase, which is involved in fatty acid beta-oxidation, indicates that the oxidation of fatty acids in the hepatopancreas of juvenile E. sinensis was enhanced to produce more energy in response to high ammonia stress. Several studies have shown that hepatic glycogen and perivisceral fat reserves of aquatic biota may be utilized rapidly when energy consumption is excessive under environmental stress ( Nakamura et al., 1986; Andrew et al., 2003 ). Oxidative phosphorylation refers to the process through which electrons are transferred from NADH or flavin adenine dinucleotide-hydroquinone (FADH2) to O2 with the help of many electron carriers, ultimately forming ATP ( Fromm and Hargrove, 2012 ). Ten subunits of the complexes were downregulated, and five were upregulated, in the CK vs. HT comparison. Among them, we focused on the most affected DEPs, NADH dehydrogenase [ubiquinone] 1 alpha subcomplex subunit 13 (ND-B16.6) and V-type proton ATPase 16 kDa proteolipid subunit (Vha16-1), which were significantly upregulated (1.63-fold and 2.85-fold, respectively; p < 0.01). These data are consistent with previous proteomics studies, in which oxidative phosphorylation was upregulated when aquatic animals were subjected to environmental stress ( Fan et al., 2019; Li et al., 2021a, 2021b ). In this study, the significant upregulation of these DEPs suggests that oxidative phosphorylation may be enhanced, producing more ATP for juvenile E. sinensis to cope with high concentrations of ammonia stress. In the CK vs. LT comparison, there were only a few DEPs related to metabolism, indicating that low concentrations of ammonia had little effect on juvenile E. sinensis . All DEPs related to oxidative phosphorylation were upregulated under low-concentration ammonia exposure, including succinate dehydrogenase [ubiquinone] iron-sulfur subunit (SdhB), Vha16-1, and ND-B16.6, demonstrating that upregulation of oxidative phosphorylation may be an important energy regulation mechanism for E. sinensis to cope with low-concentration ammonia stress.
The regulation of gene expression is a complex biological process that responds to environmental changes at multiple levels, including the gene, transcriptional, post-transcriptional, translational, and post-translational levels ( Li et al., 2021a, 2021b ). In the CK vs. LT comparison, several proteins involved in the TCA cycle and glycogen degradation were marginally upregulated, including SdhB ( p < 0.05), Mdh2 ( p > 0.05), and the glycogen debranching enzyme ( p > 0.05), indicating that low concentrations of ammonia stress may cause an increase in TCA flux and glycogen degradation. A similar study reported that L. vannamei upregulated several metabolites associated with the TCA cycle under ammonia stress ( Xiao et al., 2019 ). At the transcriptional level, qRT-PCR analysis revealed that the mRNA levels of the 2-oxoglutarate dehydrogenase E2 component, Mdh2, and the glycogen debranching enzyme were significantly upregulated after exposure to low concentrations of ammonia. These results further support the finding that the TCA cycle and glycogen degradation show a marginally upregulated trend, producing energy for juvenile E. sinensis to adapt to low ammonia stress. Some aquatic animals have the capacity to convert ammonia to the less toxic urea via the ornithine-urea cycle ( Saha et al., 2001 ). Argininosuccinate synthase, the rate-limiting enzyme of the ornithine-urea cycle, which catalyzes the condensation of citrulline and aspartate to argininosuccinate, was significantly downregulated in the CK vs. HT comparison. At the same time, Mdh2 was significantly downregulated under high ammonia stress, which may have a negative effect on the aspartate supply. These results indicate that urea production may be reduced under conditions of high ammonia stress. Glutamine formation is also an important strategy for ammonia detoxification ( Randall and Tsui, 2002 ). Glutamine synthetase, which catalyzes the synthesis of glutamine, was significantly downregulated in the CK vs. HT comparison.
These results suggest that the detoxification of ammonia into urea and glutamine might be suppressed in juvenile E. sinensis under high ammonia conditions.

4.2 Stress damage induced by ammonia exposure

Previous studies have demonstrated that ammonia stress can disturb the oxidant-antioxidant balance and induce the accumulation of reactive oxygen species (ROS), causing oxidative stress in aquatic biota ( Wang and Gallagher, 2013; Cheng et al., 2019 ). Similar results were observed in the CK vs. HT comparison, in which cytochrome b-c1 complex subunit 7 (UQCR-14 L), cytochrome b-c1 complex subunit Rieske (RFeSP), and cytochrome c oxidase subunit 6A (levy) were significantly downregulated. The downregulation of complex III (cytochrome bc1 complex) and complex IV (cytochrome c oxidase) may prompt the accumulation of ROS and potentially induce oxidative stress ( Lin et al., 2017 ). Adjusting enzymatic processes is considered a primary defense mechanism against excess ROS ( Li et al., 2018 ). However, thioredoxin reductase 1 (Trxr-1), glutathione S-transferase 1 (GstD1), and glutathione S-transferase Mu 1 (GstS1) were significantly downregulated in the CK vs. HT comparison. We also found that superoxide dismutase [Cu-Zn] was significantly upregulated and glutathione peroxidase 2 was significantly downregulated in the CK vs. LT comparison. This suggests that the production of H2O2 may exceed its elimination, leading to the accumulation of H2O2. These results are similar to those of a previous study, in which E. sinensis could not scavenge the redundant ROS that had accumulated in cells and showed reduced or downregulated antioxidant enzyme levels ( Jin et al., 2017 ). This hypothesis was further confirmed by measuring antioxidant enzyme activities in the hepatopancreas of juvenile E. sinensis under ammonia exposure.
Compared to the CK group, SOD activity was significantly increased under low ammonia stress, whereas GSH-PX activity in the two ammonia treatment groups significantly decreased in a dose-dependent manner, consistent with the proteomics results. Moreover, CAT activity significantly decreased under high ammonia stress. Therefore, we propose that exposure to high concentrations of ammonia may induce more severe oxidative stress in the hepatopancreas of juvenile E. sinensis . Oxidative stress may also be associated with protein damage; previous studies have indicated that ammonia stress upregulates the protein synthesis pathway in ribosomes ( Lu et al., 2018 ), induces endoplasmic reticulum stress ( Liang et al., 2016 ), and alters the expression of markers related to protein folding ( Liang et al., 2019 ). Ribosomes are the sites of protein synthesis and play a central role in protein metabolism ( Liang et al., 2016 ). Among ribosomal proteins, we found that 15 were significantly downregulated in the CK vs. HT comparison, whereas four were significantly downregulated in the CK vs. LT comparison. These results indicate that ammonia stress may affect protein synthesis and reduce protein concentrations in juvenile E. sinensis ; this has also been reported in microalgae during exposure to Cd ( León-Vaz et al., 2021 ). Heat shock proteins (HSPs) are molecular chaperones that maintain homeostasis by refolding denatured proteins, degrading unstable or misfolded proteins, and protecting cells from oxidative stress, heat shock, and apoptosis ( Hendrick and Hartl, 1993 ). HSP70 and HSP40 function to maintain cellular homeostasis under stress conditions ( Parsell and Lindquist, 1993 ), yet their expression was significantly downregulated by ammonia stress in the CK vs. LT comparison. A previous study reported that a key role of HSP90 is to prevent the irreversible aggregation of proteins under stress conditions ( Feder and Hofmann, 1999 ).
The upregulation of HSPs (HSP90 and HSP22) may be an adaptation of juvenile E. sinensis to high ammonia stress that prevents damage to cellular proteins. However, protein disulfide isomerase (PDI)-related proteins, which facilitate the proper folding of proteins ( Bulleid and Ellgaard, 2011 ), including protein disulfide isomerase A3 (ERp60) and thioredoxin domain-containing protein 5 homolog (PRTP), were significantly downregulated in the CK vs. HT comparison. Furthermore, ammonia stress significantly downregulated the expression levels of Rabs or Rab-related proteins, including Ras-related GTP-binding protein D (RagC-D), Ras-related protein Rab-5B (Rab5), Ras-related protein Rab-11A (Rab11), and spartin. Rab GTPase proteins, members of the small G protein superfamily, are important regulators of critical cellular processes such as endocytosis and the elimination of pathogens through immune responses ( Cha et al., 2015 ). These results show that both high and low concentrations of ammonia can cause a certain degree of protein damage.

5 Conclusion

In conclusion, ammonia stress leads to metabolic changes and induces oxidative stress in juvenile E. sinensis . Proteomic data demonstrated that E. sinensis is sensitive to high ammonia conditions. Several metabolic pathways were upregulated to supply energy for coping with high ammonia stress, including oxidative phosphorylation, fatty acid beta-oxidation, gluconeogenesis, and the degradation of glycogen. The detoxification of ammonia into urea and glutamine was suppressed under high ammonia stress. Reduced antioxidant enzyme activities and the downregulation of related proteins revealed that exposure to high concentrations of ammonia may induce more severe oxidative stress. Furthermore, ribosomal subunit proteins, HSPs, and Rab GTPase proteins were altered upon exposure to different concentrations of ammonia, indicating that ammonia can affect protein synthesis and cause protein damage in juvenile E. sinensis .
The outcomes of this study will facilitate future research on the molecular mechanisms underlying ammonia toxicity in E. sinensis and provide technical support for agricultural production.

CRediT authorship contribution statement

Tianyu Wang: Experimental design, Writing – review & editing. Chen Yang: Sample pretreatment, Writing – original draft. Shuang Zhang: Sample collection. Liyan Rong: Investigation. Xiaofei Yang: Methodology. Zhaoxia Wu: Supervision, Validation. Wentao Sun: Project administration, Funding acquisition.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work was supported by the Science and Technology Innovation Talents Training Project of Liaoning Province from the Department of Science and Technology of Liaoning Province (grant no. XLYC1802044).

Appendix A Supporting information

Supplementary data associated with this article can be found in the online version at doi:10.1016/j.ecoenv.2021.112608.
|
[
"ANDREW",
"BATTH",
"BULLEID",
"CHA",
"CHEN",
"CHEN",
"CHENG",
"DAVIDBINNS",
"DONG",
"EMERSON",
"FAN",
"FAN",
"FEDER",
"FROMM",
"GRAHAMEHARDIE",
"HENDRICK",
"HONG",
"HUANG",
"IBARZ",
"JIA",
"JIN",
"JING",
"KANEHISA",
"LEONVAZ",
"LI",
"LI",
"LI",
"LIANG",
"LIANG",
"LIN",
"LIVAK",
"LU",
"LU",
"MININNI",
"NAKAMURA",
"NDREPEPA",
"NING",
"PARSELL",
"PETERSON",
"RANDALL",
"REISH",
"SAHA",
"SANDBERG",
"TANG",
"TANG",
"TINTOS",
"WANG",
"WANG",
"WANG",
"WANG",
"WANG",
"WEIHRAUCH",
"WICKS",
"WISNIEWSKI",
"XIAO",
"XU",
"YAN",
"YU",
"ZARA"
] |
f758f24a15ea42e1ab894db06c382abb_Role of nutrition in the development and prevention of age-related hearing loss A scoping review_10.1016_j.jfma.2020.05.011.xml
|
Role of nutrition in the development and prevention of age-related hearing loss: A scoping review
|
[
"Rodrigo, Luis",
"Campos-Asensio, Concepción",
"Rodríguez, Miguel Ángel",
"Crespo, Irene",
"Olmedillas, Hugo"
] |
Age-related hearing loss (ARHL) is a major and increasingly prevalent health problem worldwide, causing disability and social isolation in the people who present it. This impairment is caused by genetic and environmental factors. Nutritional status has been identified as a related risk associated with hearing loss (HL). This scoping review aimed to characterize the links between HL and nutritional status. PubMed, Embase, Cochrane and Scopus databases were searched up to December 2019. Studies examining the relation between nutrition and dietary habits and HL were included. After screening 3510 citations, 22 publications were selected for inclusion in the current review, all of which were published between 2010 and 2019. Diets rich in saturated fats and cholesterol have deleterious effects on hearing that could be prevented by lower consumption. Conversely, greater consumption of fruit and vegetables, and of polyunsaturated fatty acids (omega-3) and anti-oxidants in the form of vitamins A, C, and E, prevent the development of ARHL. The current literature suggests a possible association between nutritional status and hearing loss. More studies are needed to better characterize the clinical consequences of this association.
|
Introduction Age-related hearing loss (ARHL) or acquired hearing loss is one of the most frequent chronic diseases in elderly individuals, with a prevalence of 35%. According to data collected by the WHO, about 466 million people suffer from a disabling degree of hearing loss (HL), with men being more affected than women. Its appearance and development are influenced by genetic 1 and environmental factors, 2 including eating habits. In the context of the latter, HL has been analyzed from three different perspectives: a) the consequences of prolonged nutritional deficiencies 3–7 ; b) the positive influence of various components of the diet on preventing HL 8 ; c) the effect of supplementation with anti-oxidant components to prevent HL. 9 The relationship between food intake and hearing changes is of practical interest, since dietary changes can be made that can at least delay, or even prevent the development of HL. 10–12 13 Adoption of a diet whose main components are vegetables, fruits, whole grains, and an adequate quantity of fish is usually associated with a decrease in systemic inflammation, which could help maintain proper hearing. 14 15 The aim of this study was to use scoping review methods to analyze how a healthy diet may mediate the risk of, or protection against, HL. We limited our search to studies that have evaluated the composition of the diet in humans and in which the diagnosis of HL was made through objective audiometry tests. Methods This scoping review was performed according to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). 16 Data sources and search strategy MEDLINE [PubMed], Embase [via Embase.com] and CENTRAL (Cochrane Central Register of Controlled Trials [via Wiley]) were searched from their inception until Dec 16th, 2019. We imposed no language or temporal restrictions on any of the searches, although they were limited to studies on humans. 
The reproducible search strategy for PubMed can be found in Appendix 1 . The search strategies were developed by an experienced medical librarian [CC] and peer-reviewed by another librarian [JM] according to best practice recommendations. Keywords for the main search were identified with the Yale MeSH Analyzer text analysis tool, 17 Medical Subject Headings (MESH, via the US National Library for Medicine MESH browser), and Embase subject headings (EMTREE, via 18 Embase.com ). An exhaustive electronic search strategy was devised, based on the selected keywords, and a library of the retrieved articles was created with Rayyan software, which enables blinded screening of abstracts and titles. 19 Study selection To select articles obtained in this search, the following inclusion criteria were applied: (1) study carried out on humans; (2) study involves adult participants; (3) auditory function assessed by a trained audiologist by measuring pure tone air conduction; (4) study that evaluated the composition of the diet and (5) published in English. The following exclusion criteria were applied: (1) article is a review; (2) self-reported hearing-loss; (3) sudden deafness; (4) alcohol intake; (5) study analyzed dietary supplementation instead of whole food intake. All the articles identified were independently screened by two reviewers. After abstract screening, full-text versions of the articles deemed potentially suitable for inclusion were retrieved and independently evaluated against the eligibility criteria by the same reviewers. Disagreements between the reviewers about study eligibility were resolved by discussion. Data extraction and risk assessment of bias Data charting focused on relevant study characteristics and was conducted independently, in duplicate, by two reviewers, using a customized data-extraction form. Perceived discrepancies in the extracted data were resolved by the reviewers through discussion after rereading the full-text articles. 
The studies included in this scoping review were appraised for methodological quality according to appropriate research designs for observational studies: the Newcastle–Ottawa Scale (NOS) for cohort studies, and the modified NOS for cross-sectional studies. 20 21 Data synthesis The results of this scoping review were synthesized to indicate the significance and direction of the observed associations, and summarized as tables. Information about study characteristics was extracted to describe the studies and populations. Results Search findings A total of 3263 citations were retrieved by database searching. An additional 525 records were identified by reviewing the references cited in selected retrieved articles and by citation tracking of articles in Scopus (see Fig. 1 ) . After duplicates were removed, we screened 3510 article titles and abstracts. 3430 were excluded, leaving the full text of 80 articles to be retrieved and assessed for eligibility. This resulted in the exclusion of 58 further articles, for the following reasons: not a population of interest (n = 6); wrong outcome (n = 20); not dietary intake (n = 17); reviews (n = 7); alcohol intake (n = 6); unable to retrieve full article (n = 2). Finally, therefore, 22 studies were considered eligible for this review. Study characteristics The general data of the investigations included in this systematic review are presented in Table 2 . All the articles were published between 2010 and 2019 and studied the influence of nutritional factors on hearing function. Eleven studies analyzed macronutrients, 15 , eight studies evaluated micronutrients 22–31 13 , 29 , and five studies focused on other dietary factors. 31–36 29 , This scoping review covers twelve cross-sectional 37–40 15 , 27 , 29–32 , 34–36 , and ten cohort studies. 
38–40 13 , 22 , 24–27 , 29 , 33 , 37 , Six studies included participants from the Blue Mountains Hearing Loss Study (Australia), 41 24–27 , 31 , while nine used data from the National Health and Nutrition Examination Survey (NHANES) of the USA 33 15 , 23 , and Korea. 32 28 , 34−36 , 38 , All the investigations used mixed-gender samples, except one 39 that comprised only males. The average age of participants varied between 40 41 and 75. 40 The period of the original interventions ran from 1971 (the first to start) 35 to 2016 30 (the last to finish), the last intervention beginning in 2012. 22 One study was carried out over two periods (1992–1994 and 1997–1999), 39 and another analyzed three age cohorts of 70-year-olds on three occasions (1971–1972, 1992–1993 and 2000–2001). 27 The duration of the cohort studies ranged from 2 30 to 18 years, 37 with an overall mean of 8.1 years. 13 The main results of the studies are summarized in Tables 1 and 2 . Nutrient intake was evaluated principally using the Food Frequency Questionnaire (FFQ) 22–26 , 30 , 31 , 33 , 37–39 , and 24-h dietary recall, 41 15 , 28 , 29 , 32 , although the semi-quantitative FFQ, 34–36 a dietary history, 13 and a food preference questionnaire 30 were also used for this purpose. 40 Audiological examination in all studies consisted of pure-tone audiometry (PTA), except for that of Shargorodsky et al., which analyzed self-reports of HL. Audiometric questionnaires were included in three studies, 13 15 , 23 , otoscopic examinations were carried out in three, 31 23 , 26 , and tympanometry was performed in two studies. 35 15 , Apart from these, Spankovich et al. 23 examined cochlear function by recording transient evoked otoacoustic emissions (TEOAEs), and Hwang et al. 31 assessed temporal ordering (central auditory system) using the pitch pattern sequence (PPS). 37 Table 3 summarizes the quality assessment of the 22 studies according to research designs for observational studies. 
21 , 22

Discussion

We performed a scoping review of 22 studies, summarizing the current knowledge about the factors associated with diet composition and hearing status. The principal findings were that some individual nutrients and diet types are associated with hearing level. Diets rich in cholesterol and saturated fatty acids are harmful to hearing. Conversely, diets rich in polyunsaturated fatty acids, such as omega-3 and those from certain fish, the regular consumption of vegetables and fruits, and the intake of anti-oxidants in the form of various vitamins have a protective effect against HL. Moreover, a previous study has shown that if prolonged nutritional deficiencies in children are corrected in time through provision of proper nutrition, the onset of HL in adulthood can be prevented. However, further research is needed to establish definitively the connection between nutrition and HL. 8 The study of dietary patterns has been based on two methodologies: the analysis of retrospective studies and the execution of prospective studies based on nutritional recommendations and established guidelines. Adherence to a healthy diet is associated with a lower prevalence of the development of HL. 42 Poor dietary habits are related to overweight and obesity. 41 41 , Lalwani et al. 43 identified that a high body mass index (BMI) is related to the presence of ARHL in children of both sexes. Likewise, Croll et al. 43 identified similar results in older adults for BMI and fat mass in a cross-sectional analysis, but this association disappeared after 4 years of follow-up. These results suggest that maintaining a normal weight may help prevent HL in the elderly. However, we have not found any published research that confirms this hypothesis. 22 Nowadays, it is difficult to determine the importance of carbohydrate intake in the development of HL.
However, the types of food preferentially consumed that contain carbohydrates are associated not with a healthy diet (whole grain, vegetables and fruit), but rather with simple sugars (monosaccharides and disaccharides), which regularly occur in foods with added sugar and are linked to high triglyceride levels.44 Additionally, the regular intake of low molecular weight carbohydrates also increases blood triglyceride levels,45 which is associated with HL in men and women.46 Regarding this point, in the Blue Mountains Hearing Study, a significant correlation was noted between high glycemic levels and the presence of HL in a group of adults.30 Conversely, higher carbohydrate intakes were associated with better auditory function.24 This observed discrepancy, despite the measurements being made in the same cohort, albeit over different durations, could be a consequence of the different methods employed. On the one hand, Gopinath et al.31 studied the glycemic index and used the PTA measurement to evaluate HL, while on the other, Spankovich et al.23 analyzed carbohydrate intake through the FFQ and evaluated HL by determining the TEOAEs. However, it should be pointed out that PTA is the most commonly used technique for evaluating hearing levels, and the most frequently employed in all the studies.31,47 Diets rich in cholesterol are usually associated with harmful effects on hearing. Not all fatty acids have a clearly deleterious effect, since an inverse relationship between the consumption of polyunsaturated fatty acids (PUFAs) and/or the consumption of fish has been described with respect to the incidence and prevalence of HL. In relation to the frequency of consumption of fish that are rich in omega-3 fatty acids, eating them at least twice a week was found to reduce the incidence of presbycusis by 42% by the end of a 5-year follow-up. 
The mechanisms that could explain this beneficial effect are based on modifying vascular disorders at the cochlear level and on inflammatory changes related to arteriosclerosis.25 However, other researchers found no significant association between regular fish intake and auditory levels.48 The discrepancy can be explained by the different study periods (5 vs. 13 years).29 In another study, elevated serum lipid levels were observed in patients presenting with sudden sensory HL, but this association has not been confirmed in cases of slow progressive onset of HL, as occurs in patients with ARHL. Conversely, subjects with diets poor in both fat and protein are more likely to experience “hearing discomfort”.49 Only one study reviewed here evaluated the role of protein intake and its influence on HL. Kim et al.28 reported a negative correlation between low-protein intake and hearing discomfort based on mean hearing thresholds, but not on the degree of HL. However, this interpretation should be regarded with caution, since only at low frequencies did the hearing threshold exhibit a statistically significant association with protein intake. Consistent with these findings, insufficient protein intake by pigs produced ototoxic side effects.50 Therefore, low protein intake might have detrimental effects on the auditory system through its consequences for neural function. The antioxidant effects of vitamins A, C, and E are known to be of potential benefit to the prevention and treatment of HL, having been studied as components of the regular diet and as dietary supplements. Vitamin A, in the form of its active metabolite, retinoic acid, is essential for the normal development of the inner ear, in addition to its effects protecting against continued exposure to ambient noise, and preventing infections, especially in malnourished children.51 High levels of consumption of vitamin C are associated with better levels of hearing in the medium-frequency range;52 the consumption of beta-carotene, vitamins C and E, as well as magnesium, improves the average PTA response at high frequencies, its protective role being significantly stronger when administered in combination than as an isolated intake. Likewise, Gopinath et al.32 reported that a high level of intake of vitamins A and E was inversely associated with the prevalence of HL, but a 5-year longitudinal analysis did not show any association with the incidence of HL.33 Findings about the effects of vitamin D on hearing presented in previous reports are not consistent. High serum vitamin D concentration was associated with worse hearing at high frequencies.34 These results have been reported in an animal model in which a vitamin D-deficient diet was able to prevent hearing loss in mice with hypervitaminosis D. A high prevalence of vitamin D deficiency or insufficiency has been reported in patients with hearing problems.53 On the other hand, the deficit of vitamin B12 and folic acid (B9), especially in older age, is associated with an increase in serum homocysteine (Hcy) concentrations, which have a detrimental effect on blood flow at the cochlear level.54 Serum vitamin B12 is not significantly associated with hearing loss,55 but people with moderate levels of B9 have 32% lower odds of experiencing HL at lower frequencies (0.5–4.0 kHz). Finally, vitamin C supplementation significantly decreases the permanent hearing threshold, while its deficiency has no effect on HL.56 Limitations The most important limitation of the data is their heterogeneity across studies with respect to dietary factors, study design, and outcomes of interest. This makes it impossible to determine the true relationship between nutrition and HL. 
In addition, most studies have examined the role of each nutrient in isolation, without taking into account the overall intake. Thus, the limited and sometimes contradictory results make it important to carry out further research into this matter. Conclusions This scoping review leads us to conclude that diets rich in saturated fats and cholesterol have clearly detrimental effects in relation to the development of HL. This damage can be prevented by restricting their consumption, and by increasing that of vegetables and fruits, polyunsaturated fatty acids (omega-3), and of anti-oxidants in the form of vitamins A, C, and E, which have a protective effect against HL, especially in older people. Funding The author(s) received no financial support for the research, authorship, and/or publication of this article. Authorship statement Conception and design of study: L. Rodrigo, H. Olmedillas; Acquisition of data: L. Rodrigo, M.A. Rodríguez, H. Olmedillas; Analysis and/or interpretation of data: L. Rodrigo, H. Olmedillas, C. Campos-Asensio, I. Crespo; Drafting the manuscript: L. Rodrigo, H. Olmedillas, M.A. Rodríguez; Revising the manuscript critically for important intellectual content: C. Campos-Asensio, I. Crespo. Declaration of competing interest The authors have no conflicts of interest relevant to this article. Acknowledgments The authors would like to thank Sergio Pérez-Holanda and Juan Medino Muñoz for their help with the peer review of the literature search. Appendix A Supplementary data Supplementary data to this article (Multimedia component 1) can be found online at https://doi.org/10.1016/j.jfma.2020.05.011 .
|
[
"MARLENGA",
"CABANILLAS",
"FRANSEN",
"BAINBRIDGE",
"FABRY",
"LANVERSKAMINSKY",
"BIELEFELD",
"EMMETT",
"CARLSON",
"RAUTIAINEN",
"KARLI",
"SEMBA",
"SHARGORODSKY",
"CURHAN",
"SPANKOVICH",
"TRICCO",
"MCGOWAN",
"OUZZANI",
"WELLS",
"MODESTI",
"CROLL",
"SPANKOVICH",
"GOPINATH",
"GOPINATH",
"GOPINATH",
"GOPINATH",
"KIM",
"PENEAU",
"ROSENHALL",
"SPANKOVICH",
"CHOI",
"GOPINATH",
"KANG",
"KIM",
"JUNG",
"HWANG",
"LEE",
"LEE",
"VUCKOVIC",
"GALLAGHER",
"BOUNTZIOUKA",
"LALWANI",
"PUGA",
"RIPPE",
"XI",
"FUNAMURA",
"FUJIMOTO",
"WENG",
"LAUTERMANN",
"IBRAHIM",
"ELEMRAID",
"CARPINELLI",
"GHAZAVI",
"PARTEARROYO",
"KABAGAMBE",
"MCFADDEN"
] |
6e5af17e59934584897cdcb2e8faa909_Battery energy storage systems providing dynamic containment frequency response service_10.1016_j.ijepes.2024.110288.xml
|
Battery energy storage systems providing dynamic containment frequency response service
|
[
"Cao, Xihai",
"Engelhardt, Jan",
"Ziras, Charalampos",
"Marinelli, Mattia",
"Zhao, Nan"
] |
Battery energy storage systems (BESS) have emerged as a critical component in maintaining power system stability through frequency regulation. Their rapid response and flexible characteristics have generated considerable interest among researchers. This study focuses on the provision of a fast frequency response service, known as Dynamic Containment Frequency Response (DCFR), in Great Britain (GB). It conducts a detailed assessment of BESS-based DCFR service for frequency regulation and State-of-charge (SOC) management, including the configuration constraints set out by the energy recovery rules and SOC management impact. A methodology is presented to investigate the performance of DCFR-based BESS in a power system, alongside a stability analysis focusing on the impact of the SOC management mechanism. The stability study investigates the potential influential factors of battery SOC management when providing DCFR via root locus. For simulation case studies, a power imbalance estimation method is utilized for gaining the input. Based on the stability analysis results, key BESS configuration parameters are examined in an integrated power system model: C-rate, SOC management range, ratio and target. Another influential factor, SOC management time delay, is also analyzed. Finally, a comparison between DCFR and the previous frequency regulation service, Enhanced Frequency Response (EFR), is conducted. The study reveals that improper SOC management in DCFR can lead to SOC oscillation, adversely affecting performance. However, with proper configuration, DCFR offers more favorable outcomes than EFR in terms of frequency quality, SOC levels, and battery degradation.
|
Abbreviations: DCFR, Dynamic Containment Frequency Response; BESS, Battery energy storage systems; EFR, Enhanced Frequency Response; ER, Energy recovery; FEC, Full equivalent cycles; GB, Great Britain; NGET, National Grid Electricity Transmission; PFR, Primary Frequency Response; RES, Renewable energy sources; REV, Response energy volume; SFR, Secondary Frequency Response; SOC, State-of-charge; SP, Settlement period; TSO, Transmission system operator. 1 Introduction The need for decarbonization in recent years has resulted in a notable upsurge in the integration of Renewable energy sources (RES) in power systems, with renewables accounting for 50.9% of the total electricity generation in the UK during the first quarter of 2024 [1] . However, the low inertia and intermittency of RES introduce challenges, such as more volatile frequency variation. Power system frequency is a critical factor indicating the balance between generation and demand, and requires regulation within specific limits. Various frequency response services, including Primary Frequency Response (PFR) and Secondary Frequency Response (SFR), are employed to achieve desirable frequency conditions. However, the increasing penetration of non-synchronous renewable generation compromises conventional frequency regulation capabilities. To overcome this shortcoming, BESS offer a promising alternative solution due to their fast-responding, flexible, and scalable features [2] . Therefore, there is a growing interest in researching BESS for their potential to provide frequency regulation services. While the utilization of BESS for PFR has been widely discussed [3–5] , fast frequency response services for BESS are emerging as new techniques to tackle frequency fluctuation. The Transmission system operator (TSO) in Ireland has developed fast frequency response services [6] , while a fast frequency reserve service is being implemented in Denmark to address frequency deviations more promptly [7] . 
However, both products are specifically tailored for under-frequency scenarios. On the other hand, services such as primary containment reserve and fast instantaneous reserves are deployed in Germany and New Zealand [8,9] , enabling providers to deliver support in a symmetric manner, but fast instantaneous reserves do not include any energy recovery rules suitable for energy-limited units like BESS. Although primary containment reserve accounts for energy management, it lacks detailed requirements for BESS to follow during the process. In GB, EFR was introduced in 2016 to enable faster frequency response from BESS, and studies have highlighted the importance of careful SOC regulation for satisfactory outcomes [10–12] . Meanwhile, frequency quality can be compromised in certain cases due to improper SOC management [13] . In late 2020, National Grid Electricity Transmission (NGET), the TSO of GB, developed a new suite of fast-acting frequency response services as a step up from EFR [14] , with DCFR being the major service, requiring full delivery within 1 s of the frequency deviation. This makes DCFR more rapid than the aforementioned frequency regulation services. Additionally, the symmetric design and detailed energy recovery rules make BESS more suitable to deliver such a service. Therefore, as a frequency response service that is applicable to any power system, an evaluation of BESS providing DCFR in terms of frequency quality and SOC management is a valuable exploration. It has to be noted that DCFR is officially named DC by NGET; however, this can be confusing to readers since it conflicts with the terminology of direct current, hence the term DCFR is adopted in this paper. Several studies have been conducted since DCFR was launched. 
Researchers in [15] combine both EFR and DCFR and compare them with traditional PFR in the event of sudden generation loss, while the authors of [16] compare DCFR with other newly proposed fast services and discuss their distinct roles. However, neither [15] nor [16] specifies the technology used for providing such services. Ref. [17] evaluates the performance of a flywheel system when delivering DCFR, which provides limited guidance for BESS units. On the other hand, a few works explore the possibility of utilizing BESS as the provider of DCFR. A sensitivity study of BESS performing DCFR is undertaken in [18] , while the authors of [19] compare PFR with DCFR in terms of their effectiveness in improving the frequency nadir when facing a disturbance. However, the SOC management mechanism when delivering DCFR is not covered in either research work. Article [20] proposes a strategy for BESS to better manage SOC levels for cost reduction, but lacks details of the SOC management itself. An optimizing strategy for BESS-integrated wind farms to provide DCFR is introduced in [21] ; the strategy introduces SOC management rules but does not discuss their impact on BESS performance. Moreover, that article concentrates on storage optimization and the power exchange between a wind farm and BESS. Hence, the impact of DCFR SOC management has not yet been fully explored. Furthermore, the studies in [18–21] lack analysis of the configuration constraints implicitly imposed by the DCFR energy recovery rules, a factor that significantly influences BESS performance. Additionally, these studies either rely on acquired frequency data or simply incorporate a random power loss in the system model, neglecting the dynamic interplay between grid frequency and BESS. Aside from frequency regulation, certain investigations also center on the impact of DCFR on other aspects such as local voltage or the transient rotor angle stability of synchronous generators [22,23] . 
Table 1 explicitly compares this work with the above literature related to investigations on DCFR. Within the limited body of research on DCFR, there is a notable gap in comprehensively understanding the overall operation of DCFR. The significant impact of SOC management rules on BESS performance also necessitates a thorough investigation of DCFR functionality. Besides, the DCFR configuration constraints imposed on BESS by the energy recovery requirements have not been adequately addressed in previous studies. Furthermore, as an evolution of the previous EFR service, a comparison between DCFR and EFR is yet to be conducted for better understanding DCFR characteristics. Given that previous research on DCFR has largely overlooked the interactions between grid frequency and BESS input/output, a well-developed power system model is necessary to simulate such mutual effects in an adequate manner. Additionally, as the acquisition of model input with high resolution is challenging to achieve, an input estimation method is also needed for completing the investigation. To address the identified gaps in the literature, this paper offers an in-depth exploration of the DCFR mechanism and operation as applied to BESS, with a focus on frequency regulation, SOC management, and the implications of configuration constraints. It also provides a stability analysis of the BESS system under various conditions. The main contributions of this study are: (1) DCFR service exploration: A comprehensive foundation of the BESS-based DCFR service is presented, covering power response characteristics, SOC management rules, and the associated service configuration constraints. This interpretation offers a perspective on the DCFR mechanism and its operational framework for BESS. (2) SOC management impact analysis: A stability analysis for BESS providing DCFR is conducted, assessing the impact of the SOC management mechanism. 
Conditions to maintain a stable SOC management system are summarized, and supported by case studies in the developed integrated power system model. (3) Identification of key BESS configurations: Investigations are carried out to identify the key influential parameters of battery settings when providing DCFR. Furthermore, the study includes a comparative analysis between DCFR and its predecessor, EFR, to highlight the performance distinctions and improvements. The results offer guidance to DCFR operators. This article is structured as follows: In Section 2 , the technical specification of the DCFR service is analyzed and described in detail. BESS configuration constraints due to energy recovery rules are also discussed. In Section 3 , the impact of BESS on power system frequency is discussed. A corresponding power system model integrated with the BESS SOC management system is subsequently developed. A stability analysis of the SOC management system is also carried out in this section. In Section 4 , a power imbalance estimation method is introduced, and the service assumptions of DCFR are presented. In Section 5 , four parameters of BESS configurations that can affect DCFR service performance are analyzed individually. Finally, a comparison between DCFR and EFR is also carried out. Section 6 concludes this work. 2 DCFR Service Technical Specifications 2.1 DCFR response characteristics DCFR is developed to mitigate the risks associated with reduced grid inertia and volatile power imbalances in the power system [24] . NGET aspires to contract 1 GW of DCFR service [25] , with a requirement for contracted quantity delivery within 1 s of the frequency deviation [26] , where the contracted quantity represents the maximum power of the service. BESSs are well-suited for providing DCFR due to their fast-responding capabilities. Fig. 1 shows the DCFR response characteristic; activation is triggered only when frequency is outside the deadband (49.985, 50.015 Hz). 
Positive power output indicates energy injection during low-frequency situations, while negative power output signifies energy absorption from the system during over-frequency conditions. The knee-points (49.8, 50.2 Hz) require a BESS to provide 5% of the contracted quantity, while the remaining 95% is allocated further until the saturation points (49.5, 50.5 Hz). Thus, DCFR, as a frequency regulation service, accommodates all frequency situations but prioritizes significant deviations. It is noteworthy that DCFR reserve capacity can be partially or asymmetrically provided. DCFR high-frequency (DCFR-HF) involves offering the service exclusively during over-frequency situations, while DCFR low-frequency (DCFR-LF) focuses on under-frequency events. A bundled service can deliver frequency regulation in both directions, but the reserve capacity may be asymmetrical. 2.2 DCFR SOC management mechanism The previous fast frequency service, EFR, implements a SOC management strategy by introducing multiple response curves (see [13] ). In contrast, DCFR, as an evolved service, follows a single-curve line as SOC management is detached from the power response. BESS manages SOC through the submission of an operational baseline in each Settlement period (SP). An SP is a half-hourly time interval during which electricity consumption and generation are measured and recorded in the GB electricity market, as shown in Fig. 2 . The operational baseline represents the power output solely dedicated to managing energy levels and remains constant throughout each SP. The actual power output from BESS is the sum of the operational baseline and real-time frequency response power (Eq. (1) ). P_BESS(t), P_FR(t) and P_SOC(t) respectively denote the total power, frequency response power, and SOC management power via baseline. The difference between the metered power output and the submitted baseline will be evaluated by NGET; failure to comply with the response delivery requirements may result in penalization. 
(1) P_BESS(t) = P_FR(t) + P_SOC(t). It is important to note two features of the operational baseline: Firstly, there are ramp rate limits when transitioning between two SPs, with a maximum limit of 5% of the contracted quantity per minute. Single-side DCFR suppliers have ramp rate limits in one direction only. Secondly, there is a 1-hour gate closure before baselines can be applied due to the convention in the balancing market [26] . BESS-based DCFR service providers calculate their SOC levels at the start of each SP and submit the corresponding baseline by the end of that SP. The baseline will then take effect after two consecutive SPs. Thus, it will take 90 min for the baseline to be applied. Fig. 2 depicts the functionality of DCFR SOC management. Fig. 3 demonstrates the generation of total power output by a BESS providing DCFR service. P_BESS^r, P_FR^r and P_SOC^r are the rated power, contracted quantity and maximum baseline of BESS, and Q_BESS^r represents the rated capacity of BESS; the ratio of P_SOC^r to P_BESS^r indicates the headroom power reserved for SOC management. The operational baseline can be calculated via presetting the SOC management target (SOC_t), as shown in Eq. (2) . SOC_SP is the settlement period SOC, which indicates the measured SOC level at the beginning of every SP, and P_SOC is the corresponding baseline power to be implemented 90 min later. t_SP is the time interval of each SP, which is 30 min. (2) P_SOC = (SOC_t − SOC_SP) / t_SP ∗ Q_BESS^r. 2.3 BESS-based DCFR configuration constraints As part of the SOC management mechanism, NGET imposes the following mandatory energy requirements that influence BESS performance, as outlined in [26] : • Response energy volume (REV): the minimum energy a BESS should be able to deliver before SOC management is applied. It is calculated as 15 min of the full contracted quantity, shown in Eq. (3) . 
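As an illustration, the piecewise response characteristic of Fig. 1 and the baseline calculation of Eq. (2) can be sketched in Python as follows. This is a minimal sketch with hypothetical function names; the breakpoints and percentages are those given in Section 2.1, and ramp-rate limits and the 1-hour gate closure are omitted.

```python
def dcfr_power(f_hz, p_fr_rated):
    """Normalized DCFR response of Fig. 1: positive output = energy injection.

    Deadband (49.985, 50.015) Hz -> no response; knee points (49.8, 50.2) Hz
    -> 5% of the contracted quantity; saturation (49.5, 50.5) Hz -> 100%.
    """
    if 49.985 < f_hz < 50.015:           # inside deadband: no activation
        return 0.0
    if f_hz <= 50.0:                      # under-frequency: inject energy
        if f_hz <= 49.5:
            frac = 1.0                    # saturated at full contracted quantity
        elif f_hz <= 49.8:
            frac = 0.05 + 0.95 * (49.8 - f_hz) / (49.8 - 49.5)
        else:
            frac = 0.05 * (49.985 - f_hz) / (49.985 - 49.8)
        return frac * p_fr_rated
    # over-frequency: absorb energy (mirror image, negative output)
    if f_hz >= 50.5:
        frac = 1.0
    elif f_hz >= 50.2:
        frac = 0.05 + 0.95 * (f_hz - 50.2) / (50.5 - 50.2)
    else:
        frac = 0.05 * (f_hz - 50.015) / (50.2 - 50.015)
    return -frac * p_fr_rated


def baseline_power(soc_sp, soc_target, q_rated_mwh, t_sp_h=0.5):
    """Operational baseline of Eq. (2): P_SOC = (SOC_t - SOC_SP) / t_SP * Q_BESS^r."""
    return (soc_target - soc_sp) / t_sp_h * q_rated_mwh
```

For example, at the 49.8 Hz knee point a unit with a 10 MW contracted quantity delivers 0.5 MW, and a 20 MWh unit 10 percentage points below its SOC target submits a 4 MW charging baseline.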
• Energy recovery (ER): the minimum energy recovered from SOC management in each SP. It is calculated as 20% of REV, shown in Eq. (4) . Since it is a minimum energy requirement, it is also the lower energy provision limit from baselines in 30 min. (3) REV = P_FR^r ∗ t_REV, where t_REV is the required time interval, which is 15 min. (4) ER = REV ∗ 0.2 ≤ P_SOC(t) ∗ t_SP. Therefore, given that the maximum baseline P_SOC^r satisfies P_SOC^r ≥ P_SOC(t), the relationship between contracted quantity and maximum baseline can be derived in Eq. (5) and subsequently leads to the final formulation in Eq. (6) . (5) P_SOC^r ∗ t_SP ≥ REV ∗ 0.2 = P_FR^r ∗ t_REV ∗ 0.2, (6) P_FR^r / P_SOC^r = n ≤ 5 ∗ t_SP / t_REV = 10. Eq. (6) reveals that the SOC management ratio n should be no more than 10 for BESS providing DCFR service, indicating the power reserved for SOC management should be at least 10% of the power reserved for frequency response. Other than the SOC management ratio, the energy requirement also imposes a limitation on the C-rate and SOC management range of BESS. C-rate represents the ratio of rated power to the rated capacity of a battery, shown in Eq. (7) . (7) C = P_BESS^r / Q_BESS^r. SOC management range refers to the scope outside which SOC needs to be managed, spanning from a lower limit (SOC_l) to an upper limit (SOC_h). REV specifies the minimum stored energy before SOC management takes place, hence the SOC range should provide a higher threshold than REV, shown in Eq. (8) . It presents that the lower limit of the SOC management range should contain at least the energy of REV; the upper limit works the same way reversely. (8) Q_BESS^r ∗ SOC_l ≥ REV. Given that t_REV equals 15 min and the energy unit is MWh, Eq. (6) – (8) can be combined as Eq. (9) . (9) C ≤ 4 ∗ SOC_l ∗ (n + 1) / n. 
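The combined constraint of Eq. (9) can be checked numerically. The helper below is a sketch (the function name is ours), assuming t_REV = 15 min and t_SP = 30 min as above:

```python
def max_c_rate(n, soc_l):
    """Eq. (9): C <= 4 * SOC_l * (n + 1) / n.

    n     : SOC management ratio P_FR^r / P_SOC^r, limited to 10 by Eq. (6)
    soc_l : lower limit of the SOC management range (fraction of capacity)
    Returns the maximum admissible C-rate of the BESS.
    """
    if n > 10:
        raise ValueError("Eq. (6): SOC management ratio n must not exceed 10")
    return 4.0 * soc_l * (n + 1) / n
```

With n = 10 and SOC_l = 40%, the helper reproduces the C-rate ceiling of 1.76 used in the worked example of the text.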
For example, a system with n = 10 will result in a C-rate no more than 4.4 ∗ SOC_l, and if the lower limit of the SOC management range is set at 40%, then C_max = 1.76, which means the value of rated power of BESS should not be more than 1.76 times the value of rated capacity. A smaller C-rate should be used if the SOC management range is asymmetrical. 3 Modeling the power system with BESS integration 3.1 Swing-equation model for system frequency System frequency can be mathematically derived through the swing equation below [27] : (10) 2H ∗ dΔf(t)/dt + D ∗ Δf(t) = P_g(t) − P_d(t), where H is the equivalent system inertia constant, D is the self-regulating load response, and Δf(t), P_g(t) and P_d(t) are the frequency deviation, power generation and power demand in per unit. It is clear that the more unbalanced the system, the greater the frequency deviation. Eq. (11) describes how frequency deviation is determined by power imbalance in the Laplace domain. (11) Δf(s) = (P_g(s) − P_d(s)) / (2Hs + D). The swing equation is adjusted by including a BESS that aims to minimize the power imbalance, as presented in Eq. (12) . BESS power can be expressed in a simplified manner in Eq. (13) , where k and b are the slope and intercept of the DCFR response curve. (12) 2H ∗ dΔf(t)/dt + D ∗ Δf(t) = P_g(t) − P_d(t) + P_BESS(t), (13) P_BESS(t) = (k ∗ Δf(t) + b) ∗ P_FR^r. As the intercept b and the SOC management power do not contribute to frequency response, they are excluded from the swing equation. Hence, the modified frequency deviation equation in the Laplace domain will eventually result in (14) Δf(s) = (P_g(s) − P_d(s)) / (2Hs + D − k ∗ P_FR^r). Comparing Eq. (11) and (14) , the term −k ∗ P_FR^r in the denominator is the impact brought by the BESS. 
Since k is a non-positive value from the DCFR response curve, the denominator gets greater thanks to the BESS contribution, resulting in a smaller frequency deviation. Hence, the provision of DCFR by BESS yields a favorable impact on system frequency. 3.2 Power system model development Using the GB power system as an example, a corresponding model is developed based on the swing equation, as shown in Fig. 4 , to incorporate system inertia, PFR, SFR, and the DCFR service provided by BESS. The model takes power imbalance as input and determines system frequency in each simulation cycle. The calculated system frequency also feeds back to the PFR, SFR, and BESS blocks for determining the corresponding frequency response power. Such dynamics are not accounted for in the existing literature. PFR and SFR are provided by conventional power plants with droop and integral control signals, respectively. BESS operate in parallel with PFR and SFR, providing the dynamic containment service. The combined power output from frequency response minimizes the power imbalance, resulting in a smaller frequency deviation. Model parameters are listed in Table 2 , developed based on the following assumptions: • System inertia constant (H) is determined based on comprehensive assessments of renewables, gas, and other generation types using UK generation data in February 2020 [28] and the corresponding inertia constants [15] . • Damping constant D is set to 1.0. T_G and T_T depict the governor and turbine response, and the transient droop compensator for stable frequency performance is represented by T_D1 and T_D2 [27] . • The model includes a deadband of ±15 mHz for PFR and SFR activation [29] . • NGET requires generator governor droop settings of 3%–5% for primary frequency response, therefore the denominator of the PFR gain, R, is set to 0.5 [30] . SFR is represented by integral control, of which the gain k is collected from another research work [31] . 
• The ratio of generation that provides PFR is represented by k_PFR, which is derived from the electricity production by sources in the UK [28] . Since renewable generation accounts for around 40% and possesses little frequency regulation capability, k_PFR is then set at 0.6. • The ratio of generation that provides both PFR and SFR is represented by k_SFR. It is assumed that 80% of generation that provides PFR also participates in the SFR market [15] , hence k_SFR is assumed to be 0.48. • The system model is developed based on the UK total power demand of 41 GW [32] . 3.3 SOC management and stability analysis The BESS model is integrated in the power system model in Fig. 4 . As described in the above section, the total power output (P_BESS) is comprised of power for the DCFR service (P_FR) and power for SOC management (P_SOC). The gain k_DC in the model represents the power-frequency characteristics shown in Fig. 1 . BESS SOC calculation is described by an integral block, indicating the energy accumulated in the battery, and k_SOC is the factor for normalizing the corresponding energy into percentage. SOC_t indicates the target level of SOC management, hence it is introduced as a disturbance in the model. Similarly, k_OB represents the operational baseline gain, which converts the energy recovery into the corresponding power. It is highlighted that the energy requirement of DCFR imposes a minimum energy recovery level for SOC management and the maximum power is also subject to the n ratio as outlined in Section 2 , indicating that the power may not correct SOC to its target level; hence k_OB is constrained in a limited range. Finally, the operational baseline activation delay time of 90 min (τ_DC = 5400 s) is implemented. Therefore, BESS power can be calculated via Eq. (15) . (15) P_BESS(s) = (k_DC ∗ Δf(s) + k_OB ∗ e^(−τ_DC·s) ∗ SOC_t(s)) / (1 + (k_SOC / s) ∗ k_OB ∗ e^(−τ_DC·s)). 
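To make the frequency-BESS interplay of Section 3.1 concrete, the sketch below integrates the swing equation (12) by forward Euler with a linearized droop BESS (Eq. (13) with b = 0). The SOC management loop, the deadband and the full piecewise DCFR curve are omitted for brevity, and all parameter values are illustrative rather than those of Table 2.

```python
def simulate_frequency(p_imbalance_pu, h=4.0, d=1.0, k_bess=-10.0,
                       p_fr_pu=0.01, dt=0.1):
    """Forward-Euler sketch of the swing equation (12) with a linear-droop BESS.

    p_imbalance_pu : per-unit power imbalance samples (P_g - P_d)
    h, d           : inertia and damping constants of Eq. (10)
    k_bess         : slope of the linearized DCFR curve (non-positive, Section 3.1)
    p_fr_pu        : BESS contracted quantity in per unit of system demand
    Returns the per-unit frequency deviation trajectory.
    """
    df = 0.0
    trajectory = []
    for p_im in p_imbalance_pu:
        p_bess = k_bess * df * p_fr_pu          # Eq. (13) with intercept b = 0
        # Eq. (12): 2H * d(df)/dt = (P_g - P_d) + P_BESS - D * df
        df += dt / (2.0 * h) * (p_im + p_bess - d * df)
        trajectory.append(df)
    return trajectory
```

For a constant 1% generation deficit, the trajectory settles at Δf = ΔP/(D − k·P_FR^r), so the steady-state deviation shrinks from −0.01 p.u. without the BESS to about −0.0091 p.u. with it, mirroring the denominator comparison of Eqs. (11) and (14).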
As an ancillary service provider, it is important to analyze the stability performance of BESS, especially for SOC management. According to Fig. 4 , the open loop transfer function of SOC management can be obtained as given by Eq. (16) . Note that the ramp rate limit, saturation and deadband blocks are not considered for stability analysis. (16) G(s) = (k_SOC / s) ∗ k_OB ∗ e^(−τ_DC·s). The root locus of the BESS model is therefore drawn in Fig. 5 . It is worth noting that the delay component in Eq. (16) , expressed via the Taylor series expansion, results in an infinite number of characteristic roots, hence there is supposed to be an endless number of loci. This paper simplifies the expression by adopting only the first eight terms of the expansion, and only the loci related to the selected terms are demonstrated in the figure. Fig. 5 demonstrates that as the open-loop gain increases, the closed-loop characteristic roots tend to cross the imaginary axis and enter the right side (unstable area) of the s-plane, indicating that the system will lose its stability if the gain is large enough to make the locus cross the axis. The open-loop gain of the root locus is a combination of k_SOC and k_OB, implying that the factors influencing these two gains are crucial in designing a stable SOC management system. The variable k_SOC represents the energy throughput of the charging/discharging process, determining the rate at which the SOC level changes corresponding to constant power, while k_OB denotes the operational baseline level, which is associated with the SOC management process. This indicates that, for a stable SOC management system, the rate of energy accumulation during charging/discharging and the submitted operational baseline must be constrained within a limited range. 
Based on the description of Section 2, the possible means to achieve a stable system would be: (i) keeping an appropriate n ratio; (ii) setting a reasonable recovery level (i.e., constraining k_OB); (iii) avoiding frequent SOC management actions. These will be tested and discussed in the case studies. It is speculated that the instability is caused by the long delay time before the SOC management command is implemented. Therefore, it is also worth investigating further the impact of such delay time. Fig. 6 demonstrates the root locus of the same system with different delay times. Delays of 60 min and 30 min are investigated apart from the default setting. It can be observed that as the delay time reduces, the poles move farther left and the zeros move farther right. Such behavior changes the open-loop gain threshold at which the system crosses the imaginary axis. The critical gain at which the top locus crosses is 0.0174, 0.0086 and 0.0060 for delay times of 30 min, 60 min and 90 min, respectively. These results imply that a system with a reduced time delay can handle a larger open-loop gain without losing stability, indicating a more stable and robust system. Therefore, a decrease of the delay can support a stronger SOC management system for DCFR. The investigation of the delay time will also be conducted in a separate case study in the following section.
4 Simulation setup
4.1 Power imbalance estimation
The power system model generates the system frequency with power imbalance as input; however, both generation and demand data are difficult to acquire with high time resolution due to the lack of second-based measurements. As a result, it is necessary to employ estimation methods to derive the model input. System frequency, as the product of system imbalance, can be measured with much finer precision. Therefore, a power imbalance estimation method based on historic frequency data is illustrated in Fig. 7.
The method consists of three parts, beginning with frequency deviation calculation, where historic frequency data are compared with the nominal value. The frequency deviation is subsequently fed into the power deficit and frequency regulation blocks. The power deficit can be defined as the residual power imbalance left after the contributions from frequency response techniques, and it can be expressed as in Eq. (17). This step reversely derives the imbalance level of the system after being regulated.
(17) ΔP_system(s) = Δf(s) · (2Hs + D).
Frequency regulation, on the other hand, works in parallel with the power deficit calculation. This step separates the frequency contribution of PFR and SFR from ΔP_system(s), as system frequency is also the frequency response input. Eqs. (18) and (19) give the frequency response power from PFR and SFR when the deadband and service lag requirements are met. Eventually the initial power imbalance of the system can be determined by Eq. (20).
(18) ΔP_PFR(s) = Δf(s) · k_FR · (−1/R) · 1/(T_G·s + 1) · (T_D1·s + 1)/(T_D2·s + 1) · 1/(T_T·s + 1),
(19) ΔP_SFR(s) = Δf(s) · k_FR · k_SFR · (k/s) · 1/(T_G·s + 1) · (T_D1·s + 1)/(T_D2·s + 1) · 1/(T_T·s + 1),
(20) ΔP_im(s) = ΔP_system(s) + ΔP_PFR(s) + ΔP_SFR(s).
This estimation method builds upon the approach proposed in [33]. However, the previous one was applied to a small system (the Danish island of Bornholm), where only PFR is considered. In contrast, this estimation method can be applied to larger-scale systems since it involves not only PFR and SFR, but also accounts for the associated requirements such as service lag time and deadband, which are not discussed in previous research studies. The estimated power imbalance data are validated by reusing them as the model input, with the resulting system frequency data subsequently compared with the historical data.
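A rough time-domain rendering of Eqs. (17)–(20) is sketched below. The swing-equation term 2H·d(Δf)/dt + D·Δf recovers the residual deficit, and a static droop outside the deadband stands in for the full PFR transfer function; the governor/turbine lags (T_G, T_D1, T_D2, T_T) and the SFR branch are omitted, and H, D, R, k_FR and the deadband are illustrative values rather than the paper's UK settings.

```python
# Simplified per-step imbalance estimate from a frequency-deviation series
# (all parameter values are illustrative assumptions).

def estimate_imbalance(df, dt=1.0, H=4.0, D=1.0, k_fr=0.6, R=0.05, deadband=0.015):
    """Return the estimated initial power imbalance (p.u.) for each step of df."""
    est, prev = [], df[0]
    for f in df:
        dfdt = (f - prev) / dt
        p_system = 2.0 * H * dfdt + D * f                            # Eq. (17)
        p_pfr = k_fr * (-1.0 / R) * f if abs(f) > deadband else 0.0  # Eq. (18), static
        est.append(p_system + p_pfr)                                 # Eq. (20), SFR omitted
        prev = f
    return est
```

In a steady under-frequency condition the derivative term vanishes and the estimate reduces to the damping plus droop contributions, which is the behavior the full frequency-domain method generalizes.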
The comparison results show that the correlation coefficient between the two frequency datasets is 99%, demonstrating the capability of the method to replicate the behavior of a power system.
4.2 BESS configurations and DCFR service assumptions
One of the main objectives of this paper is to investigate the impact of different configurations of the battery and identify the corresponding challenges when providing DCFR. Therefore, the BESS used for the simulation is considered as an aggregated single large-scale BESS of all distributed participants, which allows for simplified modeling and analysis by considering the collective behavior. The DCFR contracted quantity of the BESS is set at 1 GW as planned by NGET. However, the DCFR service also requires the BESS to reserve a certain capacity for SOC management, hence the rated power of the BESS will be greater depending on the n ratio. The ideal and initial SOC level is set at 50% as it offers the maximum room for both charging and discharging. At the same time, since a large variety of DCFR options can be selected, which adds complexity to the analysis, the following assumptions are made throughout the simulations:
• There is no response delay as it is very small.
• DCFR service is provided as a bundled service with 1 GW in both directions.
• The ramp rate limit of baselines is assumed to be constant throughout each SP on a second-by-second basis, therefore the maximum ramp rate is calculated in Eq. (21) below. (21) (dP_SOC(t)/dt)_max = P_SOC^r(t) · 5% / t_SP.
5 Investigation and results discussion
In this section, four BESS configuration parameters that influence SOC management are described and investigated: C-rate, SOC management range, n ratio and SOC management target. The four cases are shown in Table 3, with several scenarios assigned to each case. Moreover, two additional case studies are also conducted to discuss the impact of delay time and to compare DCFR with EFR.
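The sizing relations used above can be written as two one-line helpers: the ramp cap of Eq. (21) (assuming t_SP = 1800 s, i.e. half-hour settlement periods) and the capacity implied by a chosen C-rate; the latter is an assumption consistent with the 1375 MWh quoted for C = 0.8 in case study 2.

```python
# Eq. (21) ramp cap and an assumed C-rate sizing rule (illustrative helpers).

def max_ramp_rate(p_soc_r, t_sp=1800.0):
    """Eq. (21): maximum baseline ramp rate [MW/s], 5% of P_SOC^r per SP."""
    return p_soc_r * 0.05 / t_sp

def rated_capacity(p_fr_r, p_soc_r, c_rate):
    """Battery capacity Q_r [MWh] sized so total rated power = C * Q_r."""
    return (p_fr_r + p_soc_r) / c_rate

print(rated_capacity(1000.0, 100.0, 0.8))  # 1375.0 MWh
```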
Simulations are conducted for a 2-day period between 22nd and 24th February 2020 using the GB power system. This period is chosen because of its high frequency volatility and numerous under-frequency events. The corresponding historical frequency data can be obtained from [34]. Since the SOC level is one of the investigation focuses, the associated battery degradation is also considered. Battery cycle degradation, which is the degradation caused by active charging and discharging, is assumed to be proportional to full equivalent cycles (FEC) [35]. FEC refers to the number of complete charge and discharge cycles during operation and is calculated in Eq. (22) below, where E_im(t) and E_ex(t) are the energy import and export of the BESS.
(22) FEC(t) = Σ(E_im(t) + E_ex(t)) / (2·Q_r).
5.1 Case study 1: C-rate investigation
Different C-rates of the BESS are compared in the simulation, with the corresponding settings in Table 4 and results in Fig. 9. According to the previous analysis in Section 2, C = 1.76 is the maximum C-rate when n = 10 and the SOC management range is 40%–60%. P_FR^r and P_SOC^r remain constant with a fixed n ratio, while Q_r varies based on the specific C-rate. SOC exhibits significant oscillations at the highest C-rate due to the 90-minute delay in SOC management actions. Fig. 8 illustrates the energy and SOC evolution for the C = 1.76 scenario during a 12-hour period on 22nd February. The energy plots include energy provision for frequency response, SOC management, and the overall combined result. The SOC plots display the dynamic SOC variation and the settlement period SOC (the value measured at the start of each SP). At 12:30, indicating the start of the 25th SP, a low-SOC situation is detected. A baseline for SOC management is subsequently submitted at 13:00, and implemented two SPs later during the 28th SP at 14:00, causing a 90-minute delay. It is important to emphasize that SOC measurement is continuous.
If SOC falls outside the predefined range without bouncing back, detection occurs at the start of each SP within the next 90-minute period. In Fig. 8, SPs 25, 26, 27, and 28 all detect low SOC, leading to the corresponding baselines being implemented with a 90-minute delay in SPs 28, 29, 30, and 31 respectively. Fig. 8 primarily highlights the first SP that identifies SOC outside the range as an example. Consequently, the delayed baselines result in excessive SOC management, causing SOC to breach the upper limit. This, in turn, triggers additional baseline submissions to reduce SOC. However, such baselines for SOC reduction are also subject to delays, leading SOC to drop below the lower limit, initiating an oscillating cycle. Therefore, decreasing the time delay of SOC management actions can significantly mitigate the oscillation. In comparison to the C = 1.76 scenario, the other two scenarios show no SOC oscillation. Lower C-rates indicate a larger battery capacity, as shown in Table 4, allowing for greater energy storage to reduce the likelihood of triggering SOC management. This behavior is also evident in Fig. 9(a) and (c) in terms of power output, where higher C-rates activate power for SOC management more frequently. This oscillation also supports the stability analysis findings in Section 3.3, where a lower C-rate reduces the need for frequent SOC management. In the cases of no SOC management, the SOC feedback loop component is removed from the BESS model in Eq. (16), placing the root locus in the left half of the s-plane and increasing stability. The overuse of SOC management power also leads to high energy throughput and extreme SOC fluctuations, as shown in Table 6. The C = 1.76 scenario exhibits four times greater energy throughput and a wider SOC span compared to the other scenarios. This energy throughput significantly impacts battery degradation, measured by FEC. The C = 1.76 scenario shows FEC values over 6 times and 16 times higher than the C = 1.0 and C = 0.5 scenarios, respectively. Considering the proportional relationship between FEC and cycle degradation, the battery with C = 1.76 experiences cycle degradation at rates 6 times and 16 times faster. In general, the negative effects of high C-rates caused by SOC management delays necessitate a larger-capacity BESS for DCFR service. However, larger battery capacity implies increased investment and maintenance costs, reducing financial benefits. Hence, it is crucial to determine an appropriate C-rate that ensures overall satisfaction.
5.2 Case study 2: SOC management range investigation
The impact of SOC management also varies with different preset ranges. Fig. 10 illustrates the performance of three scenarios with different ranges, from narrow to wide. For all scenarios, the C-rate is set to 0.8 due to the configuration constraints from Eqs. (7) and (8). P_FR^r and P_SOC^r remain constant at 1000 MW and 100 MW, respectively, resulting in a battery capacity Q_r of 1375 MWh. A wider range reduces the need to trigger SOC management, thereby mitigating the impact of SOC management delays. In Fig. 10(a) and (b), the baseline is submitted only once, for the 40%–60% range scenario, due to the larger capacity. Importantly, SOC fluctuations remain within the wider range for the other two scenarios, thus avoiding the need for SOC management. As a result, the behaviors of the BESS in these two wider-range scenarios are identical, with overlapping curves. Similar to case study 1, the stability of the BESS model is improved with a wider range due to the removal of the SOC feedback loop component from the transfer function in Eq. (16). In this case study, the FEC-based battery degradation in the 40%–60% range scenario is only 1.35 times greater than in the other two scenarios, as SOC management is triggered only once.
However, such a small difference is influenced by the low C-rate value chosen specifically for investigating the SOC management range. In practical applications, higher C-rate settings are often preferred, highlighting the need for proper SOC management to ensure a constant supply of DCFR provision. Therefore, selecting a suitable range is vital to accommodate the overall battery configuration.
5.3 Case study 3: SOC management ratio investigation
Since SOC management significantly impacts the power output, this case study investigates the impact of the SOC management ratio. In Section 2, n is set to 10, with the maximum P_SOC^r required to be at least 10% of P_FR^r. However, reducing P_SOC^r further can alleviate the impact of the SOC management delay, while keeping the C-rate and SOC management range at a challenging level. Table 5 shows the selected scenarios, with n increased to 15 and 20. As P_FR^r remains constant at 1000 MW, P_SOC^r varies with n, along with the associated battery capacity Q_r. Fig. 11 illustrates the battery behavior of the three scenarios. Both the n = 10 and n = 15 scenarios exhibit SOC oscillations. However, compared to the n = 10 scenario, the oscillation is mitigated when n is higher (n = 15) due to less power being allocated for SOC management. This can be observed in Fig. 11(a), where the first baseline is activated before the 16th hour; a higher n ratio corresponds to less power for SOC management. Decreased SOC management power indicates a reduced negative impact caused by the 90-minute time delay. For BESS system stability, a higher n ratio implies a smaller value of k_OB in Eq. (16), which results in a smaller open-loop gain of the root locus, thereby moving the relevant closed-loop poles to the left of the imaginary axis and improving stability. Table 6 shows that the n = 10 scenario results in an energy throughput 1.7 times and 3.7 times higher than the other two scenarios, with no improvement in frequency quality.
It also correspondingly translates to battery FEC values 1.66 times and 3.51 times higher. Therefore, optimizing the SOC management power ratio when providing DCFR improves the service quality and sustains battery lifetime.
5.4 Case study 4: SOC management target investigation
Similar to case study 3, reducing the SOC management power can be achieved by adjusting the SOC management target rather than adhering to the minimum energy requirement. In this approach, a factor m is introduced for the calculation. The SOC management target refers to the desired SOC level to which SOC is recovered at the end of the SP, determining the baseline power level. The energy recovery rules of DCFR aim to recover at least 20% of the REV per SP, serving as the benchmark for the case studies. However, this energy requirement is excessively high during normal frequency situations due to the 90-minute delay, as indicated by the previous results. Therefore, alternative SOC management targets are proposed for investigation, including the quarter line, middle line, and edge line. These targets aim for specific levels within the SOC management range. For instance, the middle line sets the SOC level at the midpoint between the SOC lower limit and the initial SOC level in low SOC situations, or between the SOC upper limit and the initial SOC level in high SOC situations. For a BESS with a 40%–60% SOC management range, this corresponds to 45% and 55%, respectively, with m = 0.5. The quarter line represents the SOC level set at 25% of the distance between the range limit and the initial SOC level, closer to the limit. In the above example, the targets change to 42.5% and 57.5%, respectively, with m = 0.25. Finally, the edge line requires the SOC to be recovered exactly at the range limit, resulting in m = 0. Thus, m ∈ [0, 1], and a smaller m value indicates a lower power requirement. Eq. (23) demonstrates how the SOC management target is calculated for a specific m factor:
(23a) SOC_t^l = (SOC_n − SOC_l) · m + SOC_l,
(23b) SOC_t^h = SOC_h − (SOC_h − SOC_n) · m,
where SOC_l and SOC_h are the lower and upper limits of the SOC management range, SOC_t^l and SOC_t^h denote the SOC management targets in low SOC and high SOC situations respectively, and SOC_n represents the ideal SOC set at 50%. It should be noted that the minimum energy recovery target required for the DCFR service is considered as the maximum baseline power in this case; therefore, any calculated power exceeding this level will be capped at it. Other parameters are set to the most challenging values, with C = 1.76, n = 10, and the SOC management range between 40% and 60%. Fig. 12 and Table 6 compare the performance of the four scenarios. The minimum energy scenario, serving as the benchmark, exhibits the strongest SOC management action, followed by the middle line scenario. Neither the quarter line nor the edge line scenario triggers SOC oscillation due to their minimal demand for SOC management. This phenomenon also supports the stability analysis of BESS SOC management: system stability is improved with adjusted SOC management targets that reduce the value of k_OB in Eq. (16), placing the corresponding closed-loop characteristic roots on the left side of the imaginary axis. From a statistical perspective, the minimum energy scenario results in energy throughput and FEC values 1.06 times, 4.02 times, and 3.69 times higher than the middle line, quarter line, and edge line scenarios, respectively. Thus, effectively adjusting SOC management targets helps mitigate the negative effects caused by the 90-minute time delay.
5.5 Case study 5: Operational baseline delay time discussion
In this case study, time delays of 90 min, 60 min, and 30 min, as analyzed in the stability study, are investigated.
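Before moving on, the target definitions of Eq. (23) can be sketched in a few lines, using the range limits and ideal SOC from the worked example above:

```python
# Eq. (23): SOC recovery targets for a factor m in [0, 1]
# (m = 0.5 middle line, m = 0.25 quarter line, m = 0 edge line).

def soc_targets(m, soc_l=40.0, soc_h=60.0, soc_n=50.0):
    """Return (low-SOC target, high-SOC target) in percent."""
    soc_t_l = (soc_n - soc_l) * m + soc_l    # Eq. (23a)
    soc_t_h = soc_h - (soc_h - soc_n) * m    # Eq. (23b)
    return soc_t_l, soc_t_h

print(soc_targets(0.5))   # (45.0, 55.0), middle line
print(soc_targets(0.25))  # (42.5, 57.5), quarter line
```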
Additionally, a 15-minute delay is also included to further explore the impact of shorter delays. During the simulation, the BESS is set up with a demanding configuration, with C = 1.76, n = 10, the SOC management range set as 40%–60% and the energy recovery aligned with the minimum energy scenario. When SOC management is required, a reduced delay time helps the battery take swift action, hence mitigating the issue of SOC oscillation, as demonstrated in Fig. 13. Interestingly, the extent of power fluctuation in case study 5 is similar among the different delay time scenarios, which differs from the results in case studies 3 and 4, as shown in Fig. 13(a). This indicates that the SOC management request via the submitted operational baseline remains constant regardless of the delay time; however, the decreased response time helps prevent excessive SOC management, thereby reducing the need for subsequent requests to correct the SOC level. The SOC plot in Fig. 13(b) provides further validation that the SOC oscillation issue is significantly tackled when the delay time is reduced to 60 min, and further mitigated at 30 min and 15 min. This demonstrates the impact of the operational baseline delay time on the behavior of the BESS.
5.6 Case study 6: DCFR vs EFR
As the predecessor of the DCFR service [14], EFR was developed into two types, service 1 and service 2, where service 1 focuses on SOC management, while service 2 prioritizes frequency regulation [13]. This case study compares DCFR with EFR to better understand its characteristics. The BESS configurations for both services are presented in Table 7, with the DCFR settings aligned to the required n ratio and SOC management target. A narrow SOC range is selected to maximize the differences, and a C-rate of 1.0 is chosen to mitigate the negative impact of the baseline delay. The maximum power for frequency regulation is set at 1000 MW. Fig. 14(a) shows the frequency comparison between the DCFR and EFR services.
All three services reduce the frequency deviation compared to the base case frequency, which is the simulated system frequency without BESS integration. Meanwhile, the most stable SOC curve is achieved by DCFR, according to Fig. 14(b). Additionally, DCFR improves frequency quality by increasing the number of frequency data points within the deadband, as shown in Fig. 15. EFR service 1 and service 2 sacrifice some frequency performance for SOC management, resulting in fewer frequency data points within the deadband. In contrast, DCFR implements an alternative SOC management mechanism and enhances the frequency quality. On average, the DCFR service leads to a 7.93% higher percentage of frequency data within the deadband compared to the two EFR services. This indicates reduced activation of PFR and SFR, which is advantageous for financial benefits. Table 8 provides statistical comparisons among the services. Although DCFR has a slightly wider frequency span than EFR, its SOC span is as narrow as that of EFR service 1. Moreover, the DCFR energy throughput is only 43.5% and 24.4% of that of the two EFR services, corresponding to 42.9% and 22.1% of their FEC, respectively. This shows that DCFR outperforms EFR by utilizing less energy while achieving similar frequency outcomes and superior SOC and FEC outcomes. It should be noted that the battery configurations can be further optimized to fully unlock the potential of DCFR. Since a significant portion of the DCFR response curve is designed for frequencies outside the knee-point range, DCFR may play a more crucial role in large frequency deviations in contingency situations. Moreover, the collective impact of both DCFR and other future ancillary services is also worth investigating, particularly via experimental validation; this will be explored in future research.
6 Conclusion
This paper provides an assessment and analysis of the DCFR frequency response service, including the response curve, the SOC management rules, and the associated unit configuration constraints.
A methodology is presented to investigate the performance of a DCFR-based BESS in a power system, alongside a stability analysis focusing on the impact of the SOC management mechanism. The stability assessment is conducted via a root locus study, where the theoretical findings show that large open-loop gains can destabilize the BESS SOC management system due to the long delay time. The results are supported by a comparative stability study covering different time delays, which demonstrates that a reduced delay enables the SOC management system to handle larger open-loop gains without losing stability. Furthermore, four BESS configuration parameters that are relevant to the value of such gains in the BESS SOC management system are identified via dynamic simulations of the service performance: C-rate, SOC management range, n ratio and SOC management target. A power imbalance estimation method is utilized to obtain the simulation input for the integrated power system model. The simulation results show that DCFR can improve frequency quality, but SOC management affects the power output significantly due to its 90-minute time delay, and it might cause SOC oscillation if the battery is configured improperly. A low C-rate and a wide SOC management range are less likely to cause SOC oscillation and result in a low FEC, as they alter the open-loop gain of the transfer function of the SOC management system. Two other parameters are also analyzed: the SOC management ratio and the SOC management target. A low SOC management ratio generates SOC oscillation and causes a high FEC. Meanwhile, the impact can be alleviated by re-configuring the SOC management target: the minimum energy recovery requirement ends up with more serious SOC fluctuation and a higher FEC than the adjusted SOC management target scenarios. Both methods affect the stability of the SOC management system by changing the value of the open-loop gain, hence avoiding oscillation.
A case study is dedicated to the time delay; the findings show that the SOC oscillation can be mitigated with a reduced time delay even when the BESS is set up with a demanding configuration. Finally, the DCFR service is also compared with EFR; the results show that it utilizes less than 50% of the energy throughput and achieves the most stable SOC curve. DCFR also outperforms EFR by leading the share of frequency data inside the deadband by 7.93% on average.
CRediT authorship contribution statement
Xihai Cao: Writing – original draft, Visualization, Software, Methodology, Investigation, Formal analysis, Conceptualization. Jan Engelhardt: Writing – review & editing, Supervision. Charalampos Ziras: Writing – review & editing, Supervision. Mattia Marinelli: Writing – review & editing, Supervision. Nan Zhao: Writing – review & editing, Supervision, Resources, Methodology, Investigation, Conceptualization.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
[
"DEPARTMENTFORENERGYSECURITYANDNETZERO",
"LI",
"ENGELHARDT",
"LI",
"KNAP",
"FERNANDEZMUNOZ",
"THINGVAD",
"FLEER",
"KAPOOR",
"SANCHEZ",
"GUNDOGDU",
"CANEVESE",
"CAO",
"NATIONALGRIDPLC",
"NEDD",
"HOMAN",
"HUTCHINSON",
"ABDULKARIM",
"SASOMPHOLSAWAT",
"OCHOAEGUILEGOR",
"FAN",
"SOMMERVILLE",
"ZHANG",
"KUNDUR",
"UKGOVERNMENTSDEPARTMENTFORBUSINESSENERGYINDUSTRIALSTRATEGY",
"NATIONALGRIDPLC",
"NATIONALGRIDPLC",
"CHENG",
"NATIONALGRIDPLC",
"MARINELLI",
"THINGVAD"
] |
4fbc1a5e34e441079b3555f7b7bfcdb2_Genetic variations from successive whole genome sequencing during COVID-19 treatment in five individ_10.1016_j.nmni.2022.100950.xml
|
Genetic variations from successive whole genome sequencing during COVID-19 treatment in five individuals
|
[
"Hemachudha, P.",
"Petcharat, S.",
"Ampoot, W.",
"Ponpinit, T.",
"Paitoonpong, L.",
"Hemachudha, T."
] |
We report multiple single nucleotide polymorphisms detected at different time intervals during the treatment of COVID-19 in five individuals. Gene sequencing showed a mutation within ORF1b at position P314L. Mutation at this point has been shown to impose structural remodelling that increases the affinity for remdesivir binding and may also affect the binding affinity of favipiravir.
|
In 2020, the first two outbreaks in Thailand were effectively controlled with isolation and contact restrictions. In our previous experience at our COVID-19 facility, patients responded to favipiravir, a purine nucleotide analogue, with only a small fraction progressing to severe pneumonia. In the third and current outbreaks, owing to several factors including the new delta variant, a lack of active surveillance and inadequate vaccination, the number of infected patients grew out of control. The situation is difficult, with more healthcare workers becoming infected and people dying in their homes and on the street. COVID-19 is caused by an enveloped positive-sense single-stranded RNA virus, with multiple variants emerging and circulating around the world [ 1 ]. The viral genome encodes the spike (S) protein responsible for host cell viral entry, along with the envelope (E) and membrane (M) proteins forming the viral envelope. The nucleocapsid (N) protein holds the viral RNA genome in place, and non-structural open reading frames (ORFs) govern RNA transcription via RNA polymerase [ 2 ]. ORF1ab occupies the majority of the ORF region and is the target for binding and inhibition by antivirals such as favipiravir and remdesivir. Mutation within the drug binding site may increase or reduce the binding affinity of an antiviral, and its effect on inhibiting viral replication could theoretically be altered [ 3 ]. In this outbreak, we observed many more patients progressing to severe pneumonia, with successive nasopharyngeal and throat (NP and T) swabs showing a reduced cycle threshold (CT) on RT-PCR, reflecting a higher viral load after five days of favipiravir. Patients who progress while on favipiravir are re-swabbed, and if the CT remains low, they are switched to remdesivir, a ribonucleotide analogue, with an increase in dexamethasone. As a result, many patients showed improvement in pneumonia with higher CT values. This approach is necessary due to the limited supply of remdesivir in Thailand.
To explain this differing response to antivirals, we examined nasopharyngeal and throat swab samples from patients with worsening COVID-19 pneumonia, defined as a significant increase in oxygen requirement from nasal prongs to high-flow nasal cannula (HFNC) despite receiving favipiravir. Initial and consecutive swabs of five patients whose CT decreased despite at least 5 days of favipiravir underwent SARS-CoV-2 whole genome sequencing library construction using the QIAseq SARS-CoV-2 Primer Panel for next-generation sequencing ( Table 1 ). In this evaluation, we found single nucleotide polymorphisms (SNPs) in the same individual sampled at different time intervals during active treatment of COVID-19 with favipiravir. Two consecutive samples were taken from four patients, with the first sample taken for diagnosis and the second taken on clinical deterioration. Three consecutive samples were taken from one patient; the additional sample was taken during clinical improvement. In the first patient, we found substitutions at positions N:S255A and S:N501Y on the first swab, which reverted to the original sequence on the second swab, with additional substitutions at N:R203K and N:S235F. In the second patient, we found substitutions at positions N:R203K and N:S235F on the first swab, with additional substitutions at ORF1a:T1001I and S:N501Y on the second swab. We found reversion of all substitutions except ORF1a:T1001I on the third swab, performed during recovery. In the third patient, we found substitutions at N:L139F, ORF1a:P1640L and ORF7a:L116F on the first swab, all of which reverted to the original sequence on the second swab. In the fourth patient, we found a substitution at ORF1b:P314L on the second swab. In the last patient, we found substitutions at N:D3Q and ORF1b:P314L on the first swab, with reversion of both substitutions and an additional substitution at N:R203K on the second swab. Mutations in COVID-19 during treatment were thus observed.
The significance of these mutations is not known, but the widespread use of antivirals may have exerted selective pressure. The analysis also shows a mutation within ORF1b at position P314L (P323L) in the initial swab sample in four out of five individuals. Mutation at this point has been shown to impose structural remodelling that increases the affinity for remdesivir binding [ 4 , 5 ]. Further mutations will occur in every replicative cycle owing to the virus's RNA structure, and the pandemic provides a platform for rapid turnover and accelerated mutation. Constant global systematic surveillance of significant mutations is urgently needed to study viral behavior and assist in gathering epidemiological data. Transparency declaration The authors report no relevant disclosures or conflict of interest. Acknowledgement We thank the emerging infectious disease team, internal medicine and nursing colleagues of King Chulalongkorn Memorial Hospital for their exceptional care for these critically ill patients.
|
[
"MAHASE",
"FINKEL",
"MOHAMMAD",
"CHAND",
"PACHETTI"
] |
b2eedf73782b42f29045e1ae09aff955_Temperature and sex shape Zika virus pathogenicity in the adult Bratcheesehead brain A Drosophila mo_10.1016_j.isci.2023.106424.xml
|
Temperature and sex shape Zika virus pathogenicity in the adult Brat^cheesehead brain: A Drosophila model for virus-associated neurological diseases
|
[
"Tafesh-Edwards, Ghada",
"Kalukin, Ananda",
"Bunnell, Dean",
"Chtarbanova, Stanislava",
"Eleftherianos, Ioannis"
] |
Severe neurological complications affecting brain growth and function have been well documented in newborn and adult patients infected by Zika virus (ZIKV), but the underlying mechanisms remain unknown. Here we use a Drosophila melanogaster mutant, cheesehead (chs), with a mutation in the brain tumor (brat) locus that exhibits both aberrant continued proliferation and progressive neurodegeneration in the adult brain. We report that temperature variability is a key driver of ZIKV pathogenesis, thereby altering host mortality and causing motor dysfunction in a sex-dependent manner. Furthermore, we show that ZIKV is largely localized to the brat^chs brain and activates the RNAi and apoptotic immune responses. Our findings establish an in vivo model to study host innate immune responses and highlight the need to evaluate neurodegenerative deficits as a potential comorbidity in ZIKV-infected adults.
|
Introduction Zika is a single-stranded positive-sense RNA virus that belongs to a mosquito-borne group of flaviviruses such as dengue, yellow fever, Japanese encephalitis, and West Nile. Flaviviruses are mainly transmitted by Aedes (subgenus Stegomyia) mosquitoes including Aedes aegypti and Aedes albopictus . Zika virus (ZIKV) emerged as a global health threat, causing widespread epidemics across the Americas with severe health outcomes in humans. 1 Clinical presentation of ZIKV infection is strongly associated with abnormal functions of neuronal cells causing severe neurological disorders such as microcephaly in newborns and Guillain-Barré syndrome in adults. 2 3 , These conditions are characterized by a progressive loss of neuronal tissue and currently remain untreatable. More specifically, research shows that ZIKV directly infects fetal neural stem cells and impairs brain growth, which induces several brain damages including early immature differentiation, apoptosis, and stem cell exhaustion. 4 5 , 6 , Recent reports of ZIKV active circulation and rising infection cases in densely populated areas of South Asia highlight the high risk of its full-scale resurgence and stress the urgency of understanding host-pathogen interactions and development of targeted treatments and control measures. 7 8 , 9 Drosophila melanogaster has been instrumental in deciphering the molecular mechanisms underlying innate immunity, primarily due to its resourcefulness and abundance of genetic tools. Our current knowledge of immunity in insects is largely owed to the fly model, with some significant genomic and functional approaches uncovering evolutionarily conserved immune mechanisms such as the stimulator of interferon genes (STING) and Toll pathway. 10 , 11 , 12 , Moreover, 13 Drosophila has been useful for the study of arbovirus infections, especially flaviviruses such as Zika, dengue, and West Nile. 
14,15,16 While not a native host, the broad conservation between Drosophila and mosquitoes as dipteran insects allows arboviruses to infect flies and provides novel insights into their pathogenesis and host immune function. As in higher organisms, pathogen infections in Drosophila initiate an inflammatory response mediated by the NF-κB signaling pathways Toll and immune deficiency (Imd), resulting in the secretion of antimicrobial peptides (AMPs) to defend the host.17,18 Even though these antibacterial and antifungal effectors have been widely studied, their roles in antiviral immunity remain largely unknown.19 Other significant humoral and cellular immunity mechanisms, such as the activation of JAK/STAT signaling, autophagy, and melanization, are similarly unclear in the context of viral infections in Drosophila.14,20 Recent studies in Drosophila indicate that ZIKV is largely restricted to the brain, where antiviral autophagy is activated to control neuronal infection.11,21,22 However, the specific molecular innate immune mechanisms that protect neurons against ZIKV infection are unclear. As in humans, neurological disorders and abnormalities in flies can result from mutations that affect cell division, as demonstrated with the Drosophila mutant cheesehead (chs), which exhibits both aberrant continued proliferation of cells and progressive neurodegeneration in the adult brain.23,24 The name "cheesehead" aptly refers to the numerous holes present in the Drosophila brain neuropil. chs is an allele of brain tumor (brat) (brat^chs), a Drosophila gene that has been investigated extensively for its role in the asymmetric cell division of neural stem cells (neuroblasts), which limits stem cell proliferation in developing brains.
25,26,27 brat encodes a conserved Tripartite Motif-NCL-1/HT2A/LIN-41 (TRIM-NHL) RNA-binding protein composed of two B-Boxes (zinc finger domains), a coiled-coil domain, which mediates protein-protein interactions including multimerization, and an NHL domain, which has several functions, including binding to mRNA to regulate translation. Notably, while most reported brat alleles carry mutations in the NHL domain, the chs mutation lies in the coiled-coil domain of the TRIM motif.24,28 The neurodegenerative characteristics of brat^chs mutants are intimately linked to neural hypertrophy, a condition relevant to neurodevelopmental and neurodegenerative disorders in humans, including those caused by ZIKV.24 Therefore, the brat^chs mutant phenotype, exhibiting progressive loss of adult brain neuropil in conjunction with massive brain overgrowth, is an ideal model system that allows simultaneous monitoring of ZIKV molecular pathogenesis strategies and host antiviral immune processes in the adult brain. Interestingly, brat^chs mutants are temperature-sensitive for neurodegeneration and survival to eclosion.24 Early studies show that brat^chs mutant flies reared and aged for 2–4 days at 18°C do not show any neurodegeneration, whereas the phenotype was partially penetrant (60% in males and 40% in females) in flies reared and aged for 2–4 days at 25°C and more penetrant (70% in males and 100% in females) in flies reared and aged for 2–4 days at 29°C. The over-proliferation phenotype is also reported to be temperature-sensitive.24 Brains of brat^chs mutant flies reared at 18°C and then shifted to 29°C post-eclosion had no tumors, while brat^chs flies reared to adulthood at 25°C or 29°C do exhibit over-proliferation. However, a significant fraction of brat^chs mutants die before eclosion at higher temperatures such as 29°C.
Furthermore, brat^chs mutants carrying the proliferating cell nuclear antigen (pcna)-GFP reporter that labels dividing cells (brat^chs; pcna-GFP), reared at 25°C, were shown to exhibit more severe neurodegeneration and cell proliferation phenotypes than brat^chs flies lacking the reporter.24 Based on this knowledge, our study examines sex and temperature differences to establish how the brat^chs mutation contributes to ZIKV infection in relation to these two factors. Additionally, studies suggest that ZIKV replication depends on temperature changes in the host environment, which further calls for a deeper understanding of the molecular immune responses triggered by these temperature changes.29 Here we use brat^chs mutants to investigate the tissue-specific responses required to regulate innate defenses against ZIKV, thus providing novel insights into the neurological phenotypes associated with this infectious disease. We show that, in comparison to controls, ZIKV replicates at higher rates in adult brat^chs mutants and causes motor dysfunction in a sex- and temperature-dependent manner, making it imperative to continue investigating the different responses of female and male flies. We also show that ZIKV infection triggers the RNAi pathway and apoptosis signaling in the brain of brat^chs mutants. These important findings add to the very limited literature on ZIKV pathogenesis and the role of RNA-binding proteins such as TRIM-NHL proteins, helping to identify potential therapeutic targets that may prevent or at least minimize the consequences of disease in its early phases and in adulthood.

Results

Temperature alters the lifespan of brat^chs mutants

Vector-borne flaviviruses including Zika pose a major threat to human health and well-being worldwide. For successful transmission, ZIKV must efficiently enter host cells, propagate within them, and survive the extrinsic incubation period (EIP).
30,31 The EIP is an important factor in determining viral transmission potential, as it indicates how long it takes for a vector to become infectious following exposure to the virus.31 Because this is a temporal process, a vector's lifespan is strongly linked to the EIP and, consequently, to the virus's transmission potential.32 Environmental factors such as temperature influence the aforementioned dynamics of vector-borne disease transmission, as well as vector competence and mortality.29,33,34 Even though many studies have documented that variation in environmental temperature can markedly shape various aspects of virus pathogenicity and vector physiology, the extent to which temperature impacts transmission directly, via effects on pathogen biology, or indirectly, via effects on vector responses to infection, remains largely unknown.35,36 To this end, we set out to determine how temperature changes influence the lifespan of brat^chs mutants, which is relevant for establishing this fly line as a model to study ZIKV and defines any biological constraints on transmission. A time course revealed an average life expectancy of 65 days for both uninfected female and male brat^chs mutants at 25°C (Figure 1A), whereas flies maintained at 29°C succumbed by 25 days (Figure 1B). In addition, while female and male pcna-GFP flies had a life expectancy similar to that of the corresponding mutants at 25°C (Figure 1A), the same controls exhibited a shorter lifespan (45 days) at 29°C (Figure 1B). This dramatic decrease across all lines at 29°C indicates a temperature-dependent mortality that will directly impact successful ZIKV replication and transmission.

Zika virus replicates in adult brat^chs mutants in a sex- and temperature-dependent manner

Having established the lifespan of uninfected brat^chs mutants, we next determined the flies' survival following ZIKV infection at 25°C and 29°C.
We found that challenge with ZIKV at 25°C failed to reduce survival in brat^chs females and males, which showed rates similar to those of PBS- and ZIKV-injected controls (Figures 2A and 2B). Interestingly, survival of infected brat^chs females at 29°C was significantly reduced compared to their PBS controls (Figure 2C), whereas infected brat^chs males at the same temperature showed no significant differences compared to uninfected and infected controls (Figure 2D). We then estimated ZIKV copy numbers in infected female and male brat^chs flies at both temperatures compared with their respective pcna-GFP controls at 4 days post injection by amplifying NS5 sequences, the largest and most crucial product encoded by the ZIKV RNA.37,38 Both infected female and male brat^chs mutants showed a significant increase in fold change at 25°C relative to infected pcna-GFP controls, with female brat^chs flies exhibiting higher NS5 levels (3-fold increase) than males (Figures 3A and 3B). Female and male mutant flies maintained at 29°C showed similar results (Figures 3C and 3D). However, both sexes exhibited strongly elevated ZIKV levels, with a doubled fold increase compared to flies kept at 25°C, indicating higher ZIKV replication at 29°C. Together, these results show that temperature and sex differences alter ZIKV infection outcomes, confirming them as key parameters in disease and immunity studies of this infection.

Zika virus targets the brain of brat^chs mutants

To further characterize the ZIKV-induced pathology, we systemically challenged brat^chs mutants and their controls with ZIKV and monitored the infection in the head compared to the body of the flies. pcna-GFP flies showed higher NS5 levels in heads than in bodies, with temperature-dependent replication patterns (Figures 4A–4D) similar to those observed in whole flies at both 25°C and 29°C (Figures 3A–3D).
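Relative fold changes of the kind reported above are conventionally derived from qPCR Ct values with the 2^-ΔΔCt method. The sketch below illustrates that calculation, assuming normalization of NS5 to the RpL32 reference gene listed in the key resources table; the Ct values are hypothetical and not taken from this study.

```python
# 2^-ddCt relative quantification: a minimal sketch of how qPCR fold
# changes (e.g., viral NS5 in mutants vs. controls) are computed.
# RpL32 normalization is an assumption based on the primer list.

def delta_delta_ct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Return the fold change of the target gene relative to a control
    condition, each normalized to a reference gene."""
    d_ct_sample = ct_target - ct_ref            # normalize sample to RpL32
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control to RpL32
    dd_ct = d_ct_sample - d_ct_control           # compare sample vs. control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: infected mutant vs. infected control
fold = delta_delta_ct(ct_target=22.0, ct_ref=18.0,
                      ct_target_ctrl=23.6, ct_ref_ctrl=18.0)
print(round(fold, 2))  # 3.03 (~3-fold, comparable to the increase reported for females at 25°C)
```

A doubling of the fold change, as observed at 29°C, corresponds to a ΔΔCt shift of one additional cycle.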
At 25°C, ZIKV load in the heads of female, but not male, brat^chs flies was substantially higher than in the bodies, indicating that ZIKV infects and replicates in the female brat^chs brain (Figures 4A and 4B). We also observed a significant increase in both female and male brat^chs brains compared to their controls at 29°C (Figures 4C and 4D). Most importantly, ZIKV copy numbers were strongly elevated in the heads of both female and male brat^chs mutants compared to their pcna-GFP controls at 25°C and 29°C, suggesting that ZIKV directly infects brat^chs brains, and possibly neural stem cells, regardless of the temperature changes. To address this possibility, we next sought to determine whether ZIKV antigen co-localizes with cells in the brat^chs brains that are positive for pcna-GFP. The pcna-GFP reporter transgene is activated in mitotically active cells,39 and its expression in brat^chs mutants was reported to mark aberrantly proliferating cells in the adult brain, which are not found in controls.24 Immunostaining using the anti-flavivirus envelope protein antibody 4G2 revealed the presence of ZIKV in the brains of both pcna-GFP controls and brat^chs mutants (Figure 5). PBS-injected brains did not display marked ZIKV staining (Figure 5A). Consistent with the gene expression analysis, we observed that ZIKV staining was more widespread in the brains of brat^chs mutants (1.33% stained area) than in pcna-GFP controls (1.15% stained area), based on immunofluorescence quantification in Fiji ImageJ2. In PBS-injected controls, the background stained areas were 0.94% and 0.93% for the two genotypes, respectively (Figure 5B). In brat^chs mutants we find some co-labeling of GFP-positive cells and ZIKV (Figures 6 and 7); however, the majority of GFP-positive cells are not ZIKV-positive.
We found that in both controls and brat^chs mutants, ZIKV does co-localize with Repo (Reversed-polarity) and with Elav (Embryonic lethal, abnormal vision), which are glial and neuronal cell markers, respectively. Yet, most of the ZIKV labeling did not co-localize with the examined markers (Figures 6 and 7). We note, however, that Repo and Elav label transcription factors with nuclear localization, whereas the pcna-GFP reporter is not exclusively nuclear.

Zika virus induces severe motor dysfunction in brat^chs mutants

Drosophila has been widely used as a model system to study neurodegenerative disorders such as Alzheimer's and Parkinson's diseases.40,41 In particular, locomotion, the major output of the nervous system, is used to identify and study molecules or genes involved in these disorders. Consistently, locomotor impairment is a common phenotype of neurodegeneration that can be characterized in Drosophila with simple climbing assays.42,43,44,45 These assays take advantage of Drosophila's natural tendency to climb upward against gravity, a robust and reproducible behavior known as negative geotaxis. They provide a reliable, quantitative, cost-effective, and general tool for measuring locomotor behaviors of wild-type and mutant flies in detail and can reveal subtle or severe motor defects, which are crucial to understanding the manifestation of locomotor disorders. Because ZIKV is closely associated with neurodegeneration, we performed a climbing assay to determine the behavioral phenotypes triggered by the virus in the brat^chs mutants. Infected pcna-GFP flies showed longer climbing times compared to uninfected controls at both 25°C and 29°C (Figure 8).
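As a rough illustration of how negative-geotaxis data of this kind can be scored, the sketch below computes the percentage of flies crossing a threshold line within a cutoff time, plus the mean climbing time of the successful climbers. The cutoff and all values are illustrative assumptions, not the authors' exact protocol.

```python
# Negative-geotaxis scoring sketch: % climbers and mean climbing time.
# The 10 s cutoff and the timings below are hypothetical examples.

def score_climbing(times_to_threshold, cutoff_s=10.0):
    """times_to_threshold: seconds each fly took to pass the threshold
    line (None = never passed). Returns (% climbers, mean climber time)."""
    climbers = [t for t in times_to_threshold
                if t is not None and t <= cutoff_s]
    pct = 100.0 * len(climbers) / len(times_to_threshold)
    mean_t = sum(climbers) / len(climbers) if climbers else float("nan")
    return pct, mean_t

# Hypothetical cohort of 10 flies after a tap-down
pct, mean_t = score_climbing(
    [2.1, 3.5, None, 4.0, None, 5.2, None, 6.1, None, None])
print(pct)  # 50.0 (percent of flies that climbed within the cutoff)
```

Comparing these two readouts between infected and uninfected groups captures both the "fraction able to climb" and "climbing speed" measures reported in Figure 8.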
In addition, we found that climbing ability and speed were severely affected in infected female brat^chs flies at 25°C, with only 30% of these flies able to climb compared to 55% of uninfected controls and 70% of infected controls (Figures 8A and 8B). Infected brat^chs males kept at 25°C also showed lower climbing ability and speed compared to infected controls (Figures 8C and 8D). In addition, both brat^chs female and male flies kept at 29°C displayed similar locomotive defects compared to their respective controls, reflecting severe locomotor impairment as a disease outcome (Figures 8E–8H). Collectively, these results suggest that the detection of locomotion defects may contribute to understanding symptomatic behaviors associated with neurodegenerative pathology using the brat^chs model.

Zika virus infection activates the antiviral RNAi pathway in the brain of brat^chs mutants

The canonical RNA interference (RNAi) pathway is one of the major evolutionarily conserved defense mechanisms against arboviral infections in insect hosts.20,46,47 In Drosophila, the RNAi pathway is initiated by the enzyme Dicer-2, which acts as a pattern recognition receptor that detects virus-derived double-stranded RNA (dsRNA) and generates small interfering RNAs (siRNAs). These viral siRNAs are subsequently loaded onto an RNAi-induced silencing complex (RISC) with Argonaute-2 (Ago-2) as a central molecule. The complex then identifies complementary endogenous sequences, eventually leading to the cleavage and degradation of viral RNA after specific siRNA-mRNA hybridization.48 To examine whether ZIKV infection stimulates this antiviral response in brat^chs mutants, we determined the transcript levels of the RNAi machinery components Dicer-2 and Ago-2 in infected female and male flies.
We found that the heads of brat^chs females kept at both 25°C and 29°C showed significantly upregulated Dicer-2 and Ago-2 expression levels compared to their bodies, consistent with our finding that ZIKV targets the brain (Figures 9A and 9C). This effect was also observed in brat^chs female heads compared to pcna-GFP heads, indicating that the virus and the brain tumor gene mutation possibly enhance the host immune response (Figures 9A and 9C). In contrast, brat^chs male heads showed significantly higher expression only of Ago-2 at 25°C compared to their bodies and to infected control heads (Figure 9B). This was not the case in infected male mutants kept at 29°C, as only Dicer-2 was significantly elevated in brat^chs male heads compared to pcna-GFP heads, further confirming this effect as an outcome of the brat^chs gene mutation (Figure 9D). Together, these results show that the RNAi pathway is activated against ZIKV infection in the brain in a sex-dependent manner, with temperature effects evident only in males. In addition, ZIKV has a synergistic effect that enhances activation of the host immune response against the brain tumor mutation and its resulting defects.

Zika virus infection triggers apoptosis in the brain of brat^chs mutants

ZIKV is known to cause severe congenital and autoimmune neurological complications such as microcephaly in infants and Guillain-Barré syndrome in adults.49,50,51 ZIKV infection is especially linked to apoptotic cell death and cell-cycle disruption, providing a plausible mechanism for cellular stress responses and the resulting neurological defects.52,53 More specifically, ZIKV has been shown to reduce neural progenitor cell proliferation, induce their premature differentiation, and activate apoptosis to target them along with immature neurons.
54,55 Given this, the neural over-proliferation and neurodegeneration caused by the brat^chs mutation in the adult Drosophila brain provide an excellent model to investigate the mechanisms underlying both conditions and possibly to develop therapeutic strategies and more targeted treatments for ZIKV neurologic disorders. To test whether ZIKV challenge activates programmed cell death in the brat^chs brain, we estimated the transcriptional activation of the three Drosophila pro-apoptotic genes hid, grim, and reaper in the heads and bodies of mutants via RT-qPCR. We found that grim expression was significantly increased in the heads of infected brat^chs female and male mutants, at both 25°C and 29°C, compared to their bodies and to infected controls (Figures 10A–10D). Notably, grim was also significantly upregulated in the heads of pcna-GFP controls compared to their bodies, confirming that ZIKV infection activates apoptosis in the adult Drosophila brain. We observed no significant differences in the expression levels of hid and reaper among any of the treatment groups and conditions, highlighting a mechanism through which grim induces apoptosis in response to ZIKV infection (Figures 10A–10D).

Discussion

Here we examine ZIKV pathogenesis in the presence of cheesehead, a mutation of brain tumor in Drosophila, and establish brat^chs flies as a tractable experimental system to investigate the effects of ZIKV on immune signaling and function in the adult Drosophila brain. Using this particular Drosophila model offers advantageous insight into neurodegenerative diseases due to brat's role as an RNA-binding protein of the TRIM-NHL family. During the asymmetric division of Drosophila neuroblasts, Brat localizes at the basal cortex via direct interaction with the scaffolding protein Miranda and segregates into the basal ganglion mother cells after cell division.
The cheesehead mutation in this model lies in the coiled-coil domain, which acts as a scaffold for regulatory protein complexes, not in the RNA-binding (NHL) domain, which binds mRNA and other RNA regulatory proteins, including Miranda.24,56 This in turn represents a previously unknown role for brat that could reveal a new pathway relevant to human neurodegenerative diseases such as those caused by Zika, with possible implications for immunity against RNA viruses. Our findings indicate that higher temperature dramatically alters the longevity, climbing ability, and immunity of brat^chs mutants and their pcna-GFP controls in both males and females, suggesting a temperature-dependent host fitness that modifies infection outcomes. brat^chs mutants are temperature-sensitive for neurodegeneration and over-proliferation in adult brains, providing a unique opportunity for genetic analysis of brat function that was not feasible before. For instance, this mutation can be a useful tool for the suppression or enhancement of the adult over-proliferation and/or neurodegeneration phenotypes to identify other genes with which brat interacts to regulate differentiation and growth. This is particularly crucial for infections such as Zika that inhibit brain development, as it will provide a valuable platform to screen for therapeutic candidates that arrest or block the impact of such diseases on neural development. Understanding how vectors respond to environmental variation, including temperature, is especially relevant for establishing how vector-borne pathogens emerge and spread, and hence for defining the biological constraints on vector transmission and competence. In this study, we model the effects of temperature on ZIKV, which belongs to the widespread and important flavivirus family that currently lacks complete temperature-dependent models.
Our results show that ZIKV replication in brat^chs flies is optimized at 29°C, which contributes to significant advances in our knowledge of the physiological and molecular interactions between pathogens and mosquito vectors.29,33 Temperature variation may alter the ZIKV infection process by changing the Drosophila response to the infection, by modifying the efficiencies of virus-specific processes, or, more likely, both. Our study focused on fly responses and ZIKV pathology in the brain early in the infection process. However, disentangling the observed effects will require further analysis of the combinatorial effects of the cheesehead mutation and its characteristic phenotypes in the adult brain. Sampling other immunological tissues, and at later time points when high levels of ZIKV can be detected, will also contribute to our understanding of the physiological and molecular interactions between the virus and its host. Nonetheless, while further work is needed to determine the precise mechanisms at play, results from this study indicate that temperature shifts the balance and dynamics of the host environment, with direct and indirect consequences for the ZIKV infection process. Our findings also indicate that sex is a significant factor in the response to ZIKV infection and its outcome. Even though ZIKV replicates with similar trends in each experimental sex group and its corresponding controls at different temperatures, only female brat^chs flies succumbed to the infection at 29°C. Moreover, we detected higher ZIKV levels in whole bodies and heads of infected female brat^chs flies compared to their male counterparts. Infected female brat^chs flies also exhibited more severe motor dysfunction and elevated immune responses compared to brat^chs males, suggesting that sex differences in immune responses result in the differential susceptibility of females and males to ZIKV infection (Figure 11).
Such dimorphic survival and pathology could result from inherent costs associated with the induction of enhanced immune responses, whereby female mutants that mount a more potent immune response against ZIKV incur greater tissue damage, leading to higher mortality at 29°C. Similar immune studies investigating bacterial, viral, and fungal infections have also presented evidence of sexual dimorphism and sexual antagonism for resistance and tolerance, and of a trade-off between the two traits.57,58,59 However, the mechanisms underpinning these findings are largely unresolved due to a lack of information about the sex-specific genetic regulation of molecular immunity in Drosophila. While there is growing interest in studies exploring antiviral immunity and in reporting both sexes, most work in this field uses only one sex or does not stratify by sex.57 We recently reported sexually dimorphic responses to ZIKV infection, which is consistent with the evidence presented here.60 Therefore, sex is an essential factor that impacts immunity and must be considered in the interpretation of data arising from similar immunological studies to improve rigor and reproducibility. These sex differences can potentially be exploited to gain valuable insight into the mechanistic underpinnings of hormonal, genetic, and environmental effects on infectious diseases, as well as into the outcome of potential vaccinations for different individuals.61 This research contributes to advances in the characterization of ZIKV-induced pathology in Drosophila by investigating the molecular events leading to the activation of immune responses. Consistent with previous studies,11 we report that ZIKV is preferentially localized in the heads of female and male brat^chs flies, as well as in the respective pcna-GFP controls.
The immunostaining co-labeling for the ZIKV antigen, GFP (proliferating cells), Repo (glia), and Elav (neurons) detected some ZIKV/GFP co-localization in brat^chs brains and some ZIKV/Elav and ZIKV/Repo co-localization in both controls and mutants. Interestingly, however, the majority of the ZIKV staining did not co-localize with the examined markers. This indicates that some progenitor cells among the proliferating cells in brat^chs mutants are likely infected, and that both neurons and glia are targeted by the virus in the adult Drosophila brain.24 However, because Repo and Elav are transcription factors with nuclear localization in differentiated cells, we cannot exclude the possibility that ZIKV targets neurons and glia more widely in the adult brain. The antibodies we used label transcription factors in the nucleus without staining the cytoplasm, and further experiments are therefore warranted to fully define the exact cell types targeted by ZIKV in the adult Drosophila brain and in brat^chs mutants. For instance, one future experiment to consider is to generate fly lines that label each cell type with a cytoplasmic or membrane-targeted red fluorescent protein (RFP) and co-stain for ZIKV and anti-RFP. Furthermore, it is possible that by using only the pcna-GFP reporter we are not capturing the exact neural progenitor stage (e.g., neuroblast, intermediate neural progenitor, ganglion mother cell, and so forth) targeted by ZIKV. Refining the cell types targeted by ZIKV could also help map the behavioral changes resulting from ZIKV infection, such as impaired climbing, thus providing further insights into the underlying pathophysiological mechanisms. By developing an in vivo model for studying the molecular basis of innate immunity against ZIKV infection, we also show that the main mediators of the RNAi antiviral response, Dicer-2 and Ago-2, are upregulated in the context of ZIKV infection in the Drosophila brain.
How exactly these RNAi effectors regulate viral replication in the brain, and whether the differential roles we observed in the two sexes affect host-ZIKV interactions, remain largely unclear. In a recent report, Dicer-2 was implicated as instrumental in regulating ZIKV replication while Ago-2 was dispensable.62 This distinction in the level of surveillance between the two RNAi components is likely due to the involvement of Dicer-2 in other immune pathways such as Toll signaling and the expression of the antiviral gene Vago.63,64 Identification of putative ZIKV dsRNA targets recognized by Dicer-2 may provide more insight into its intricate function during ZIKV and other flavivirus infections in Drosophila. Consistent with our findings that ZIKV targets the brat^chs brain, we also show, for the first time, that the Drosophila apoptotic gene grim is associated with increased activation of the antiviral RNAi pathway in response to ZIKV infection in the adult brain. Notably, at 25°C grim expression in the brat^chs flies was higher than that of pcna-GFP controls, and vice versa at 29°C. This finding can be attributed to the hypomorphic nature of brat^chs, whose function progressively declines with increasing temperature, thereby potentially decreasing the number of apoptotic cells in the brain. Collectively, these results confirm the ability of ZIKV to replicate and induce cell death in the adult brat^chs brain, which could be relevant to human cancer and neurodegenerative diseases.

STAR★Methods

Key resources table

REAGENT or RESOURCE | SOURCE | IDENTIFIER

Antibodies
rabbit anti-Flavivirus (clone D1-4G2-4-15 (4G2)) | Enzo Life Sciences | ABS491-0200
chicken anti-GFP | Invitrogen | A10262
mouse anti-Repo | DSHB | 8D12 (contributed by C. Goodman, University of California-Berkeley)
rat anti-Elav | DSHB | 7E8A10 (contributed by G.E. Rubin, Janelia Farm)
goat anti-rabbit AlexaFluor 568 | Invitrogen | A11011
goat anti-chicken AlexaFluor 488 | Invitrogen | A11039
goat anti-mouse AlexaFluor 647 | Invitrogen | A21242
goat anti-rat AlexaFluor 633 | Invitrogen | A21094

Bacterial and virus strains
ZIKV strain MR766 | Harsh et al., 2018 (ref. 62) | N/A

Chemicals, peptides, and recombinant proteins
Phosphate buffered saline (PBS) | Quality Biological | 119-069-131
Phosphate buffered saline (PBS) | VWR | 97062-948
TRIzol Reagent | Invitrogen | Cat# 15596026
Triton X-100 (SURFACT-AMPS X-100) | Thermo Scientific | 28314
ProLong Diamond Antifade Mountant with DAPI | Molecular Probes | P36962
16% Paraformaldehyde Aqueous Solution, EM Grade | Electron Microscopy Sciences | 15710-S
Normal Goat Serum | MP Biomedicals | ICN19135680

Experimental models: organisms/strains
Fruit fly: brat^chs/CyO; pcna-GFP/TM3,Ser | Loewen et al. (ref. 24) | N/A
Fruit fly: pcna-GFP | Loewen et al. (ref. 24) | N/A

Oligonucleotides
Primer ZikaNS5 Forward: CCTTGGATTCTTGAACGAGGA | Harsh et al. (ref. 62) | N/A
Primer ZikaNS5 Reverse: AGAGCTTCATTCTCCAGATCAA | Harsh et al. (ref. 62) | N/A
Primer RpL32 Forward: GATGACCATCCGCCCAGCA | Harsh et al. (ref. 62) | N/A
Primer RpL32 Reverse: CGGACCGACAGCTGCTTGGC | Harsh et al. (ref. 62) | N/A
Primer Dicer-2 Forward: GTATGGCGATAGTGTGACTGCGAC | Harsh et al. (ref. 62) | N/A
Primer Dicer-2 Reverse: GCAGCTTGTTCCGCAGCAATATAGC | Harsh et al. (ref. 62) | N/A
Primer Argonaute-2 Forward: CCGGAAGTGACTGTGACAGATCG | Harsh et al. (ref. 62) | N/A
Primer Argonaute-2 Reverse: CCTCCACGCACTGCATTGCTCG | Harsh et al. (ref. 62) | N/A
Primer Reaper Forward: CATACCCGATCAGGCGACTC | This study | N/A
Primer Reaper Reverse: ACATGAAGTGTACTGGCGCA | This study | N/A
Primer Hid Forward: ACTGCAATTTCAATGTCTTCGCA | This study | N/A
Primer Hid Reverse: AGATGTGCTTGTTTTTGTGGACT | This study | N/A
Primer Grim Forward: CAATATTTCCGTGCCGCTGG | This study | N/A
Primer Grim Reverse: ATCCCAGCATCCAAACTCCG | This study | N/A

Software and algorithms
PRISM | GraphPad Software | Version 9
Fiji | Schindelin et al. (ref. 65) | Version 2.3.0/1.53q

Other
Bloomington Drosophila food | LabExpress | Cat# 7001-NV
Baker's yeast | Carolina Biological Supply | Cat# 173235
Nanoject III | Drummond Scientific | Cat# 3-000-207
Nanoject II | Drummond Scientific | Cat# 3-000-204
Nutri-Fly Bloomington Formulation Drosophila food | Genesee Scientific | Cat# 66-113
Nikon Eclipse Ti2 confocal microscope | Nikon | N/A

Resource availability

Lead contact
Further information and requests for resources and reagents should be directed to Dr. Ioannis Eleftherianos (ioannise@gwu.edu).

Materials availability
The brat^chs/CyO; pcna-GFP/TM3,Ser and pcna-GFP strains are available to other laboratories upon request to the lead contact.

Experimental model and subject details

D. melanogaster lines
All fly stocks used in this study are Wolbachia-free and listed in the key resources table. Flies were reared on Bloomington Drosophila Stock Center cornmeal food (LabExpress), supplemented with yeast (Carolina Biological Supply), and maintained at 25°C with a 12:12-h light:dark photoperiodic cycle. Flies used in the immunostaining experiments were reared on Nutri-Fly Bloomington Formulation food (Genesee Scientific) and maintained at 25°C with a 12:12-h light:dark photoperiodic cycle. Homozygous female and male brat^chs flies (5–7 days old) carrying both the brat^chs mutation and the reporter gene (pcna-GFP) were used for experiments. The pcna-GFP stock was used as a genetic background control. Both sexes were selected from the same generation and randomly assigned to experimental groups.
Zika virus stocks
Stocks of ZIKV strain MR766 were prepared as previously described.62

Method details

Fly lifespan assessment
For lifespan assessment, newly eclosed flies were collected under light carbon dioxide (CO2) anesthesia and housed at a density of 15–20 females and 15–20 males per vial. At least 100 males and 100 females were tested for each fly line. Flies were kept on Bloomington Drosophila Stock Center cornmeal food (LabExpress), supplemented with yeast (Carolina Biological Supply), and maintained at 25°C or 29°C with a 12:12-h light:dark cycle. They were transferred to fresh vials every third day for the duration of the experiment, and mortality was recorded daily.

Fly infection method
Injections were performed by anesthetizing flies of the stated genotypes with CO2. For each experiment, female and male flies were injected with ZIKV suspensions in PBS (pH 7.5) using a nanoinjector (Nanoject II for immunostaining experiments and Nanoject III for all other experiments; Drummond Scientific). Live ZIKV suspension (100 nL; 11,000 PFU/fly) was injected into the thorax of each fly, and control flies were injected with the same volume of PBS. Following infection, flies were maintained at 25°C or 29°C and transferred to fresh vials every third day for the duration of the experiment. Flies were collected at 4 days post injection and directly processed for RNA analysis. Fly deaths occurring within one day of injection were attributed to injury and were not included in the results.

Fly survival estimation
For each fly strain, three groups of 20 male and female flies were injected with ZIKV, and control groups were injected with PBS. Following injection, flies were maintained at a constant temperature of 25°C or 29°C with a 12-h light/dark cycle, and mortality was recorded daily.
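Daily mortality records of this kind are typically summarized as Kaplan-Meier survival curves. The minimal sketch below shows the estimator on hypothetical counts; it is a generic illustration, not the authors' analysis code (survival statistics in studies like this are commonly computed in GraphPad Prism, which is listed in the key resources table).

```python
# Kaplan-Meier survival estimate from daily death counts (a sketch).
# deaths_per_day maps day -> number of deaths observed that day.

def kaplan_meier(deaths_per_day, n_start):
    """Return a list of (day, S(t)) points for the survival function."""
    at_risk, surv, curve = n_start, 1.0, []
    for day in sorted(deaths_per_day):
        d = deaths_per_day[day]
        surv *= (at_risk - d) / at_risk   # multiply conditional survival
        at_risk -= d
        curve.append((day, surv))
    return curve

# Hypothetical vial of 20 flies with deaths on days 3, 5, 8, and 12
curve = kaplan_meier({3: 2, 5: 4, 8: 6, 12: 8}, n_start=20)
print(curve[-1])  # (12, 0.0) once all flies have died
```

Comparing such curves between ZIKV- and PBS-injected groups (e.g., with a log-rank test) is the standard way to assess infection-dependent mortality like that seen in females at 29°C.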
RNA isolation and quantitative real-time PCR. For each experiment, total RNA was extracted from 10 male or female flies using TRIzol (Invitrogen) according to the manufacturer's protocol. Total RNA (500 ng–1 μg) was used to synthesize cDNA using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems). Quantitative RT-PCR (qRT-PCR) experiments were performed with two technical replicates and gene-specific primers (key resources table) using a CFX96 Real-Time PCR detection system (Bio-Rad Laboratories). Cycle conditions were as follows: 95°C for 2 min; 40 repetitions of 95°C for 15 s followed by 61°C for 30 s; and then one round of 95°C for 15 s, 65°C for 5 s, and finally 95°C for 5 s. Immunostaining and antibodies. Flies of each genotype and sex were collected at 0–2 days after eclosion and aged to 5–7 days old. ZIKV infection was then administered via the injection procedure described previously. After injection, flies were maintained at 29°C. Brains were dissected in 1X PBS from surviving flies at 4 days post injection and transferred into fixative solution. Brains were fixed for 30 min in 4% paraformaldehyde (PFA) in 1X PBS and placed on a rotating shaker. The fixative solution was removed, and the brains were then washed with PBS containing 0.1–0.3% Triton X-100 (PBS-T). This included three wash steps of 30 min at room temperature on a shaker, removing the PBS-T at each step and replacing it with fresh PBS-T. After the final wash, brains were placed in a blocking solution of PBS-T and 4% normal goat serum for 1 h at room temperature. Once the blocking solution was removed, primary antibodies were added and incubated overnight at 4°C. The primary antibody dilutions used were as follows: rabbit anti-Flavivirus (4G2) 1:100, chicken anti-GFP 1:500, rat anti-Elav 1:100, mouse anti-Repo 1:50. After removing the primary antibodies, three additional wash steps were performed with PBS-T on a rotating shaker for 30 min.
Secondary antibodies were then added to the brains and incubated at room temperature for 3 h on a rotating shaker. The secondary antibody dilutions used were as follows: goat anti-rabbit AlexaFluor 568 1:1000, goat anti-chicken AlexaFluor 488 1:1000, goat anti-rat AlexaFluor 633 1:1000, goat anti-mouse AlexaFluor 647 1:1000. Next, the secondary antibodies were removed, and brains were washed with PBS-T for 15 min three times on a rotating shaker. Finally, brains were transferred into a drop of ProLong Diamond Antifade Mountant with DAPI on a microscope slide. Images were acquired with a Nikon Eclipse Ti2 laser scanning confocal microscope and processed using Fiji ImageJ2 (Version 2.3.0/1.53q). Image acquisition used the same camera settings between genotypes and treatments. Immunofluorescence images represent stacks of images generated using the Standard Deviation z-stack function in Fiji ImageJ2. The 'Brightness and contrast' function in Fiji ImageJ2 was used to improve visualization; however, all measurements and quantification were done on unmanipulated files. Quantification of flavivirus antigen immunofluorescence was done using the 'Analyze particles' function in Fiji ImageJ2. Briefly, a 'Maximum projection' function was applied to the z-stacks for all experimental samples in grayscale mode. For each resulting image, a region of interest (ROI) was selected based on DAPI staining. The image threshold for all samples was adjusted in the same way, and the 'Analyze particles' function was used to determine the percentage of immunostained area relative to the total imaged brain area within the selected ROI. Fluorescence intensity plots for all immunostainings (4G2, Repo or Elav, GFP, and DAPI) were obtained as previously described using a single image chosen from the corresponding z-stacks. Measurements were done using the same ROI across all four fluorescence channels and across experimental groups.
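The arithmetic behind this quantification (maximum projection, common threshold, percent stained area within an ROI) can be sketched numerically. A minimal pure-Python version on a toy z-stack; the pixel values, mask, and threshold are illustrative, not real imaging data:

```python
def max_project(stack):
    """Maximum-intensity projection of a z-stack (list of 2D slices)."""
    return [[max(vals) for vals in zip(*rows)] for rows in zip(*stack)]

def percent_stained_area(image, roi_mask, threshold):
    """Percent of the ROI whose pixel intensity exceeds `threshold`.

    Mirrors the workflow above: pixels inside the DAPI-derived ROI mask
    are counted, and those above the common threshold are reported as
    stained area relative to total ROI area.
    """
    stained = 0
    total = 0
    for img_row, roi_row in zip(image, roi_mask):
        for value, inside in zip(img_row, roi_row):
            if inside:
                total += 1
                if value > threshold:
                    stained += 1
    if total == 0:
        return 0.0
    return 100.0 * stained / total

# Toy 2-slice stack over a 2x3 field, full-field ROI, threshold 10.
stack = [
    [[0, 5, 20], [0, 0, 0]],
    [[0, 30, 1], [0, 12, 0]],
]
proj = max_project(stack)   # -> [[0, 30, 20], [0, 12, 0]]
roi = [[True] * 3, [True] * 3]
pct = percent_stained_area(proj, roi, threshold=10)  # 3 of 6 pixels -> 50.0
```

In practice Fiji performs these steps on full-resolution images; the sketch only shows the area-fraction logic.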
Climbing assays. Climbing assays were carried out as previously described.45,62 Groups of 10 adult female and male flies were transferred into empty vials and incubated for 1 h at room temperature for acclimatization. The flies were gently tapped down to the bottom of the vials, and the number of flies reaching an 8 cm mark was counted after 18 s of climbing.66 Quantification and statistical analysis. All analyses were conducted with data from three independent experiments. For survival curves, pairwise comparisons of each experimental group with its control were carried out using a log-rank (Mantel-Cox) test. For climbing experiments, a Student's t test was used to measure statistical significance. Data from quantitative real-time PCR were analyzed with gene-specific primers in duplicate, with at least three independent experiments for both test and control treatments. Fold changes were calculated with the 2^-ΔΔCT method,67,68 using Ribosomal protein L32 (RpL32), also known as rp49, as a housekeeping gene. All error bars represent the standard error of the mean. GraphPad Prism software was used for statistical analysis. Acknowledgments. We thank members of the Eleftherianos lab for maintaining and amplifying the laboratory fly lines and members of the Department of Biological Sciences at George Washington University for providing feedback on the project. Schematic figures were created using BioRender. The work was funded through a grant to I.E. from the Columbian College of Arts and Sciences at George Washington University. G.T.E. was funded through a Wilbur V. Harlan summer research fellowship from the George Washington University Department of Biological Sciences. Author contributions. Conceptualization, G.T.E., S.C., and I.E.; methodology, G.T.E., D.B., S.C., and I.E.; investigation, G.T.E., A.K., D.B., and S.C.; formal analysis, G.T.E., D.B., S.C., and I.E.; writing, G.T.E., S.C., and I.E.
Declaration of interests The authors declare no competing interests.
|
[
"MUSSO",
"SHARMA",
"ROTHAN",
"BIDOMEDINA",
"COFFEY",
"LI",
"ADAMSWALDORF",
"SASI",
"ZHANG",
"LEMAITRE",
"LIU",
"GOTO",
"MARTIN",
"TAFESHEDWARDS",
"TRAMMELL",
"HILLYER",
"BUCHON",
"HOFFMANN",
"ELREFAEY",
"MUSSABEKOVA",
"LIU",
"DELORMEAXFORD",
"LINK",
"LOEWEN",
"BELLO",
"BETSCHINGER",
"LEE",
"ARAMA",
"FERREIRA",
"WINOKUR",
"CHRISTOFFERSON",
"VILLENA",
"TESLA",
"MORDECAI",
"MURDOCK",
"MURDOCK",
"BALM",
"ELSHAHAWI",
"THACKER",
"JAHN",
"AGGARWAL",
"CARPENTER",
"GREENE",
"INAGAKI",
"MADABATTULA",
"SALEH",
"SWEVERS",
"HEIGWER",
"BLAZQUEZ",
"ARAUJO",
"ACOSTAAMPUDIA",
"CHARNIGA",
"MEHRBOD",
"LEE",
"FERRARIS",
"LIU",
"BELMONTE",
"VINCENT",
"PALMER",
"KLEIN",
"TAFESHEDWARDS",
"HARSH",
"WANG",
"DEDDOUCHE",
"SCHINDELIN",
"PARK",
"LIVAK",
"SCHMITTGEN"
] |
deafcc5a13fe4968a565b910d6d87da1_Calidad de vida y grado de satisfacción de pacientes postoperados de funduplicatura de Nissen laparo_10.1016_j.rgmx.2013.11.003.xml
|
Calidad de vida y grado de satisfacción de pacientes postoperados de funduplicatura de Nissen laparoscópica
|
[
"Prieto-Díaz-Chávez, E.",
"Medina-Chávez, J.L.",
"Brizuela-Araujo, C.A.",
"González-Jiménez, M.A.",
"Mellín-Landa, T.E.",
"Gómez-García, T.S.",
"Gutiérrez-Zamora, J.",
"Trujillo-Hernández, B.",
"Millan-Guerrero, R.",
"Vásquez, C."
] |
Antecedentes
La cirugía antirreflujo tiene actualmente un lugar establecido en el manejo de la enfermedad por reflujo gastroesofágico. Algunas series han revelado buenos resultados a corto plazo, pero los resultados a largo plazo permanecen aún poco conocidos. Recientemente, los estudios se han centrado en evaluar la sintomatología residual y su impacto en la calidad de vida.
Objetivo
Determinar la calidad de vida en el postoperatorio y la satisfacción en pacientes intervenidos de funduplicatura de Nissen laparoscópica.
Pacientes y métodos
Se estudió a 100 pacientes (59 mujeres y 41 hombres) postoperados de funduplicatura de Nissen laparoscópica. Las variables fueron grado de satisfacción, calidad de vida (GIQLI), síntomas residuales y escala Visick.
Resultados
No se encontró variación en el sexo, siendo 49 hombres y 51 mujeres; el promedio de edad fue de 49 años. La valoración del grado de satisfacción fue: satisfactoria en 81 pacientes, moderada en 3 y mala en 2 pacientes. Más del 90% se sometería de nuevo o recomendaría la cirugía. En cuanto a la clasificación de Carlsson, se mostró mejoría al final del estudio (p<0.05). De acuerdo con el cuestionario GIQLI, se obtuvo una mediana de 100.61 puntos±21.624. Distensión abdominal, regurgitación y saciedad temprana fueron los síntomas residuales más frecuentes. La repercusión en el estilo de vida mediante escala de Visick fue excelente.
Conclusiones
El grado de satisfacción y la calidad de vida obtenidos son comparables con estándares reportados y los síntomas residuales son fácilmente controlables posterior a la cirugía antirreflujo.
Background
Today, antireflux surgery has an established position in the management of gastroesophageal reflux disease. Some case series have shown good short-term results, but there is still little information regarding long-term results. Studies have recently focused on evaluating residual symptomatology and its impact on quality of life.
Objectives
To determine the postoperative quality of life and degree of satisfaction in patients that underwent laparoscopic Nissen fundoplication.
Patients and methods
A total of 100 patients (59 women and 41 men) were studied after having undergone laparoscopic Nissen fundoplication. The variables analyzed were level of satisfaction, gastrointestinal quality of life index (GIQLI), residual symptoms, and the Visick scale.
Results
No variation was found in relation to sex; 49 men and 51 women participated in the study. The mean age was 49 years. The degree of satisfaction encountered was: satisfactory in 81 patients, moderate in 3, and bad in 2 patients. More than 90% of the patients would undergo the surgery again or recommend it. The Carlsson score showed improvement at the end of the study (p<0.05). In relation to the GIQLI, a median of 100.61 points±21.624 was obtained. Abdominal bloating, regurgitation, and early satiety were the most frequent residual symptoms. The effect on lifestyle measured by the Visick scale was excellent.
Conclusions
The level of satisfaction and quality of life obtained were comparable with reported standards; and the residual symptoms after antireflux surgery were easily controlled.
|
Introduction. Gastroesophageal reflux disease (GERD) was recognized as an important clinical problem by Winklestein in 1935 and identified as a cause of esophagitis by Allison in 1946.1,2 Surgery now has an established place in the management of GERD.3 The growing popularity of antireflux surgery in recent years, together with the introduction of the laparoscopic technique and its advantages (a less traumatic, less invasive approach and a rapid return to usual activities), has allowed the expansion of laparoscopic fundoplication. As a result, this procedure has become established as the gold standard in the surgical management of GERD.4,5 Recently, studies have focused on evaluating postoperative outcomes, especially the presence of residual symptoms after surgery and their short-term impact on patients' quality of life. Quality-of-life indicators and the degree of patient satisfaction are now considered an important way of assessing the outcome of antireflux surgery, since for a long time the persistence of gastroesophageal symptoms in a patient after fundoplication was considered synonymous with surgical failure.6,7 The aim of this study was therefore to carry out a long-term evaluation of the results of Nissen fundoplication using 4 questionnaires: the Gastrointestinal Quality of Life Index (GIQLI), the dysphagia score, the visual analog scale for dysphagia, and the Visick scale. Patients and methods. We evaluated quality of life and residual symptoms in a series of 100 patients operated on from February 2005 to December 2010. The specific objectives analyzed were the following: 1. Are the results obtained satisfactory for the patient who underwent laparoscopic Nissen fundoplication? 2. What is the quality of life of the patient who underwent laparoscopic Nissen fundoplication, and how does it relate to the degree of satisfaction? 3. Is the persistence of symptoms after antireflux surgery frequent? 4. Is medication indispensable after surgery? Methods. One hundred patients were scheduled for laparoscopic Nissen fundoplication with a loose-wrap technique (floppy Nissen). All patients were operated on by the same surgical team; the surgical technique was standardized before the study, and internationally accepted guidelines were followed. Patient selection for surgical management was based on the guidelines established by the Mexican Consensus for the Study of GERD.8 The degree of satisfaction, the quality of life of the postoperative patients, and surgical morbidity were assessed using standardized questionnaires validated for this purpose. The Carlsson questionnaire was applied to all patients to measure reflux intensity;9 postoperatively, quality of life was studied with the GIQLI questionnaire (an adequate, valid, and useful instrument for assessing quality of life in patients with reflux disease), since it includes specific questions on digestive symptoms and generic questions on physical, emotional, and social capacity. The questionnaire consists of 36 items with a response scale from 0 (worst result) to 4 (best result), with an overall score above 86 accepted as a satisfactory result; the presence of residual symptoms was also recorded. Dysphagia was evaluated with the dysphagia score and the visual analog scale for dysphagia,10 in addition to the Visick questionnaire.11 The satisfaction and quality-of-life survey was administered to each patient at a minimum of 5 years after surgery by an investigator independent of the surgical management, with questions asked verbally.
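The GIQLI scoring rule described above (36 items, each scored 0 to 4, with overall totals above 86 considered satisfactory) can be sketched as follows; the patient responses shown are hypothetical:

```python
GIQLI_ITEMS = 36
GIQLI_SATISFACTORY_CUTOFF = 86  # overall score above this = satisfactory

def giqli_score(responses):
    """Sum a patient's GIQLI responses and classify the result.

    `responses` must hold one integer per item, each on the 0-4 scale
    described above (0 = worst, 4 = best; maximum total 144).
    Returns (total, satisfactory?).
    """
    if len(responses) != GIQLI_ITEMS:
        raise ValueError("expected %d item responses" % GIQLI_ITEMS)
    if any(not 0 <= r <= 4 for r in responses):
        raise ValueError("each response must be between 0 and 4")
    total = sum(responses)
    return total, total > GIQLI_SATISFACTORY_CUTOFF

# Hypothetical patient answering 3 on every item: 108 points, satisfactory.
total, ok = giqli_score([3] * 36)
```

This only reproduces the cutoff logic; the published instrument also breaks the total down into its five domains, which is not modeled here.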
The degree of satisfaction was evaluated nominally with the following questions: whether the patient would accept the operation again, and whether he or she would recommend it to a friend or relative. The GIQLI questionnaire was chosen because it is validated in Spanish; it consists of 36 questions grouped into 5 domains and includes a section specific to digestive diseases.12 The dysphagia score and the visual analog scale for dysphagia were measured on validated scoring scales, while the Visick scale was measured ordinally. Statistical analysis. The analysis was performed on an intention-to-treat basis. Descriptive statistics were used: means, standard deviations, medians, and percentages. Proportions were compared with the chi-square test. Means and medians were compared with Student's t test or the Mann-Whitney U test (for equal or unequal variances, respectively). Student's t test was used for the questionnaire analysis, and analysis of variance was performed for the group comparisons, both for the overall results and for each of the questionnaire's domains. A 95% confidence interval (CI) was used, and p < 0.05 was considered statistically significant. The study was approved by the Research Committee of Hospital General de Zona N.° 1 of the IMSS in Colima. Results. From February 2005 to December 2010, 100 patients (59 women and 41 men) who underwent surgery for GERD were studied. Fourteen patients were excluded from the final analysis for the following reasons: 10 had changed address at the time of the interview, and 4 declined to answer the questionnaire. Mean follow-up of the remaining group was 5 ± 0.5 years. The overall 5-year follow-up rate was 82% of the 100 patients. The mean postoperative hospital stay was 18 ± 8.7 h. There were no conversions and no mortality. The sex distribution was 49 men and 51 women, with an age range of 18 to 87 years and a mean body mass index of 28.33 ± 4.49. The baseline characteristics of the patients included in the analysis are shown in Table 1. Satisfaction assessment. Overall, the degree of satisfaction of the surgical patients was classified as fully satisfied in 75 patients and good in 3 patients (78% with good to excellent results). When asked whether they would undergo surgical treatment again or recommend it to a friend if necessary, the answer was affirmative in 90% and 96% of cases, respectively. Quality of life. The GIQLI quality-of-life assessment applied to the 85 patients surveyed had previously been validated in 35 subjects. The overall quality-of-life analysis shows an acceptable level after surgery (score of 102 ± 16.8). This level is most evident in the domains corresponding to digestive, physical, emotional, and social activities. Table 2 shows the GIQLI scores, both total and broken down by functional domain. Figure 1 shows the correlation of the GIQLI score with sex, age, body mass index, and symptoms; the scores are maintained over follow-up, and there are no statistically significant differences between the slopes of the curves (p = 0.50). Symptom assessment. Mean Carlsson questionnaire scores were 7.8 ± 3.4 and 3.5 ± 3.1 for the preoperative and postoperative groups, respectively; these differences were statistically significant (p < 0.05). The residual symptoms present postoperatively included abdominal bloating in 24 patients (27.5%), regurgitation in 16 patients (18.3%), and early satiety in 16 patients (18.3%). Other less frequent symptoms were odynophagia in 7 patients (8%), belching in 4 patients (4.5%), transient dysphagia in 2 patients (2.29%), nausea and vomiting in 2 patients (2.29%), and cough in one patient (1.1%). Residual symptoms were absent in 15 patients (17.24%). Figure 2 summarizes the dysphagia score (range 0 to 24 points) and the visual analog scale for dysphagia (range 0 to 10 points) in patients who underwent laparoscopic Nissen fundoplication, at 5 years of follow-up. Medication was needed in 47% of patients, while 53 patients did not resort to medication for symptoms. In 89.3%, the results were considered very good or good, with the same or a lower amount of medication taken, while in only 10.3% was the response considered poor, with a greater amount of medication taken. Overall, the effect on lifestyle after surgery, measured with the Visick scale, was considered excellent (Visick 1) in 19 patients (21.8%) and good (Visick 2) in 50 patients (57.4%), while 14 patients had uncontrolled moderate symptoms that did not interfere with their socioeconomic life (Visick 3), and 2 patients had moderate symptoms that did interfere with their socioeconomic life (Visick 4). The correlations obtained between the preoperative and postoperative Carlsson questionnaires, the GIQLI, and the Visick scale are shown in Table 3. Discussion. GERD is responsible for 75% of the manifestations occurring in the esophagus, causing symptoms or complications both at this level and in the airway and in nutritional status. Several associated factors have been reported: the antireflux barrier, aggressive factors (acid, pepsin, bile, pancreatic enzymes, etc.), and defensive factors (esophageal clearance).
Since the introduction of laparoscopy for the treatment of gastroesophageal reflux, its use has spread rapidly, and it has been accepted as the approach of choice for treating gastroesophageal reflux.13 Currently, the outcome of an antireflux procedure is judged not only by measures of technical success but also, increasingly, by the patient's perspective. Global outcome measures that balance the technical result and the patient's perspective are undoubtedly what will define the concept of surgical failure. Our results show that, at least in the 5 years after the surgical procedure, 78% of patients who underwent laparoscopic Nissen fundoplication are highly satisfied with their condition and would even accept the procedure again or recommend it to a friend if necessary. This is relevant given the similar conclusions reported in case series by Dallemagne et al.,14 Kelly et al.,15 and Bloomston et al.,16 as well as by Díaz-de Liaño,17 who previously reported a mean satisfaction score of 8.1 out of a maximum of 10, with 85% to 95% of patients satisfied. In this study, our patients had a satisfaction score of 8.44 and 78% good to excellent results, respectively. The objectives of antireflux surgery have so far been to attenuate reflux symptoms with minimal risk and without adding long-term side effects. Population health status and subsequent disability have become a cornerstone of clinical research. Today, great importance is attached to the health perspective expressed by the patient in his or her biopsychosocial environment, through specific or generic surveys. Like Poves-Prim et al.,18 we believe that the GIQLI questionnaire, developed in Germany and validated in France for digestive conditions and later validated in Spanish for reflux,10 is able to provide general, specific, and biopsychosocial information on the patient's quality of life. Our study involved 84 operated patients with a mean follow-up of 5 years. It has thus been possible to demonstrate unquestionably a good quality of life after surgery (100.61), in contrast with the values in normal subjects, as shown by the findings of Araujo Teixeira et al.19 and Dallemagne et al.14 At 6 months after surgery, the patients evaluated showed results similar to those reported in normal subjects and an improvement in the quality of life related to both physical and emotional activities, as previously confirmed by Araujo Teixeira,19 and these results were maintained for the rest of the observation period, except in digestive activities, as has already been noted by numerous authors14,20 and attributed to specific residual symptoms such as abdominal bloating, regurgitation, and early satiety. Although the success of antireflux surgery is judged by symptom control, the presence and intensity of potential side effects, manifested as residual symptoms, are often interpreted as failure. However, the concepts of improvement are of vital importance in the current healthcare environment and may be the most important parameters for evaluating the efficacy of surgical treatment. In our study, as in other reports,6,21 these manifestations were evaluated up to 6 months after surgery and, despite the initial discomfort, we found no effect on the quality-of-life data. No patients with the presence or persistence of severe dysphagia 6 months after surgery were documented. At present, there is no explanation for the higher percentage of abdominal bloating postoperatively. Some patients continue taking medication even though all reflux symptoms resolved with surgery. The gradual increase in the use of antireflux medication over time could be due in part to a low but ongoing risk of reflux recurrence. However, the reflux recurrence rate is probably lower than medication consumption suggests, since only 24% of patients who continue taking antireflux medication after fundoplication actually have reflux when subjected to 24-h pH monitoring.15,16 In our series, 47% of patients continued taking antireflux medication. This is considerably lower than the 62% reported by Fenton.22 Other reports have used the modified Visick classification, with results similar to those of our study,20,23 in which grades I and II are usually considered a satisfactory outcome and correlate very well with heartburn symptoms. The evaluation of quality of life, although sometimes subtle, has always played a central role in the therapeutic goals of medicine. Most surgical procedures focus on correcting the physiological or anatomical disorders that lead to a disease process. From the patients' point of view, however, the results of these procedures have little impact in themselves; the recovery of quality of life becomes the cornerstone of their satisfaction, which is why we decided to use different assessment instruments, both generic and specific. We therefore conclude, in general and in agreement with previous publications,17,19,20,24 that the evaluations performed with the instruments used demonstrate an important improvement in quality of life. Funding. No sponsorship of any kind was received to carry out this study/article. Conflict of interest. The authors declare that they have no conflict of interest.
|
[
"WINKLESTEIN",
"ALLISON",
"WATSON",
"DALLEMAGNE",
"RATTNER",
"SPECHELER",
"GALVANI",
"USCANGADOMINGUEZ",
"DENT",
"QUINTANA",
"WATSON",
"WATSON",
"FELIU",
"DALLEMAGNE",
"KELLY",
"BLOOMSTON",
"DIAZDELIANO",
"POVESPRIM",
"ARAUJOTEIXEIRA",
"VIDAL",
"DESAI",
"FENTON",
"WATSON",
"KAMOLZ"
] |
ed9d0d0780c34d0682f3547788048189_Impact of intersection type and a vehicular fleets hybridization level on energy consumption and emi_10.1016_j.jtte.2016.05.003.xml
|
Impact of intersection type and a vehicular fleet's hybridization level on energy consumption and emissions
|
[
"Boubaker, Samia",
"Rehimi, Férid",
"Kalboussi, Adel"
] |
A vehicle's energy consumption and emissions are two major constraints on sustainable development. Both have risen in recent decades with the exponential growth of world traffic demand. Reducing road traffic-generated energy consumption and emissions has thus become an unprecedented challenge worth examining. This paper investigates the energy consumption and environmental problems present at a roundabout and a signalized intersection in order to analyze the impact of the fleet's hybridization level and of intersection type on vehicle consumption and pollution. Instantaneous fuel consumption and emission models coupled with simulation of urban mobility (SUMO) are used in this study. The authors started by modeling energy consumption. Then, an emission model, EMIT (emissions from traffic), was implemented to quantify vehicle emissions of CO2, CO, and NOx. These models help investigate the influence of intersection type on energy consumption and environmental conditions. The authors implemented a signalized intersection and a roundabout using SUMO. The input data were collected from the roundabout of Sousse (Tunisia) using video data collection. Since there is a lack of econometric models that emulate hybridized stream behavior near intersections, two energy consumption models for the roundabout and crossroad are developed using traffic flow and hybridization level as the input variables. Compared to a crossroad, a roundabout can achieve greater environmental improvements and substantial reductions in energy consumption and road traffic emissions.
|
1 Introduction. Road traffic faces several problems, such as air pollution and energy consumption, which are major constraints on sustainable mobility. With increasing concern over urban air pollution from motor vehicles, it is imperative to take vehicle energy consumption and emissions into consideration. One of the focal questions in transportation science is the evaluation of the environmental and energetic impacts of vehicular traffic ( Chen and Borken-Kleefeld, 2014; Sekhar et al., 2013 ). Emission rates and consumption depend on road traffic characteristics, vehicle type, and road intersection type ( Pandian et al., 2009 ). In fact, the intersection type can play a substantial role in reducing vehicle emissions. Research shows that emissions are generated in greater quantities at intersections with traffic signals than at roundabouts. Therefore, replacing a signalized intersection with a roundabout decreases fuel consumption and emissions ( Ahn et al., 2009; Mandavilli et al., 2008 ). This paper aims to study the energy and emission problems of road traffic at intersections using computer microsimulation modeling tools ( Coelho et al., 2006; Zamboni et al., 2015 ). Since many consumption models depend on microscopic variables, such as velocity and acceleration, one must start by modeling road traffic for simulation. The kinematic variables of traffic flow are obtained with the simulation of urban mobility (SUMO) tool ( Krajzewicz et al., 2002, 2012 ). Thus, the authors have implemented an instantaneous energy consumption model ( Demir et al., 2011 ) and an emission model (EMIT) ( Cappiello et al., 2002 ). This work contributes the integration of a microscopic traffic simulation tool with an instantaneous energy consumption and emission model. Secondly, using data collected at the roundabout of Sousse (Tunisia), the authors have studied how intersection type and traffic state influence energy consumption at a roundabout and a crossroad.
Finally, this paper introduces a statistical model for a roundabout and a crossroad that enables the authors to estimate energy consumption while taking into account the hybridization level and traffic demand. Since increasing traffic congestion causes complications at intersections, the authors have compared fuel consumption and vehicle emissions at a roundabout and a crossroad for both congested and uncongested cases. Moreover, they have studied the influence of traffic flow on the two intersections and implemented two energy consumption models, for the roundabout and the crossroad, that combine traffic flow and hybridization level. The hybridization level reflects the percentage of hybrid electric vehicles (HEVs) in the total fleet. This work first analyzes microscopic energy consumption and emission traffic models. Secondly, it describes the geometry and vehicle dynamics of the implemented crossroad and roundabout. Thirdly, results are presented to illustrate the influence of intersection type on fuel consumption and emissions in both congested and uncongested cases. Also presented are details of the development of the energy consumption models, which take into consideration traffic flow and hybridization level, for the roundabout and crossroad. Finally, this study's main findings and potential for future research are summarized. 2 Related work. Many studies have investigated energy consumption and the environmental effects present at signalized intersections and roundabouts, but very few researchers have used instantaneous traffic simulation models in conjunction with microscopic energy and emission models. The main contribution of this study is the quantification of energy consumption and emissions using instantaneous models coupled with a microscopic traffic simulator at both a roundabout and a crossroad intersection.
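The statistical contribution mentioned in the introduction (energy consumption estimated from traffic demand and hybridization level) is, in outline, an ordinary least-squares fit. A minimal sketch on synthetic data, assuming a linear form E = b0 + b1*q + b2*h; the data and coefficients below are illustrative, not the paper's fitted values:

```python
def fit_linear(q, h, e):
    """Least-squares fit of e ~ b0 + b1*q + b2*h via the normal equations.

    q: traffic flow, h: hybridization level (fraction of HEVs),
    e: observed energy consumption. Returns (b0, b1, b2).
    """
    n = len(q)
    # Build X^T X and X^T e for the design matrix with columns [1, q, h].
    cols = [[1.0] * n, list(q), list(h)]
    a = [[sum(x * y for x, y in zip(ci, cj)) for cj in cols] for ci in cols]
    b = [sum(x * y for x, y in zip(ci, e)) for ci in cols]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = a[r][i] / a[i][i]
            for c in range(i, 3):
                a[r][c] -= f * a[i][c]
            b[r] -= f * b[i]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        beta[i] = (b[i] - sum(a[i][c] * beta[c] for c in (1, 2) if c > i)) / a[i][i]
    return tuple(beta)

# Synthetic illustration: data generated from E = 2 + 0.5*q - 3*h.
q = [100, 200, 300, 400]
h = [0.0, 0.2, 0.1, 0.5]
e = [2 + 0.5 * qi - 3 * hi for qi, hi in zip(q, h)]
b0, b1, b2 = fit_linear(q, h, e)
```

The fit recovers the generating coefficients on this noiseless example; with real measurements one would also report goodness-of-fit statistics.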
The authors developed a multiple linear regression model that estimates energy consumption at two types of intersection (i.e., crossroad and roundabout) using traffic demand and hybridization level as input variables. The principal objectives of this paper are to study the influence of intersection type on energy consumption and environmental effects, and to show the relevance of the hybridization level at an intersection. A study by Mustafa and Vougias (1993) demonstrates that vehicle emissions at signalized intersections exceed those at roundabouts by about 50%. In fact, the hydrocarbons (HC) emitted at a signalized intersection are about twice those emitted at a roundabout. In Sweden, a study of the environmental impacts of roundabouts found that vehicle emissions of carbon monoxide (CO) and nitrogen oxides (NOx) at roundabouts are 20%–29% less than the emissions produced at signal-controlled intersections ( Hyden and Varhelyi, 2000 ). Varhelyi (2002) demonstrated that replacing a signalized intersection with a roundabout reduces vehicle emissions of CO and NOx by 29% and 21%, respectively; fuel consumption is also reduced by 28% at roundabouts. Mandavilli et al. (2008) used the signalized and unsignalized intersection design and research aid (SIDRA) software to study the environmental impacts of roundabouts. They concluded that HC, CO, NOx, and CO2 emissions can be reduced by 65%, 42%, 48%, and 59%, respectively, by converting stop-controlled intersections to roundabouts. Another study, by Ahn et al. (2009), shows that roundabouts do not usually lead to a reduction in vehicle emissions and energy consumption compared to other types of intersection. Chamberlin et al. (2011) applied the Paramics microsimulation model in combination with the motor vehicle emission simulator (MOVES) and the comprehensive modal emission model (CMEM) to estimate levels of CO and NOx emissions at intersections.
They concluded that, under congested traffic conditions, a pre-timed traffic signal can reduce vehicle emissions compared to a roundabout. The study by Gastaldi et al. (2014) used a traffic microsimulation tool (S-Paramics) combined with an instantaneous emission estimator (AIRE) to investigate the environmental performance of two intersection types (i.e., roundabout and fixed-time signal control). The authors concluded that a roundabout can decrease pollutants more than fixed-time signal control. Clearly, the literature review presents diverse results regarding energy consumption and environmental impacts at signalized intersections and roundabouts, due essentially to differing road characteristics, vehicle demands, and emission estimation methods ( Gastaldi et al., 2014 ). 3 Modeling energy consumption and emission There is a variety of analytical emission models, and each estimates fuel consumption differently or takes different parameters into account during the estimation ( Demir et al., 2014; Liu et al., 2015 ). Many factors affect the rate of fuel consumption ( Franco et al., 2013; Kim and Choi, 2013 ), and they can be categorized into four general groups: vehicle, environment, driver, and traffic conditions. In this study, the authors concentrate on instantaneous consumption and emission models. 3.1 Instantaneous consumption model This model estimates the instantaneous energy consumption of each vehicle on a road section and is used in the present work because of its simplicity and capacity to produce relevant results. It relies on vehicle characteristics such as mass, efficiency parameters, drag force, and fuel consumption components associated with aerodynamic drag and rolling resistance ( Demir et al., 2011 ).
Thus, the instantaneous energy consumption of a vehicle along an urban roadway section is estimated with the following formula:

(1) f_t = α + β₁ R_t v + β₂ M a² v / 1000 if R_t > 0, and f_t = α if R_t ≤ 0,

where f_t is the fuel consumption per unit of time (mL/s) and R_t is the tractive force, measured in kilonewtons (kN), required to move the vehicle. It is calculated as the sum of the drag force, inertia force, and grade force:

(2) R_t = b₁ + b₂ v² + M a / 1000 + g M ω / 10⁵,

where a is the instantaneous acceleration (m/s²) and v is the speed (m/s) ( Bowyer et al., 1985 ). The energy consumption of vehicular traffic depends strongly on the vehicles' velocity profiles. Table 1 shows the parameters and respective descriptions used in this paper's model. 3.2 Instantaneous emission model Coupled with the traffic simulation model, a microscopic emissions model, EMIT (emissions from traffic), quantifies vehicle emissions ( Cappiello et al., 2002 ). The EMIT model consists of an engine-out (EO) module and a tailpipe (TP) emission module. The first module calculates the instantaneous engine-out emission rate of pollutant i using the instantaneous speed v and acceleration a:

(3) EO_i = α_i + β_i v + γ_i v² + δ_i v³ + λ_i a v if p > 0, and EO_i = α′_i if p = 0,

where α_i, β_i, γ_i, δ_i, λ_i are model coefficients. For a conventional vehicle, the only source of tractive power p is the internal combustion engine (ICE):

(4) p = A v + B v² + C v³ + M a v + M g sin(θ) v,

where A is the rolling resistance coefficient, B is the speed correction to the rolling resistance coefficient, C is the air drag resistance coefficient, M is the vehicle mass (kg), g is the gravitational constant (9.81 m/s²), and θ is the road gradient. The second module calculates the instantaneous tailpipe emission rate TP_i from the engine-out emission EO_i and the catalyst conversion efficiency CCE_i:

(5) TP_i = EO_i · CCE_i,

where CCE_i is defined as the ratio of tailpipe to engine-out emissions for pollutant i.
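As a concrete illustration of the consumption model of Section 3.1, the piecewise form of Eqs. (1)–(2) is straightforward to evaluate per vehicle and per simulation step. In the sketch below, all parameter values (α, β₁, β₂, b₁, b₂, M, ω) are illustrative placeholders; the calibrated values belong in Table 1 and are not reproduced here.

```python
def tractive_force(v, a, M=1400.0, b1=0.27, b2=0.00089, g=9.81, omega=0.0):
    """Eq. (2): drag + inertia + grade force R_t in kN (illustrative b1, b2)."""
    return b1 + b2 * v ** 2 + M * a / 1000.0 + g * M * omega / 1e5

def fuel_rate(v, a, alpha=0.44, beta1=0.09, beta2=0.03, M=1400.0, omega=0.0):
    """Eq. (1): instantaneous fuel consumption f_t in mL/s."""
    Rt = tractive_force(v, a, M=M, omega=omega)
    if Rt > 0:
        return alpha + beta1 * Rt * v + beta2 * M * a ** 2 * v / 1000.0
    return alpha  # coasting or braking: only the idle term remains
```

Summing `fuel_rate` over each vehicle's velocity profile (one call per time step) then gives the trip-level consumption compared in the results sections.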
It is calculated in the EMIT model as:

(6) CCE_i(t) = m_i EO_i(t) + q_i.

Table 2 summarizes the values of the EMIT parameters for each estimated pollutant. The tailpipe emission of CO2 is not markedly different from the engine-out emission ( Ma et al., 2012 ). The EMIT model thus estimates the engine-out emissions first and then the tailpipe emissions. In fact, vehicle emissions are greatly influenced by vehicle speed and acceleration, as well as by the vehicle's make and model. 4 Hybrid electric vehicle Hybrid electric vehicles (HEVs) are one solution to the world's need for cleaner and more fuel-efficient vehicles. In fact, HEV technology is vital to the overall automotive industry, as well as to the user, in terms of both better fuel economy and environmental effect ( Mi et al., 2011 ). An HEV uses an engine and an electric motor/generator for propulsion and relies on power electronic converters and batteries in addition to mechanical and hydraulic systems ( Lam and Louey, 2006; Zhao et al., 2013 ). The major benefits of HEVs include efficiency through improved technology, such as regenerative braking, less engine idling, and efficient engine operation. Additional benefits are better energy consumption and drivability, since the electric motor characteristics better match the road load, reducing vehicle emissions and energy consumption ( Lajunen, 2014; Lim et al., 2014 ). The advanced vehicle simulator (ADVISOR) ( Markel et al., 2002 ) is used to model the energy consumption and emissions of HEVs. ADVISOR was created in the Matlab/Simulink environment, with each subsystem associated with a Matlab file; moreover, the program is flexible, offering users the possibility to modify blocks if needed ( Markel and Wipke, 2001 ). 5 Description of SUMO and data collection 5.1 Description of simulation of urban mobility Simulation of urban mobility (SUMO) is a traffic simulation tool that was first implemented in 2002 ( Krajzewicz et al., 2002 ).
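Before moving on, the full engine-out-to-tailpipe chain of Eqs. (3)–(6) can be sketched end to end. The coefficient values below (A, B, C, the per-pollutant tuple, m, q) are placeholders for illustration only, not the calibrated EMIT values of Table 2.

```python
import math

def tractive_power(v, a, A=0.1326, B=2.7e-3, C=1.0e-3, M=1325.0, theta=0.0, g=9.81):
    """Eq. (4): tractive power p of a conventional vehicle (placeholder A, B, C)."""
    return A * v + B * v ** 2 + C * v ** 3 + M * a * v + M * g * math.sin(theta) * v

def engine_out(v, a, coef, alpha_idle):
    """Eq. (3): engine-out rate EO_i; coef = (alpha, beta, gamma, delta, lam)."""
    if tractive_power(v, a) > 0:
        al, be, ga, de, la = coef
        return al + be * v + ga * v ** 2 + de * v ** 3 + la * a * v
    return alpha_idle  # p = 0: idle emission rate alpha'_i

def tailpipe(v, a, coef, alpha_idle, m, q):
    """Eqs. (5)-(6): TP_i = EO_i * CCE_i, with CCE_i = m * EO_i + q."""
    eo = engine_out(v, a, coef, alpha_idle)
    return eo * (m * eo + q)
```

The same (v, a) samples drive both this module and the fuel model, so one pass over a vehicle trajectory yields consumption and all pollutants together.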
It is an open-source road traffic simulation package based on microscopic car-following models ( Han et al., 2012; Krauss et al., 1997 ). SUMO contains a suite of applications and requires a description of the road network and the traffic demand. SUMO road networks include intersections, junctions, and traffic lights. The demand file uses existing origin-destination (O-D) matrices, converting them into route descriptions. Much information is needed to build a route file, such as each vehicle's physical properties and the route it takes; specific attributes such as acceleration, deceleration, vehicle length, and maximum speed should also be taken into account. 5.2 Data collection The authors collected the available data by video in two steps. The first involves videotaping traffic movements at the intersection with a video camera, and the second involves visually obtaining traffic counts from the video. The camera is designed to provide a full view when mounted above the intersection, and it was placed near the roundabout to monitor the traffic flow both entering and leaving the roundabout. The camera was mounted perpendicular to the ground, allowing the video image to be relatively distortion-free in all directions. The number of cars passing through each section was counted every 5 min between 06:00 and 10:00 a.m. The authors also measured the turning movements of each section for all vehicles passing through the roundabout. To determine the passing direction, the authors measured the traffic flow from one direction to each of the other three directions (i.e., turning left, going straight, and turning right). 6 Implementation of crossroad and roundabout of Sousse (Tunisia) using SUMO An intersection's geometric design is of great interest for safety. The authors of this study implemented two types of intersections, a roundabout and a crossroad, using SUMO. A roundabout offers simple traffic control and fewer traffic conflict points.
An intersection consists of incoming and outgoing edges, where an "edge" represents a road with two lanes. The geometric dimensions of the Sousse roundabout were obtained from real measurements. Fig. 1 illustrates the characteristics of the roundabout and crossroad, and Table 3 contains the real dimensions of the Sousse roundabout, which is composed of four entry points and four exit destinations. It is one of the most important roundabouts in Tunisia because it is located in an active zone near a university campus. It connects to a university hospital (Sahloul Hospital) in the west, the urban road to the east, the center of the city to the north, and industrial zones to the south. The roundabout has a central island diameter of 36.50 m, and the circulating lane widths range from 11.68 to 12.50 m. The crossroad geometry dimensions were approximated to the Sousse roundabout dimensions in order to isolate the influence of intersection type on energy consumption. The real dimensions of both the roundabout and the crossroad are used as inputs to implement them in SUMO ( Fig. 2 ). To estimate fuel consumption and emissions, the instantaneous consumption and emissions models are coupled with the SUMO micro-simulation tool. The flowchart in Fig. 3 explains the coupling process. The microscopic kinematic variables, such as the velocity and acceleration of each vehicle, are obtained by using SUMO to simulate the dynamic traffic flow, and the traffic simulation outputs are then used as inputs for the instantaneous consumption and emissions models. 7 Results and discussion 7.1 Influence of congestion on energy consumption and emissions The rapid rise of traffic demand has led to increasingly severe congestion. Thus, proper management of vehicle flow at intersections can significantly reduce congestion problems. Fig. 4 shows the flow entering the Sousse roundabout between 06:00 and 10:00 a.m.
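The coupling of SUMO with the consumption and emission models hinges on extracting each vehicle's speed and acceleration from the simulator's output. A minimal parsing sketch, assuming the timestep/vehicle XML layout of SUMO's floating-car-data export (the `--fcd-output` option, with `time`, `id`, and `speed` attributes); acceleration is recovered by finite differencing:

```python
import xml.etree.ElementTree as ET

def speed_profiles(fcd_xml):
    """Collect {vehicle id: [(time, speed), ...]} from an FCD export string."""
    profiles = {}
    for step in ET.fromstring(fcd_xml).iter("timestep"):
        t = float(step.get("time"))
        for veh in step.iter("vehicle"):
            profiles.setdefault(veh.get("id"), []).append((t, float(veh.get("speed"))))
    return profiles

def with_acceleration(profile):
    """Finite-difference acceleration a = dv/dt for one vehicle's (t, v) trace."""
    return [(t1, v1, (v1 - v0) / (t1 - t0))
            for (t0, v0), (t1, v1) in zip(profile, profile[1:])]
```

Each resulting (t, v, a) triple is then fed to the instantaneous models of Section 3.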
To illustrate, a Tuesday was selected to represent a working day with good weather conditions. In addition, the authors present in Table 4 the turning flow proportions that reflect the origin-destination flow distribution throughout the intersection. More precisely, the authors present the evolution of traffic flow in Fig. 5 to illustrate the congested and uncongested phases: the congested phase ranges from 07:45 to 08:45 a.m., and the uncongested period lasts from 09:00 to 10:00 a.m. Table 5 reports the energy consumption and emissions in the congested and uncongested cases for the roundabout and the crossroad. The authors estimate the main pollutants (CO, NOx, CO2) using the EMIT model. The energy consumption and vehicle emissions at the roundabout are lower than at the crossroad in both the congested and uncongested cases. Thus, the geometric characteristics of the intersection type have important effects. Traffic signals require vehicles to stop at a red signal, which increases negative impacts such as delay time and vehicle consumption and emissions. The roundabout, in contrast, has a positive impact on the environment, since it is a viable alternative for reducing vehicular emissions. The energy consumption and emissions in the congested case exceed those in the uncongested case. This is due to the higher speed fluctuations and frequent stops that occur with congestion, which increase fuel consumption and consequently result in higher emissions. When vehicles must wait at signals to cross intersections, drivers keep the engines on and, as a result, extra fuel is consumed. Different studies report different results. For example, studies conducted in Sweden found that turning a signalized intersection into a roundabout can reduce CO and NOx emissions by 29% and 21%, respectively, and fuel consumption by 28% ( Varhelyi, 2002 ).
Other research using the SIDRA software revealed that a roundabout could reduce HC, CO, NOx, and CO2 emissions by as much as 65%, 42%, 48%, and 59%, respectively ( Mandavilli et al., 2008 ). 7.2 Influence of traffic flow (demand) on energy consumption at a roundabout vs. a crossroad The authors computed energy consumption for different flow levels (from normal flow to saturated flow). Fig. 6 describes the evolution of energy consumption versus traffic flow in two phases: before and after the saturation flow. Before saturation, energy consumption increases with traffic flow. The second phase (after saturation) is characterized by an increase in fuel consumption despite a significant reduction in traffic flow, a consequence of the saturation of the intersection. In fact, the ever-increasing vehicular flow at roundabouts and crossroads is one of the major causes of environmental and energy problems. Comparing the two types of intersection, the authors note that energy consumption at the crossroad exceeds energy consumption at the roundabout. The main cause of increased energy consumption at the crossroad is the slowing and stopping of vehicles during the red phases; the engine's stop-and-go operation, braking, and acceleration significantly affect a vehicle's fuel consumption and emission rates. In contrast, the roundabout is an efficient form of intersection control and can improve traffic flow by reducing intersection delays and stopped vehicles. Fig. 6 illustrates the main finding that roundabouts have significant advantages in terms of energy consumption compared to crossroad intersections, and that traffic flow greatly influences energy consumption at both roundabouts and crossroad intersections.
7.3 Impact of hybridization level on energy consumption at roundabouts and crossroads A linear regression analysis was used to study the influence of hybridization level and traffic flow on energy consumption for both types of intersection (i.e., roundabout and crossroad). The data used were collected at the Sousse roundabout between 06:00 and 10:00 a.m.; the entering flows range from 0.2 veh/s to 0.45 veh/s, covering the congested and uncongested cases. The following formula presents the developed regression model:

(7) c_i = β₀ + β₁ F + β₂ H + ε, i = round, cross,

where c_round and c_cross denote the energy consumption at the roundabout and the crossroad, and F and H are the input variables for traffic flow and hybridization level, respectively. The regression results in Table 6 show that the energy consumption near a roundabout or signalized intersection can be modeled as a linear combination of traffic flow and hybridization level. The energy consumption models are statistically significant for both the roundabout and the crossroad. Furthermore, the fleet's level of hybridization and the entering flow largely affect fuel consumption; thus, the independent input variables (traffic flow and hybridization level) explain the dependent variable (energy consumption). The multicollinearity check among variables indicates that the variance inflation factor (VIF) values are all less than 10, meaning there is no multicollinearity. Therefore, multiple linear regression is appropriate for estimating energy consumption at both the crossroad and the roundabout. Analyzing the results in Table 6 , one can see that the influence of the hybridization level and flow variables on energy consumption is greater for the roundabout than for the crossroad. This is due essentially to the geometric characteristics of the intersection type and the traffic rules: the increased delay at the crossroad intersection is caused by the traffic light.
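The coefficients of Eq. (7) can be recovered by ordinary least squares. A self-contained sketch using the 3x3 normal equations, together with the MAPE metric used for validation; the flow/hybridization values exercised in the test are synthetic, chosen only to illustrate the fit, not the paper's data:

```python
def fit_consumption(F, H, c):
    """OLS fit of c = b0 + b1*F + b2*H (Eq. (7)) via the normal equations."""
    n = len(c)
    X = [[1.0, F[k], H[k]] for k in range(n)]
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(3)] for i in range(3)]
    b = [sum(X[k][i] * c[k] for k in range(n)) for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting on the augmented system
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def mape(actual, predicted):
    """Mean absolute percentage error, as used to validate the fitted models."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
```

With observed consumption for each (F, H) pair, `fit_consumption` returns (β₀, β₁, β₂), and `mape` checks the agreement between measured and predicted values.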
However, the process of entering the roundabout only requires respecting a minimum safety distance. Moreover, the regenerative braking mode triggered by stop-and-go maneuvers affects the fuel consumption. In conclusion, the intersection type, hybridization level, and traffic demand are notable factors in the analysis of energy consumption and in the proposal of new strategies to manage and reorganize road traffic. In terms of mean absolute percentage error (MAPE), the crossroad and roundabout error values are 2.03% and 1.45%, respectively, and residual analyses indicate that the linear regression approach is reasonable: there are no large differences between the measured and predicted values. To illustrate the influence of hybridization level on energy consumption for the roundabout and crossroad, the authors present the evolution of mean energy consumption versus hybridization level in Fig. 7 . The total mean consumption c_mean is obtained by the following equation ( Boubaker et al., 2015; Zahabi et al., 2014 ):

(8) c_mean = ( Σ_{j=1}^{N} ∫₀^T c_j(t) dt ) / ( Σ_{j=1}^{N} ∫₀^T v_j(t) dt ),

where c_j(t) and v_j(t) are the instantaneous energy consumption and instantaneous velocity of vehicle j, N is the total number of vehicles, and T is the total time. The hybridization level largely influences the mean energy consumption for both the roundabout and the crossroad, in both the congested and uncongested cases. 8 Conclusions In recent years, significant interest in energy consumption and vehicle emissions, combined with the influence of vehicle technology, has grown globally. The present study reveals the energy and environmental impacts of a crossroad and a roundabout. Hybrid electric vehicles play an important role in reducing fuel consumption and emissions. The authors have developed instantaneous energy consumption and emission models coupled with a road traffic simulator (SUMO).
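The fleet-averaged metric of Eq. (8) amounts to total fuel over total distance. A discrete sketch using trapezoidal integration over sampled per-vehicle traces (mL per metre, given c in mL/s and v in m/s); the traces in the test below are synthetic illustrations:

```python
def _trapz(trace, idx):
    """Trapezoidal time-integral of component idx (1 = c, 2 = v) of a trace."""
    return sum((trace[k + 1][0] - trace[k][0]) * (trace[k][idx] + trace[k + 1][idx]) / 2.0
               for k in range(len(trace) - 1))

def mean_consumption(fleet):
    """Eq. (8): sum of integral(c_j) over sum of integral(v_j) for all vehicles.
    fleet: list of per-vehicle traces, each a list of (t, c, v) samples."""
    fuel = sum(_trapz(trace, 1) for trace in fleet)
    dist = sum(_trapz(trace, 2) for trace in fleet)
    return fuel / dist
```

An idling vehicle contributes fuel but no distance, which is why congestion pushes c_mean up at both intersection types.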
As a result, the authors illustrate the influence of congestion and demand variation (traffic flow) on energy consumption and vehicle emissions for crossroads and roundabouts using real data collected at the Sousse roundabout. The collected data are used as input for both the crossroad and the roundabout in order to illustrate the influence of intersection geometry on energy consumption and emissions. The authors have also developed an energy consumption model for the roundabout and crossroad that takes into account the hybridization level and traffic demand. The results underscore the importance of intersection type in reducing energy consumption and vehicle emissions. Hybridization technology is likewise considered an important solution for reducing consumption and emissions. Future research, such as studying energy consumption and emissions over whole road traffic networks, can extend this paper's contribution. In addition, the authors can integrate the hybridization and electrification of the vehicular fleet to promote sustainable consumption.
|
[
"AHN",
"BOUBAKER",
"BOWYER",
"CAPPIELLO",
"CHAMBERLIN",
"CHEN",
"COELHO",
"DEMIR",
"DEMIR",
"FRANCO",
"GASTALDI",
"HAN",
"HYDEN",
"KIM",
"KRAJZEWICZ",
"KRAJZEWICZ",
"KRAUSS",
"LAJUNEN",
"LAM",
"LIM",
"LIU",
"MA",
"MANDAVILLI",
"MARKEL",
"MARKEL",
"MI",
"MUSTAFA",
"PANDIAN",
"SEKHAR",
"VARHELYI",
"ZAHABI",
"ZAMBONI",
"ZHAO"
] |
e76d462efd3445c38f8198b523e99436_Fatty acids regulate perilipin5 in muscle by activating PPARδS_10.1194_jlr.M038992.xml
|
Fatty acids regulate perilipin5 in muscle by activating PPARδ[S]
|
[
"Bindesbøll, Christian",
"Berg, Ole",
"Arntsen, Borghild",
"Nebb, Hilde I.",
"Dalen, Knut Tomas"
] |
The surface of lipid droplets (LDs) in various cell types is coated with perilipin proteins encoded by the Plin genes. Perilipins regulate LD metabolism by selectively recruiting lipases and other proteins to LDs. We have studied the expression of perilipins in mouse muscle. The glycolytic fiber-enriched gastrocnemius muscle expresses predominantly Plin2-4. The oxidative fiber-enriched soleus muscle expresses Plin2-5. Expression of Plin2 and Plin4-5 is elevated in gastrocnemius and soleus muscles from mice fed a high-fat diet. This effect is preserved in peroxisome proliferator-activated receptor (PPAR)α-deficient mice. Mouse muscle derived C2C12 cells differentiated into glycolytic fibers increase transcription of these Plins when exposed to various long chain fatty acids (FAs). To understand how FAs regulate Plin genes, we used specific activators and antagonists against PPARs, Plin promoter reporter assays, chromatin immunoprecipitation, siRNA, and animal models. Our analyses demonstrate that FAs require PPARδ to induce transcription of Plin4 and Plin5. We further identify a functional PPAR binding site in the Plin5 gene and establish Plin5 as a novel direct PPARδ target in muscle. Our study reveals that muscle cells respond to elevated FAs by increasing transcription of several perilipin LD-coating proteins. This induction renders the muscle better equipped to sequester incoming FAs into cytosolic LDs.
|
The main function of the muscle is to perform work. Energy to drive contraction is primarily obtained by metabolizing glucose or fatty acids (FAs), which the muscle stores as glycogen or in triacylglycerol (TAG)-containing lipid droplets (LDs), respectively. These energy-stores may impact cellular signaling when they exceed the need for storage. High intracellular content of myocellular LDs is well-known to correlate with insulin resistance ( 1 , 2 ). This is, however, not an absolute phenomenon, as myocellular LD content increases in response to exercise and may even be higher in athletes than obese insulin-resistant individuals ( 3 ). Some beneficial effects of exercise are believed linked to increased capacity to oxidize LDs to prevent accumulation of lipid metabolites. Metabolism of muscular LDs is poorly described. Similar to other cells, LDs in muscle consist of a protein coat, a single monolayer of phospholipids, and an inner core of neutral lipids, such as TAG or cholesteryl esters (CEs) ( 4–6 ). Changes in the composition of proteins embedded into the LD surface likely control the release of FAs from myocellular LDs. The perilipin proteins are particularly interesting in this context, as members of this gene family are known to regulate lipolysis. Mammalian perilipins derive from a gene family of ancient origin encoded by five Plin genes ( 7–9 ), which encode for the LD-binding perilipin1 ( 10 , 11 ), perilipin2/ADRP/adipophilin ( 12 ), perilipin3/TIP47 ( 13 ), perilipin4/S3-12 ( 14 ), and perilipin5/Lsdp5/MLDP/oxPAT ( 7 , 15 , 16 ). These proteins are uniquely expressed. Expression of perilipin1 is confined to adipose and steroidogenic cells ( 17 , 18 ). Perilipin2 and -3 are broadly expressed ( 12 , 19 ), expression of perilipin4 is limited to adipose cells, brain, skeletal muscle, and heart ( 14 , 19 ), whereas perilipin5 is expressed in oxidative tissues ( 7 , 15 , 16 ). Muscle tissues express perilipin 2-5. 
Although the five perilipin proteins are likely to have unique features, they are all commonly shown to affect LD accumulation in various cell types. It is believed that the perilipins regulate lipolysis. This is best characterized for perilipin1 ( 4 ). Perilipin1 protects against adipose lipolysis when the energy status of the organism is high (postprandial), but facilitates lipolysis when stored energy needs to be released (fasted). These events are controlled by the phosphorylation status of perilipin1, which determines recruitment of lipolytic enzymes to the LD surface. This regulatory mechanism cannot be compensated for by other perilipins in the lack of perilipin1 ( 20–23 ). Distinct functional roles for the remaining perilipins are less clear. Other perilipin members similarly interact with lipases and associated factors at the LD surface ( 24–27 ). However, functional compensation precludes characterization of individual perilipins in cells and mice ( 28 ). It is clear that Plin2 - and Plin5 -null mice have reduced hepatic and cardiac accumulation of lipids, respectively ( 29 , 30 ). Perilipin5 has a unique ability to facilitate physical linkage of LDs and mitochondria when ectopically expressed ( 31 , 32 ), but the significance of this association is unclear. Depending on cell type, ectopic expression of perilipin5 either prevents ( 7 ) or enhances lipolysis ( 15 ). An obligatory role for perilipin5-mediated association of LDs and mitochondria for enhanced oxidation ( 31 , 33 ) might explain the discrepancies in the initial reports. The functions of perilipin3 and perilipin4, other than binding to LDs, are poorly understood. Depending on physiological conditions and the presence of other perilipin family members, perilipins may be cytosolic, bound to LDs, or rapidly degraded and thus nearly absent in the cell ( 7 , 34 , 35 ). 
Due to their role in lipolysis, the type of perilipins being expressed and bound to LDs are an important determinant of cellular LD metabolism. It is therefore important to identify transcription factors and pathways that control expression of the various Plin genes. Several of the Plin genes contain evolutionary conserved cis-regulatory elements occupied by peroxisome proliferator-activated receptors (PPARs). The PPARs belong to the nuclear receptor superfamily and consist of the isotypes PPARα, PPARβ/δ (hereafter referred to as PPARδ), and PPARγ. They all heterodimerize with retinoid X receptors (RXRs) onto mainly DR-1 type PPAR response elements (PPREs) in the promoter region of target genes ( 36 , 37 ). Adipose expression of Plin1 and Plin4 is switched on by binding of PPARγ to their respective promoter regions ( 19 , 38–40 ). Plin2 is regulated by PPARα ( 41–43 ) and PPARδ ( 44–46 ) in various cell types, whereas expression of Plin3 seems unaffected by activation of PPARs ( 19 , 41 ). Regulation of Plin5 by PPARs is poorly understood. Expression of Plin5 is enhanced by activation of PPARα ( 7 , 15 , 16 ), but no PPRE has been identified in the Plin5 gene. Little is known regarding transcription factors important for expression of perilipins in muscle. Given the known regulation of Plin genes by PPARs in other tissues, we analyzed FA and PPAR regulation of Plin genes. Our analyses revealed an unexpected importance for PPARδ as a FA sensor regulating the expression of the Plin4 and Plin5 genes in muscle. We further identified a conserved PPRE in the Plin5 gene. This PPRE is essential for FA-stimulated expression of perilipin5 and establishes Plin5 as a direct PPARδ target gene. EXPERIMENTAL PROCEDURES Materials Restriction enzymes were purchased from New England BioLabs (Ipswich, MA). PfuTurbo® DNA polymerase was purchased from Stratagene (La Jolla, CA). The PPAR ligands, WY-14643, GW6471, GSK0660, and GW9662 were purchased from Sigma (St. Louis, MO). 
GW501516, rosiglitazone (Rosi), troglitazone (Tro), and GW1929 were obtained from Enzo Life Sciences (Farmingdale, NY). Reagents for quantitative real-time PCR were from Applied Biosystems (Life Technologies Corporation, Carlsbad, CA). Cell culture reagents, oligonucleotides, FAs, and other chemicals were purchased from Sigma. All other chemicals and biochemicals were of the highest quality available from commercial vendors. Cloning of expression vectors Full-length pENTR-4r-3r Plin1-5 vectors have been described elsewhere ( 47 ). Full-length cDNAs encoding mouse PPARα ( 48 ), PPARδ ( 49 ), Addgene plasmid 8891, PPARγ1 and -2 (3T3-L1 cDNA), and alternative translation variants of Plin2-aa123-425 and Plin5-aa16-463 were cloned into the pDONR-221 P4r-P3r vector (Life Technologies Corporation). A Kozak translation initiation site ( 50 ) was generated by inserting ACC in front of the naturally occurring AUG start codon, and stop codons were changed to TAG. To generate an expression vector without a cDNA insert, a short multi-cloning-site linker (KHSS; Start-KpnI-HindIII-SacI-SpeI: ATG- GGTACC- AAGCTT -GAGCTC- ACTAGT ) was amplified for cloning into the pDONR-221 P4r-P3r vector to be exchanged with the suicidal attR4r-ccdB-chloramphenicol-attR3r- cassette. The primers used are listed in Table 1 . The amplified PCR products were recombined into the pDONR-221 P4r-P3r vector using BP clonase II (Life Technologies Corporation) to produce pENTR-4r-3r-PPAR vectors. The V5-6x-His-Gly tag was amplified from a synthesized template. The PCR product was recombined into pDONR-221 P1-P4 to generate the pENTR-R1-R4-V5-6xHisG vector. The pcDNA3-DEST-R4r-R3r vector was generated by replacing the multi-cloning site of pcDNA3 (Life Technologies Corporation) with an attR4r-ccdB-chloramphenicol-attR3r- cassette (R4r-R3r). The R4r-R3r cassette was amplified with PfuTurbo® DNA polymerase (Stratagene) using pDONR-221-P4r-P3r as a template (primers see Table 1 ). 
The amplified PCR product and the pcDNA3 vector was digested with HindIII and ApaI, ligated, and transformed into ccdB Survival TM -T1R cells (Life Technologies Corporation) to generate the pcDNA3-DEST-R4r-R3r vector. The similar strategy was used to generate a pcDNA3-DEST-R1-R2 vector. The pcDNA3-DEST-R1-R3r vector was generated by digesting the pcDNA3-DEST-R1-R2 and pcDNA3-DEST-R4r-R3r vectors with Pst I followed by ligation of the fragment containing the R3r-att site into the cut pcDNA3-DEST-R1-R2 vector. All vectors were confirmed by sequencing (Macrogen, Korea). The pcDNA3-DEST-R4r-R3r was recombined with the pENTR-4r-3r-PPAR vectors using LR clonase II (Life Technologies Corporation) to generate the pcDNA3-mRXRα, pcDNA3-mPPARα, pcDNA3-mPPARγ1, pcDNA3-mPPARγ2, and pcDNA3-mPPARδ expression vectors. The pcDNA3-DEST-R1-R3r was recombined with the pENTR-R1-R4-V5-6xHisG and pENTR-4r-3r-PPAR vectors to generate the pcDNA3-V5-His-PPAR expression vectors. Cloning and mutagenesis of the Plin5 reporter The mouse Plin2 and Plin4 LUC reporters have been described elsewhere ( 19 , 41 ). The full-length mouse Plin5 promoter (−2324/+244) was amplified by a PCR strategy described previously ( 51 ), cloned into pPCR-Script (Stratagene), digested out using Hind III, and inserted into the pGL3-Basic luciferase reporter vector (Promega, Madison, WI). Site-directed mutagenesis of the DR-1 element was performed with PCR as described previously ( 51 ). Primers used are listed in Table 1 . Preparation of fatty acids FAs were complexed to low-endotoxin FA-free BSA (Sigma, #A8806). FA (6 mM)/BSA (2.4 mM) stock solutions were generated by dissolving 6 μmol FAs in 60 μl 0.1 M NaOH, followed by FA binding to BSA at 50°C for 5 min. FA stock solutions were stored under argon at −80°C to prevent oxidization. 
FAs used: myristic acid (C14:0), palmitic acid (C16:0), stearic acid (C18:0), oleic acid (OA; cis-C18:1 n-9), vaccenic acid (cis-C18:1 n-7), linoleic acid (LA; C18:2 n-6), and γ-linolenic acid (C18:3 n-6). Culturing and transfection of cells C2C12 (ATCC, #CRL-1772) and Sol8 (ATCC, #CRL-2174) cells were sub-cultured in high-glucose DMEM (Sigma, #5648, supplemented with 5.958 g HEPES, 1.5 g NaHCO₃, and 0.11 g sodium pyruvate/l) in the presence of penicillin (50 U/ml), streptomycin (50 μg/ml), and 20% FBS (Gibco, #26140-079, Life Technologies Corporation) at 37°C in 5% CO₂. Myotube differentiation was initiated by exchanging the 20% FBS with 2% horse serum (Diff-medium). Medium was refreshed every third day. The myoblasts decreased in number with differentiation and were replaced by a gradual increase in multi-nuclear myotubes from day 2 of differentiation (supplementary Fig. II ). Spontaneous contraction was observed from day 4. The differentiation marker paired box protein 7 (Pax7) decreased, myogenic differentiation 1 (Myod1) peaked at days 2–3, whereas myosin heavy chain 2A (Myh2) increased drastically (>5,000-fold) until day 7 (result not shown). These assays confirm that both cell lines were well differentiated into contractile myotubes. Unless otherwise indicated, C2C12 and Sol8 cells were seeded in 12-well dishes at a density of 3 × 10⁴ cells/well. Two days later, differentiation was initiated by changing to Diff-medium. For transfection experiments, C2C12 and Sol8 cells were seeded at 6 × 10⁴ cells/well in antibiotic-free medium. The following day, cells were given 1 ml antibiotic-free Diff-medium prior to transfection with 2 μg DNA:4 μl Lipofectamine2000 complexed in 200 μl OPTI-MEM (Life Technologies Corporation). After 6 h, the medium was replaced with Diff-medium containing antibiotics, and cells were grown for a maximum of 4 days before being harvested.
Silencing of PPARδ using siRNA Duplexes of siRNAs (Sigma) targeting mouse PPARδ (PPARδ siRNA1: sense 5′-CCAUCAUUCUGUGUGGAGAtt-3′, antisense 5′-UCUCCACACAGAAUGAUGGtt-3′; PPARδ siRNA2: sense 5′-CCAAGUUCGAGUUUGCUGUtt-3′, antisense 5′-ACAGCAAACUCGAACUUGGtt-3′) and negative control siRNA (sense 5′-UAACGACGCGACGACGUAAtt-3′, antisense 5′-UUACGUCGUCGCGUCGUUAtt-3′) were transfected into C2C12 cells by reverse transfection. Briefly, 10 pmol RNAi duplexes (a 1:1 mix of siRNA1 and siRNA2 was used to target PPARδ) were complexed with 3 μl Lipofectamine® RNAiMAX reagent (Life Technologies Corporation) in 350 μl OPTI-MEM per well (24-well plate). Trypsinized C2C12 cells (3 × 10⁴ cells/well) were added in 0.5 ml growth medium (containing 20% serum, but no antibiotics). Five hours after the transfection, differentiation was initiated by changing to Diff-medium. GAPDH siRNA (Life Technologies Corporation, #4390849) was used to establish transfection efficiency. The knockdown efficiency was found to be comparable 3 and 4 days after siRNA transfection. Protein isolation and Western blotting Cells were harvested in lysis buffer [1× PBS, 1% NP-40, 0.1% SDS, and complete Protease Inhibitor Cocktail (Roche, #4693116001); see ( 41 )], frozen, and sonicated for 2 × 2 s using a Branson Sonifier 450 (Branson Ultrasonic S.A.). Frozen tissues were homogenized in lysis buffer (as for cells) using a Precellys®24 (Bertin Technologies, France) for 2 × 20 s at 5,000 rpm. Protein concentrations were quantified by BC Assay (Interchim, France, #FT-40840). Primary antibodies against mouse perilipin4/S3-12 (peptide-1 sequence MSASGDGTRVPPKSKGC) and perilipin5 (peptide-1 sequence CEAEPPRGQGKHTMMPELDF) were raised in rabbits (Affinity BioReagents). The novel antibodies were verified not to recognize any of the other mouse perilipin proteins (supplementary Fig. I ).
Proteins were separated by SDS-PAGE on 4–12% NuPAGE (Life Technologies Corporation) or 10% Criterion™ Precast (Bio-Rad, Hercules, CA) gels, and transferred to a nitrocellulose membrane (GE Healthcare, UK). Membranes were incubated with the following primary antibodies: polyclonal rabbit anti-mouse perilipin1 ( 18 , 21 ) (1:2,000), rabbit anti-mouse perilipin2/ADFP (Novus Biologicals, #NB110-40877, 1:400), rabbit anti-mouse perilipin3/TIP47 ( 52 ) (1:4,000), rabbit anti-mouse perilipin4/S3-12 (10 μg/ml), rabbit anti-mouse perilipin5/LSDP5 (9.2 μg/ml), or His antibody (Abcam, #ab1187, 1:5,000). β-actin (Sigma, #A5441, 1:10,000) or GRP78 (BD Transduction Laboratories, #610978, 1:1,000) were used as loading controls. Following binding of primary antibodies, membranes were incubated with species-specific horseradish peroxidase-labeled secondary antibodies (Abcam, goat anti-rabbit IgG-HRP, #ab6721, 1:10,000 and rabbit anti-mouse IgG-HRP, #524567, 1:10,000) and binding was detected using ECL Plus (GE Healthcare), or with alkaline phosphatase-conjugated species-specific secondary antibodies using the Western Breeze® Chemiluminescent kit (Life Technologies Corporation). Chemiluminescent signals were visualized by exposure to Hyperfilm ECL (GE Healthcare). Carestream MI SE was used to quantify Western blots. Preparation and analysis of RNA Cells were lysed in 500 μl 1× Total RNA Lysis Solution (#4305895, Life Technologies Corporation) per well (12-well plate) and frozen at −80°C before isolation. Total RNA from cell extracts was isolated using an ABI 6100 Nucleic Acid PrepStation with the preprogrammed "RNA-Cell method" (Life Technologies Corporation). Muscle tissue was homogenized for 2 × 30 s at 5,000 rpm with zirconium dioxide ceramic beads (1.4 mm; #03961-1-103) in a Precellys®24 homogenizer (Bertin Technologies). Total RNA was subsequently isolated using the RNeasy® Mini kit (Qiagen, #74104).
RNA purity and quantity were determined using a NanoDrop ND-1000 spectrophotometer (Thermo Scientific, Waltham, MA). RNA from in vivo studies was subjected to an additional quality check using a Bioanalyzer prior to gene expression analysis (Agilent Technologies, Santa Clara, CA; kit #5067-1511). Total RNA (cells, 12 ng/μl; muscle, 50 ng/μl; and liver, 12 ng/μl) was reverse transcribed into single-stranded cDNA using the high capacity cDNA reverse transcription kit (Life Technologies Corporation, #4368814). Quantitative real-time PCR amplification (1 μl cDNA reaction in 20 μl reaction volume) was performed using TaqMan® Universal PCR Master Mix on an ABI 7900HT system (Life Technologies Corporation) operating with standard settings. RNA was analyzed using predesigned TaqMan® Low Density Custom Arrays (liver and soleus) or predesigned single assays (gastrocnemius and cell cultures). Assays used: Plin1, #Mm00558672_m1; Plin2, #Mm00475794_m1; Plin3, #Mm00482206_m1; Plin4, #Mm00491061_m1; Plin5, #Mm00508852_m1; PPARα, #Mm00440939_m1; PPARδ, #Mm01305434_m1; PPARγ, #Mm01184322_m1; 36B4, #Mm00725448_s1; TBP, #Mm00446973_m1; and GAPDH, #Mm99999915_g1. Data were analyzed in RQ Manager using the ΔΔCt method. Results are presented as gene expression ± standard deviation (SD) relative to the endogenous control (2^−ΔΔCt). Reporter gene expression assay C2C12 cells (5 × 10³ cells/well) were seeded in 96-well dishes in 75 μl antibiotic-free medium. The next day, 75 ng DNA (50 ng reporter, 10 ng expression vectors, and 5 ng pRL) and 0.5 μl Lipofectamine 2000 were mixed in 2 × 10 μl OPTI-MEM I (Life Technologies Corporation) and added to cells incubated in 75 μl Diff-medium. After 5 h, the culture medium was replaced with Diff-medium and cells were incubated for 2 days. Fresh medium containing FAs and PPAR antagonists was added for an additional 24 h. Cells were washed in 1× PBS and lysed in 20 μl Passive Lysis buffer (#E194A, Promega).
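The ΔΔCt analysis mentioned above (relative expression = 2^−ΔΔCt against an endogenous control and a calibrator sample) is plain arithmetic on threshold cycles. A minimal sketch; the Ct values in the example are hypothetical:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt: target gene normalized to the endogenous control (e.g., TBP),
    expressed relative to a calibrator sample (e.g., untreated cells)."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# Hypothetical: treated Plin5 Ct 28 / TBP Ct 22 vs. control Plin5 Ct 30 / TBP Ct 22
assert relative_expression(28.0, 22.0, 30.0, 22.0) == 4.0  # 4-fold induction
```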
Dual luciferase activity was determined using Dual-Luciferase® Reporter Assay System (#E1910, Promega) and luciferase activity measured with a Synergy 2 Luminometer (BioTek, Winooski, VT). ChIP experiments Chromatin immunoprecipitation (ChIP) experiments were performed as described previously with minor modifications ( 53 ). Briefly, C2C12 cells were cross-linked with 1% formaldehyde for 10 min. Cross-linking was terminated by 10 min incubation with 0.125 M glycine. Cells were washed twice in cold 1× PBS and harvested in lysis buffer (1% SDS, 10 mM EDTA, 50 mM Tris-HCl, pH 8.0) containing complete Protease Inhibitor Cocktail (Roche, #4693116001). Lysed cells were sonicated using a Bioruptor (Diagenode, Belgium) to fragments of 300–500 bp. Chromatin was collected by centrifugation, concentration determined by A260, and diluted in RIPA buffer (0.1% SDS, 0.1% Na-deoxycholate, 1% Triton X-100, 1 mM EDTA, 0.5 mM EGTA, 140 mM NaCl, 10 mM Tris-HCl, pH 8.0) and immunoprecipitated with 2 μg antibody against RNAPII (Santa Cruz Biotechnology, CA; #sc-899) overnight at 4°C in the presence of protein A beads (GE Healthcare). Beads were washed three times in RIPA buffer and eluted in 1% SDS with 0.1 M NaHCO 3 . Chromatin was de-cross-linked by adding 0.2 M NaCl and incubating overnight at 65°C. DNA was purified by phenol-chloroform extraction, precipitated in ethanol with sodium acetate, and dissolved in water. DNA enrichment was quantified by real-time PCR (ABI, 7900HT) using SYBR Green Master Mix (Life Technologies Corporation). The following primers were used: Plin2 (5′-TCTGGTGCAGGACCTACCTAA, 5′-TTTGCTGTGTGGTGATCTGG), Plin3 (5′-GAGGAAACCTCCCCTACCAA, 5′-CCTCTGTCCTGTCACTCCAA), Plin4 (5′-GGTCTTCCAAACCAGCTCAC, 5′-TCTGCAGTGTCCACCAACTC), Plin5 (5′-ATCCCTACCCCACCCTCTAC, 5′-ATAAGGACGGAGGGCTGACT), and “no gene” (5′-TGGTAGCCTCAGGAGCTTGC; 5′-ATCCAAGATGGGACCAAGCTG). 
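Pol II occupancy from the ChIP-qPCR above is typically expressed as fold enrichment over the "no gene" negative-control region, with each region first normalized to its input chromatin. The normalization scheme and Ct values below are assumptions for illustration (the text states only that enrichment was quantified by real-time PCR), and equal primer efficiencies are assumed:

```python
def fold_enrichment(ct_ip, ct_input, ct_ip_neg, ct_input_neg):
    """ddCt fold enrichment of an immunoprecipitated region over the
    negative-control region, each normalized to its own input chromatin."""
    d_target = ct_ip - ct_input        # IP vs. input at the target promoter
    d_neg = ct_ip_neg - ct_input_neg   # IP vs. input at the "no gene" region
    return 2.0 ** (d_neg - d_target)

# Hypothetical: Plin5 promoter IP Ct 24 / input Ct 20; "no gene" IP Ct 28 / input Ct 20
assert fold_enrichment(24.0, 20.0, 28.0, 20.0) == 16.0
```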
Primers against the no gene region align to a genomic region on chromosome 15 with no binding of RNAPII and served as a negative control ( 54 ). Animal experiments All animal use was approved and registered by the Norwegian Animal Research Authority. Mice were housed in a temperature-controlled (22°C) facility with a strict 12 h light/dark cycle. Animals were euthanized by cervical dislocation; tissue samples were dissected, snap-frozen in liquid nitrogen, and stored at −80°C until further analysis. Male backcrossed congenic PPARα−/− mice (B6.129S4-Ppara tm1Gonz N12; Jackson Laboratory, Bar Harbor, ME) and PPARα+/+ controls (C57BL/6J; B and K Universal Ltd., Norway), 9 weeks of age, were fed ad libitum a standard chow diet [64% carbohydrate, 31.5% protein, 2% fat (fat source soya oil)] or a high-fat diet (HFD) [36% carbohydrate, 20% protein, 35.5% fat (fat source lard); #F3282, 1/2″ pellet (BioServ, Frenchtown, NJ)] for a period of 13 weeks. Male C57BL/6N mice (Charles River, 17 weeks, ∼30 g) on a standard chow diet were given an intragastric gavage of 0.5% carboxymethylcellulose (CMC) (Sigma, #C4888), 300 μl GW501516 (150 μg dissolved in 0.5% CMC; 5 mg/kg), or 300 μl glyceryl trioleate/triolein (Sigma, #T7140). Mice were treated twice, 36 and 12 h before being euthanized (4–6 animals in each group). Mice were euthanized at the onset of the light cycle. Statistical methods All results are presented as means with standard error of the mean (SEM) or standard deviation (SD). One-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test or two-tailed Student's t -tests were used to assess significance ( P < 0.05). RESULTS HFD feeding of mice increases expression of selective perilipins in muscle Prolonged feeding of mice with a HFD increases circulating FAs and stimulates uptake and utilization of FAs as the preferred energy source in peripheral tissues.
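The one-way ANOVA named in the statistical methods partitions total variance into between-group and within-group components. A minimal pure-Python sketch of the F statistic, with hypothetical data; a full analysis would then compare F against the F distribution for a P value and follow up with Tukey's test:

```python
def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n      # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical data: three treatment groups with shifted means
f = one_way_anova_f([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
assert abs(f - 27.0) < 1e-9
```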
To determine if a HFD changes the expression of Plin genes in a FA-oxidative muscle type, Plin mRNA expression levels were evaluated in soleus muscle from wild-type (WT) and PPARα knock-out (KO) mice fed a chow or high-fat diet. PPARα KO mice were included in the study because activation of PPARα is known to stimulate transcription of Plin2 and Plin5 in liver ( 7 , 15 , 16 , 41–43 ). Plin2 mRNA expression increased 2-fold by HFD in WT but not in PPARα KO mice ( Fig. 1 ). Plin5 mRNA expression was lower in PPARα KO mice compared with WT mice, similar to what was previously reported in liver ( 7 ). Interestingly, expression of Plin5 increased by a similar magnitude in both genotypes fed a HFD (2.5-fold and 2.9-fold in PPARα KO and WT, respectively), demonstrating that regulation of Plin5 can occur independently of PPARα in muscle. In accord with the changes in gene expression, perilipin2 and perilipin5 protein increased with a HFD in WT mice ( Fig. 1B ). In the absence of PPARα, both perilipin2 and perilipin5 proteins were substantially lower: perilipin2 was not detected, whereas perilipin5 levels still increased with HFD in PPARα KO mice. The mRNA and protein levels of Plin3 and Plin4 remained relatively unchanged by treatment or genotype. Next, we analyzed Plin mRNA content in gastrocnemius, which contains a mixture of glycolytic and oxidative fibers. In this tissue, Plin2 and Plin4 mRNAs increased by HFD in a PPARα-dependent manner ( Fig. 1C ), whereas Plin5 mRNA increased by HFD in both WT and PPARα KO mice. Essentially the same changes in regulation were observed at the protein level (result not shown). These results indicate that a HFD alters the protein and mRNA levels of several perilipins regardless of fiber composition, and that other factors in addition to PPARα may regulate transcription of Plin5.
FAs regulate Plin genes in cultured myotubes The increased expression of several Plin genes by a lipid-rich diet may be directly mediated by FAs or may be secondary to the complex physiological changes that occur with increased levels of circulating FAs. To identify molecular mechanisms inducing Plin genes in muscle tissues, we continued our investigation using cultured muscle cells. We established culturing conditions and confirmed differentiation of myoblasts into myotubes (see Experimental Procedures) in two different myotube cell cultures, C2C12 and Sol8 cells. Cells were seeded at day −2, grown to confluence, and subjected to myotube differentiation for up to 7 days. The expression of the Plin genes varied somewhat in the two cell lines, but was less affected by differentiation in C2C12 cells (see supplementary Fig. II ). As only C2C12 cells expressed Plin2-5, this cell line was used in further studies. To test if the Plin genes are regulated by FAs, C2C12 cells were differentiated for 6 days and cultured for 24 h in medium supplemented with various FAs complexed to BSA. The expression of Plin2, Plin4, and Plin5 all increased in response to various FAs, but with different magnitudes ( Fig. 2A ). Plin4 and Plin5 were mainly induced by unsaturated long chain FAs, whereas Plin2 mRNA was elevated by both saturated and unsaturated long chain FAs. Similar to what we have shown in liver ( 41 ), the expression of Plin3 remained unchanged. All of the added FAs stimulated formation of small intracellular LDs (result not shown). To further verify that the added FAs were incorporated into intracellular LDs, we determined perilipin2 and perilipin3 levels ( Fig. 2B ). In contrast to perilipin3, perilipin2 is known to be posttranslationally regulated and rapidly degraded by the proteasome in the absence of intracellular LDs ( 41 ). The robust increase in perilipin2 and the unchanged levels of perilipin3 confirm that the FAs were incorporated into intracellular LDs.
No clear protein signals were observed for the perilipin4 or perilipin5 proteins, in agreement with the low mRNA expression of these Plins. The Plin genes are differently regulated by selective activation of PPARs Long chain FAs have previously been shown to stimulate Plin2 mRNA expression in various tested cell types ( 41 , 46 , 55 ), but the mechanism involved has not been clarified. FAs might activate genes by acting as physiological ligands for PPARs ( 36 ). We therefore determined the expression level of each PPAR isotype during myotube-differentiation and used the Ct values at day 0 as an indication of relative expression of each PPAR isotype. PPARδ was highly expressed with unaffected levels during differentiation in both cell lines. PPARγ was slightly downregulated during differentiation, whereas PPARα was expressed at low levels with increased expression during differentiation (supplementary Fig. III ). To determine if there is a PPAR isotype-dependent regulation of the Plin genes in muscle cells, cells were differentiated for 6 days and stimulated with specific PPAR activators for 24 h ( Fig. 3A ). Activation of PPARα (WY-14643; 10 μM) in C2C12 cells had little effect on the Plin mRNA levels, except for an increase in Plin5 mRNA (5-fold). Given the low basal expression of Plin5 in C2C12 cells (Ct = 35), such a minimal regulation is expected. Activation of PPARδ (GW501516; 0.1 μM) increased mRNA content of Plin2 (4-fold), Plin4 (5-fold), and Plin5 (46-fold), whereas Plin3 mRNA remained unchanged. Activation of PPARγ (Rosi, Tro, and GW1929; 1 μM) increased mRNA levels of Plin4 (about 3-fold) and Plin5 (2-fold). The PPARγ antagonist (GW9662; 1 μM) had no effect on the mRNA levels of the Plin genes. The profound response to activation of PPARδ compared with other PPARs might be explained by the relatively higher expression level of this particular PPAR in cultured C2C12 cells. 
To determine the efficiency of each PPAR in regulating Plin genes, C2C12 cells were transfected to ectopically express PPARα, PPARδ, or PPARγ, differentiated for 3 days, and stimulated for 24 h with PPAR isoform-specific ligands ( Fig. 3B ). Comparable ectopic expression of the various PPARs was confirmed by expressing 6xHis-PPAR vectors in C2C12 cells and visualizing the fusion proteins with an antibody recognizing the His epitope ( Fig. 3C ). Ectopic expression and activation of PPARα induced Plin2 and Plin5 mRNAs. Very similar results were observed with ectopic expression and activation of PPARδ, which induced expression of Plin2, Plin4, and Plin5. Ectopic expression and activation of PPARγ1 or PPARγ2 primarily increased Plin4 mRNA. These results demonstrate that each PPAR differently regulates expression of Plin2-5 mRNAs in C2C12 cells. To determine if addition of FAs and PPARδ alters expression of the perilipin4 and perilipin5 proteins, C2C12 cells were differentiated in the presence of BSA-OA and/or the PPARδ agonist (GW501516). The fold inductions of Plin2, Plin4, and Plin5 mRNAs ( Fig. 3D ) were somewhat higher than with overnight stimulation of differentiated myotubes (compare Fig. 2 and Fig. 3B ). Stimulation with the PPARδ agonist during differentiation induced the short perilipin5 protein [amino acids 16–463, see ( 7 )], whereas no clear signal was observed for OA treatment ( Fig. 3E ). A clear band indicative of perilipin4 protein was not observed. Given the low levels of perilipin5 protein, we tested if Plin5 behaves similarly to Plin2, which is posttranslationally stabilized by the cellular content of LDs ( 7 , 34 ). Vectors expressing the full-length perilipin2, the shorter alternatively translated perilipin2 (aa 123–425), perilipin5, and perilipin5 aa 16–463 were transfected into C2C12 cells. As expected, expression of the two perilipin2 isoforms was highly dependent on BSA-OA supplementation of the culture media ( Fig. 3F ).
In contrast, ectopic expression of the perilipin5 isoforms was unaffected by addition of BSA-OA. Together, these results show that activation of PPARδ induces Plin5 mRNA and protein in C2C12 cells. FAs are unable to induce Plin mRNAs in the presence of a PPARδ antagonist To test if the observed FA effect on Plin mRNAs depends on a particular PPAR, differentiated C2C12 cells were incubated with a combination of BSA-OA and BSA-LA (FAs; 50 μM each) in the presence of the selective PPAR inhibitors GW6471 for PPARα ( 56 ), GSK0660 for PPARδ ( 57 ), and GW9662 for PPARγ ( 58 ). Incubation with FAs increased mRNA levels of Plin2, Plin4, and Plin5. Coincubation with antagonists for PPARα or PPARγ did not alter the stimulatory effect of the FAs. In contrast, coincubation with the PPARδ antagonist blunted the effect of FAs on Plin2, Plin4, and Plin5 mRNAs ( Fig. 4A ). The PPARδ antagonist was also effective in preventing induction of the perilipin2 protein ( Fig. 4B ). To determine if the observed increase in Plin mRNAs by FAs was due to transcriptional stimulation, Plin2 ( 41 ), Plin4 ( 19 ), and Plin5 luciferase reporters were cotransfected with PPAR expression vectors into C2C12 myoblasts. Cells were subsequently differentiated into myotubes prior to 24 h incubation with FAs and PPAR antagonists. Ectopic expression of each PPAR stimulated basal reporter activity with different efficiency (not shown). Addition of FAs increased reporter activity further, and in this context, PPARδ had a unique effect ( Fig. 5A–C ). Coexpression of PPARδ resulted in a marked boost in Plin4 and Plin5 reporter activity upon activation with FAs, whereas the PPARδ antagonist reversed the effect of FAs. To strengthen the observation that FAs increase Plin mRNAs by stimulating transcriptional activity, we analyzed Pol II recruitment to the Plin genes using ChIP with primers located 100–200 bp downstream of the transcriptional start sites.
Stimulation with either FAs or the PPARδ agonist (GW501516) increased Pol II recruitment to the Plin2 , Plin4 , and Plin5 genes, whereas coincubation with the PPARδ antagonist (GSK0660) reversed the effect of FAs ( Fig. 5D ). Taken together, our results suggest that FAs stimulate transcription of several Plin genes in a PPARδ-dependent manner. Silencing of PPARδ attenuated the effect of FAs on Plin5 and Plin4 mRNAs To determine the effect of FAs in cells with reduced expression of PPARδ, C2C12 cells were reverse transfected with scramble siRNA or siRNAs against PPARδ and subjected to myotube differentiation. PPARδ mRNA was knocked down by 70% from day 2 to day 4 post-transfection ( Fig. 6 ). Two days post-transfection, cells were incubated with the PPARδ activator (GW501516; 0.1 μM) or FAs (100 μM) alone, or in combination with the PPARδ antagonist (GSK0660, 1 μM). Incubation with the FAs increased Plin2, Plin4, and Plin5 mRNA levels in scramble-transfected cells, with the FA effect attenuated upon addition of the PPARδ antagonist ( Fig. 6 ). Strikingly, the stimulating effect on Plin5 mRNA by FA incubation was abolished in PPARδ siRNA-transfected cells, underscoring the importance of PPARδ in Plin5 regulation. Silencing of PPARδ also blunted FA-stimulated induction of Plin4 mRNA, but had little effect on Plin2 mRNA. The Plin5 intron 1 contains an evolutionarily conserved PPRE The Plin1 , Plin2 , and Plin4 promoters all contain functionally characterized PPREs ( 19 , 41 ). Our results prompted us to analyze the Plin5 gene for the presence of a functional PPRE. A promising DR-1 type element was identified in intron 1, in a region conserved in the human Plin5 gene ( Fig. 7A, B ). To test the functionality of this element, we mutated the DR-1 element and performed reporter assays with Plin5 WT or mutated reporter constructs. C2C12 cells were transfected with reporters and expression vectors encoding RXRα and/or PPARδ and differentiated into myotubes for 3 days.
Cells were then stimulated with vehicle or the PPARδ activator GW501516 (0.1 μM) for 24 h. Transfection with the Plin5 WT or mutated reporters gave similar basal activity, demonstrating that the mutation itself did not affect basal transcriptional activity. Coexpression of RXRα and PPARδ, and stimulation with GW501516, gradually increased Plin5 WT reporter activity up to a maximal 70-fold increase ( Fig. 7C ). In contrast, no induction was observed with the mutated Plin5 reporter construct. We finally tested if the PPARδ-dependent regulation of Plin5 gene expression by FAs depends on the identified response element. Plin5 WT or mutated reporters were transfected into cells together with empty or RXRα and PPARδ expression vectors and stimulated as indicated. For the Plin5 WT reporter, stimulation with GW501516 or FAs gave an expected increase in reporter activity, whereas GSK0660 attenuated the FA effect ( Fig. 7D ). In contrast, for the Plin5 mutated reporter construct, none of the treatments had any effect on reporter activity. Expression of Plin5 is induced by a PPARδ agonist and oleic acid in vivo Our results from cell studies support the notion that FAs regulate expression of Plin4 and Plin5 in muscle by activating PPARδ. To determine if the same occurs in muscle tissue, mice were given an oral gavage of a PPARδ agonist (GW501516) or FAs in the form of triglycerides (glyceryl trioleate; triolein). Treatment with GW501516 increased Plin4 mRNA and showed a tendency toward induction of Plin5 mRNA in soleus muscle ( Fig. 8A ), whereas triolein treatment increased both Plin4 and Plin5 mRNAs. The same inductions occurred in gastrocnemius muscle (results not shown). The induction was stronger for Plin5, and only the perilipin5 protein was significantly elevated by these treatments ( Fig. 8B, C ). DISCUSSION Activation of PPARα and PPARγ is known to stimulate expression of Plin genes in liver ( 7 , 15 , 16 , 41–43 ) and adipose tissue ( 19 , 38–40 ), respectively.
By using PPAR isoform-specific agonists and antagonists, we demonstrate that PPARs selectively regulate Plin genes in muscle cells. PPARδ is required for FA-induced transcription of Plin5 through a conserved DR-1 element in intron 1 of the Plin5 gene. The identification of this functional response element classifies Plin5 as a novel direct PPAR target gene, similar to Plin1 , Plin2 , and Plin4 . Plin3 seems to be an exception in the family, by not being regulated by PPARs. The different responses to PPAR activation are likely important for the distinct Plin tissue expression profiles ( 7 , 19 ). In muscle, enhanced PPARα and PPARδ activation increases Plin5 expression, whereas the adipose-enriched Plin4 is preferentially induced by enhanced PPARγ activity. In contrast, Plin2 shows no clear PPAR isoform preference, and is nonresponsive to manipulation of PPARδ expression. In addition to PPAR redundancy, the distinct regulation of Plin2 may also be influenced by other transcription factors ( 59 , 60 ). Several factors may influence recruitment of PPAR isoforms to a particular PPRE and subsequent transcriptional regulation. Among these are tissue variability in chromatin packing within each Plin locus, specific coregulator recruitment, the presence of activating or repressing ligands, or the nucleotide composition of the PPREs and nearby binding sites ( 36 ). The level of endogenous ligands clearly plays a role in our cell studies. PPARδ is highly expressed in muscle tissues and C2C12, yet perilipin5 is only detected in C2C12 cells after prolonged culturing in the presence of a specific PPARδ agonist. Another factor is likely the nucleotide compositions of the various PPREs found to regulate Plin genes. These sequences are highly conserved across species (human, mouse, and rat), but vary considerably among the various Plin genes ( 19 , 41 ). 
This suggests that the PPAR isoform regulating each Plin gene is evolutionarily conserved and that this PPAR-specific regulation is physiologically important. Expression of Plin5 was initially described to correlate with PPARα activity ( 7 , 15 , 16 ), but conflicting observations provided evidence for alternative regulation of the gene. In the absence of PPARα, the fold-induction of Plin5 by fasting is preserved in liver ( 7 ). We now demonstrate that Plin5 expression is elevated in muscle by a HFD in the absence of PPARα. Fasting and a HFD can be viewed as contrary physiological conditions with low or excess energy, respectively, but they have elevated circulating FAs in common, caused by hormone-stimulated lipolysis of adipose TAG or by increased digestion and uptake of FAs. Increased expression of Plins has been observed in various cell types upon stimulation with FAs ( 41 , 46 , 61 ), but the molecular mechanisms have been unclear. Our data provide a molecular explanation for regulation of Plin5 in the absence of PPARα. Long chain unsaturated FAs may bind directly to the ligand binding pocket of PPARδ and regulate its transcriptional activity ( 62 ), which in turn stimulates transcription of the Plin5 gene. The observation that PPARδ may sense circulating free FAs and regulate a subset of hepatic genes independently of PPARα ( 63 ) supports such a mechanism. The discovery of a phospholipid as the endogenous PPARα ligand ( 64 ) also questions the role of PPARα as a direct FA sensor. The role of PPARα may rather involve the receptor's ability to stimulate transcription of genes facilitating muscular FA uptake ( 65 ). Our data suggest that PPARδ is the physiological regulator of perilipin5 in muscle. PPARδ is nearly ubiquitously expressed and is the predominant PPAR isoform in rodent skeletal muscle.
In contrast to PPARα ( 65 ), forced muscle-specific expression of PPARδ improves endurance in mice and promotes a shift from glycolytic to oxidative muscle fibers ( 66 , 67 ). Oxidative muscle tissues also contain higher levels of perilipin5 [see ( 7 ) and Fig. 1 ]. The direct regulation of Plin5 by PPARδ may therefore explain the uneven distribution of Plin5 expression among muscle fibers. Other factors found to drive formation of oxidative muscles include PGC-1α ( 68 ), PGC-1β ( 69 ), ERRα ( 70 ), ERRγ ( 71 ), and the corepressor NCoR1 ( 72 ). Overexpression of PGC-1α in muscle may stimulate Plin5 expression ( 73 ), but the importance of the other factors has not been studied. Nevertheless, it is clear that Plin5 belongs to the pool of genes that distinguish oxidative type I from glycolytic type II fibers. The function of genes enriched in oxidative fibers is primarily linked to FA oxidation, myosin fiber types, mitochondrial biogenesis, and increased mitochondrial oxidative capacity ( 66–72 ). The role of perilipin5 in this setting is not clear. Due to low endogenous perilipin5 expression in cultured cells, much of our molecular understanding of the protein is based on cellular studies with ectopic perilipin5 expression. When overexpressed, perilipin5 recruits Abhd5 ( 24 ), ATGL ( 25 , 26 ), and HSL ( 27 ) to the surface of LDs, and promotes association of LDs and mitochondria ( 31 ). It remains to be elucidated if a portion of these effects is caused by high ectopic expression. Although it is unclear how interactions between perilipin5 and the above-mentioned proteins regulate lipolysis, accumulating evidence from mice demonstrates that perilipin5 preserves LDs. Perilipin5-null mice lack LDs in the heart ( 30 ), whereas cardiac-specific perilipin5 transgenic mice accumulate LDs ( 74 , 75 ). Alteration in cardiac perilipin5 primarily affects TAG storage, as would be expected based on the preferred binding of perilipin5 to LDs filled with TAG ( 47 ).
The role of perilipin4 is less clear. PPARγ stimulates perilipin4 expression in adipocytes ( 19 ), which points to a role in energy storage. Perilipin4 may be recruited to nascent LDs formed in cultured adipose cells exposed to high concentrations of FAs ( 14 ). A more recent publication demonstrates that perilipin4 preferentially binds to LDs filled with CEs with an ability to enhance accumulation of such LDs when ectopically expressed ( 47 ). Additional functional analyses are required to fully understand the function of Plin4 and Plin5 in LD metabolism and their roles in muscle physiology. The regulation of Plin5 by PPARδ may provide insight into why lipid stores in muscles are beneficial and detrimental in athletic and obese subjects, respectively. PPARδ-mediated regulation of Plin4 and Plin5 may render the muscle tissue better equipped to fine-tune LD metabolism by having increased levels of these LD binding proteins. When present at the surface of LDs, they may help to preserve LDs and increase the cellular capacity to prevent lipotoxicity. Acknowledgments The authors thank Sverre Holm for technical assistance with animal work, Christin Zwafink, Christina Steppeler, and Tone Lise Aarnes Hjørnevik for technical assistance, and members of the Nebb laboratory for scientific discussions. Supplementary Material
|
[
"ECKARDT",
"SAMUEL",
"AMATI",
"BRASAEMLE",
"BICKEL",
"FARESE",
"DALEN",
"LU",
"KIMMEL",
"GREENBERG",
"BLANCHETTEMACKIE",
"BRASAEMLE",
"WOLINS",
"WOLINS",
"WOLINS",
"YAMAGUCHI",
"GREENBERG",
"SERVETNICK",
"DALEN",
"TANSEY",
"SZTALRYD",
"MARTINEZBOTAS",
"MIYOSHI",
"GRANNEMAN",
"WANG",
"GRANNEMAN",
"WANG",
"SZTALRYD",
"CHANG",
"KURAMOTO",
"WANG",
"WANG",
"BOSMA",
"XU",
"XU",
"POULSEN",
"MANDARD",
"ARIMURA",
"NAGAI",
"SHIMIZU",
"DALEN",
"EDVARDSSON",
"TARGETTADAMS",
"CHAWLA",
"SCHMUTH",
"TOBIN",
"HSIEH",
"ISSEMANN",
"BRUN",
"KOZAK",
"DALEN",
"MIURA",
"HAKELIEN",
"BOERGESEN",
"GAO",
"XU",
"SHEARER",
"DAVIES",
"WEI",
"GU",
"LOCKRIDGE",
"XU",
"SANDERSON",
"CHAKRAVARTHY",
"FINCK",
"LUQUET",
"GAN",
"LIN",
"KAMEI",
"HUSS",
"RANGWALA",
"YAMAMOTO",
"KOVES",
"POLLAK",
"WANG"
] |
cc6b94f7bf42470c9cc6a1ef93c54c53_Effect of Bacillus sphaericus Neide on Anopheles Diptera Culicidae and associated insect fauna in fi_10.1016_j.rbe.2015.03.013.xml
|
Effect of Bacillus sphaericus Neide on Anopheles (Diptera: Culicidae) and associated insect fauna in fish ponds in the Amazon
|
[
"Ferreira, Francisco Augusto da Silva",
"Arcos, Adriano Nobre",
"Sampaio, Raquel Telles de Moreira",
"Rodrigues, Ilea Brandão",
"Tadei, Wanderli Pedro"
] |
We analyzed the effects of Bacillus sphaericus on Anopheles larvae and on the associated insect fauna in fish farming ponds. Five breeding sites in the peri-urban area of the city of Manaus, AM, Brazil, were studied. Seven samples were collected from each breeding site; B. sphaericus was applied and then reapplied after 15 days. Samples were taken 24 h before application, 24 h post-application, and 5 and 15 days post-application. For Anopheles, we determined abundance, larval reduction, and larval density; for the associated insect fauna, we determined abundance and richness, calculated the Shannon diversity index, and classified taxa into functional trophic groups. A total of 904 Anopheles larvae were collected, distributed among five species. Density data and larval reduction demonstrated the rapid effect of the biolarvicide 24 h after application. A total of 4874 associated aquatic insects belonging to six orders and 23 families were collected. Regression analysis of diversity and richness indicated that the application of the biolarvicide had no influence on these indices and thus no effect on the associated insect fauna over a period of 30 days. B. sphaericus was found to be highly effective against Anopheles larvae, eliminating them in the first days after application, with no effect on the associated insect fauna present in the fish ponds analyzed.
|
Introduction The Amazon environment is rich in water resources, largely owing to its extensive network of rivers, which enables the formation of numerous breeding sites for many groups of aquatic organisms ( Sioli, 1984 ). These organisms notably include insects, some of which are vectors of pathogens that cause human tropical diseases. Among them, mosquitoes occupy a central role because of their plasticity in colonizing different aquatic environments, their high density in these environments and their feeding preference for human blood ( Tadei and Thatcher, 2000; Forattini, 2002 ). According to the Brazilian National Program for Prevention and Control of Malaria, disease control includes, among other measures, early diagnosis and treatment of patients, key steps for interrupting transmission of the parasite. Recommended vector control measures also include indoor residual insecticide application, treatment of breeding sites with biolarvicides, the use of long-lasting impregnated mosquito nets and, in special situations, spatial fogging of insecticides ( Ministério da Saúde/SVS, 2003; Oliveira-Ferreira et al., 2010 ). Breeding sites play an essential role in the maintenance of the disease, since adults emerge ready for their daily blood meal ( Rodrigues et al., 2008 ). The control of immature forms can be performed using chemical larvicides, but this approach is avoided because of the risk of resistance development and environmental contamination; consequently, the use of biological larvicides is increasing. The main representatives of these larvicides are toxic crystal-producing bacteria of the genus Bacillus ( Galardo et al., 2013 ). The use of the species Bacillus sphaericus Neide, 1904 for mosquito control was advocated in 1985 by the World Health Organization. Since then, larvicides have been produced with this bacterium owing to its recognized toxicity to the genera Anopheles and Culex ( Habib, 1989; De Barjac, 1990 ). 
The application of biological larvicides to control immature Anopheles sp. is carried out directly at the breeding sites ( De Barjac, 1990 ). However, these environments also have an associated insect fauna, formed by several other groups of aquatic insects that share the same habitat as these mosquitoes ( Lang and Reymond, 1994 ). About 13 orders of insects have an aquatic phase, and in some biotopes, they may comprise around 95% of the macroinvertebrate community. These invertebrates play a key role in the health of the water body by participating in the cycling of nutrients and the transformation of organic matter, contributing to the flow of energy ( Brasil et al., 2014 ). The effect of B. sphaericus on the associated insect fauna under laboratory and field conditions has been investigated over the past decades, and most of these organisms have not shown any susceptibility to the bacterium ( Mulla et al., 1984; Aly and Mulla, 1987; Karch et al., 1991; Rodcharoen et al., 1991; Brown et al., 2004; Merritt et al., 2005 ). Rodrigues et al. (2008) conducted field tests in Manaus, Brazil, applying B. sphaericus in fish ponds and standing water in pottery, and observed the elimination of larvae at breeding sites 48 h after application. Rodrigues et al. (2013) investigated the effects of B. sphaericus in applications on the Negro and Solimões Rivers, where it was found to be more effective in the black-water river than in the white-water river, which has a higher amount of suspended material. Studies of the effect of B. sphaericus against Anopheles larvae have been conducted in Brazil, but there is a gap in our knowledge of its action on the associated insect fauna, particularly in the Amazon environment. This study aimed to analyze the effects of B. sphaericus on Anopheles and the associated insect fauna in fish farming ponds in the peri-urban area of the city of Manaus, Amazonas. 
Material and methods Field bioassays The sampling sites are located in the peri-urban region of Manaus, Central Amazonia, northern Brazil. Five artificial Anopheles breeding sites, namely fish culture ponds, were selected: C1 (S 03°04′32.2″, W 59°53′07.4″); C2 (S 03°02′07.6″, W 59°53′30.2″); C3 (S 03°03′45.8″, W 59°51′11.3″); C4 (S 03°02′44.4″, W 59°53′09.0″) and C5 (S 03°02′43.9″, W 59°53′09.0″) ( Fig. 1 ). The larvicide used was VECTOLEX CG ® (Valent BioSciences Corporation), a granular formulation of dried serotype H5a5b concentrate at 7.5%, with a potency of approximately 670 Bacillus sphaericus international toxin units (ITU), containing corn oil and corn cob granules. We used the dose recommended by the manufacturer, i.e., 11 kg/ha. Applications were done manually from the edge of each pond, covering a 3 m perimeter strip of the breeding site. Field bioassays lasted 30 days at each pond, and two applications (application and reapplication) of the biolarvicide were performed with an interval of 15 days between them. Samples of Anopheles larvae and associated insect fauna were obtained at the following times: pre-application (sampling performed before application) and after each application of the biolarvicide (three post-application and three post-reapplication samples, at 24 h, 5 days and 15 days). Larvicide application began in December 2011 and lasted until April 2012, with 9 field trips to each breeding site, totaling 45 for the whole bioassay. Sampling of insects Anopheles larvae were collected for 20 min at three randomly selected points on the edge of each breeding site using an entomological scoop with a volumetric capacity of about 350 mL, an 11 cm aperture and a one-meter handle. 
During sampling, the 4th instar larvae of Anopheles were separated and taken to the Malaria and Dengue Laboratory of the National Institute of Amazonian Research, where they were maintained under laboratory conditions and reared to adulthood to facilitate species identification. Anopheles were identified using the dichotomous key proposed by Consoli and Oliveira (1994) . Aquatic insects were collected using an aquatic insect net, at four random points for 30 s at each point ( Merritt et al., 2005 ). Subsequently, the material was fixed in 70% alcohol and brought to the laboratory, where it was sorted with the aid of a stereomicroscope. The insects found were identified to the lowest possible taxonomic level using dichotomous keys ( Merritt and Cummins, 1996; Pes et al., 2005; Pereira et al., 2007 ). The individuals were then classified according to functional trophic groups, separated into shredders, scrapers, collectors, filter feeders and predators, following the recommendations of Merritt and Cummins (1996) and Cummins et al. (2005) . Data analysis To characterize the insect fauna, the relative abundance (%) of aquatic insects and of Anopheles species ( Magurran, 1988 ) was calculated. To evaluate the effect of B. sphaericus on Anopheles , larval reduction (%) was obtained using the three post-application and three post-reapplication readings. This index uses the number of larvae before and after application of the biolarvicide, yielding the percent reduction of Anopheles larvae during the experiment ( Mulla et al., 1986 ). Larval density (LNMH) was determined before and after application of the biolarvicide at the breeding sites. LNMH was obtained according to the formula described by Tadei et al. (2007) and was superimposed on the abundance data of the associated insect fauna over time, to check the behavior of these populations during the period of biolarvicide action. To evaluate the effect of B. 
sphaericus on the associated insect fauna, richness and diversity data were analyzed by linear regression, with the aid of the Statistica StatSoft 10.0 program. The independent variable was the application of B. sphaericus and the dependent variables were the richness and diversity of the associated insect fauna. The relationship between richness/diversity and the use of the biolarvicide was assessed by the regression models that best fit the data distribution. The values obtained after the two biolarvicide application cycles at the five artificial breeding sites were taken into account. Results A total of 905 Anopheles larvae were collected and identified as belonging to five species: Anopheles darlingi , Anopheles albitarsis , Anopheles braziliensis , Anopheles triannulatus and Anopheles nuneztovari . Relative abundance showed that A. darlingi (54%) was the most abundant species and A. albitarsis (1%) the least. Considering the associated insect fauna, 4874 specimens belonging to 6 orders and 23 families were collected. Chironomidae was the most abundant family with 51%, followed by Ceratopogonidae (14%) and Coenagrionidae (11%), and the least abundant was Gerridae (0.02%) ( Table 1 ). Among the functional trophic groups, the collectors were the most abundant in three of the five breeding sites analyzed (C2: 85%, C3: 45.5% and C4: 60%); in the other two, the predators were the most abundant (C1: 73.5% and C5: 68%). The shredder group was the least abundant (C4: 0.2%) in all breeding sites analyzed. Effects on Anopheles At three of the five treated breeding sites (C1, C2 and C5), the larvae were eliminated at 24 h (100%) and up to 5 days after the application of the biolarvicide, and at C4, the reduction was 98%. After reapplication, the larval reduction rate was high in the initial readings but decreased 15 days after reapplication. At breeding site C3, 24 h after B. 
sphaericus application, larval reduction was 56%, and at five days, negative values were obtained for larval reduction because there was an increase in larvae. After reapplication, a 100% reduction in larvae was observed at 24 h and again at 15 days ( Table 2 ). The larval reduction data demonstrated the effectiveness of B. sphaericus , which eliminated larvae in three of the five breeding sites analyzed 24 h after application. However, recolonization was observed 15 days after application, indicating the short persistence of the biolarvicide in these environments ( Table 2 ). Evaluation of associated insect fauna To assess the effects of B. sphaericus at the breeding sites, larval density values of Anopheles sp. (LNMH) were superimposed on the abundance data of the associated insect fauna. There was variation in the abundance of specimens of Chironomidae and Coenagrionidae at C1, but 24 h after application of the biolarvicide, these families exceeded the numbers found in the pre-application collection. At C2, 24 h after the application of the biolarvicide, the abundance of Chironomidae increased compared to the values found in pre-sampling ( Fig. 2 ). At C3, the chironomid population changed in abundance, accompanying the larval density of Anopheles . At C4, two families were dominant, Chironomidae and Ceratopogonidae, both showing variability in relative abundance during the experiment, but at the end, both reached high levels ( Fig. 3 ). At C5, chironomid numbers also varied throughout the experiment, but the abundance found before applying the biolarvicide was lower than that found at the end of the experiment, with no elimination of these individuals at any point ( Fig. 4 ). The model that best fit the data was a linear regression, described by the following equations: y = a + b * x , where a = 0.54, b = 0.001 ( r 2 = 0.002), for diversity, and y = a + b * x , where a = 12.21, b = 0.21 ( r 2 = 0.047), for richness ( Fig. 5 ). 
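The percent larval reduction reported in Table 2 can be illustrated with a minimal sketch. This is a simplification made for illustration: the function name is ours, and the pre/post formula omits the correction for untreated control sites that the full index of Mulla et al. (1986) includes.

```python
def percent_larval_reduction(pre_count, post_count):
    """Percent reduction in larvae relative to the pre-application count.

    Simplified illustration; the full index of Mulla et al. (1986)
    also corrects for changes observed at untreated control sites.
    """
    if pre_count <= 0:
        raise ValueError("pre-application count must be positive")
    return 100.0 * (pre_count - post_count) / pre_count

# A site where all larvae are eliminated 24 h after application
print(percent_larval_reduction(40, 0))   # 100.0
# A site with partial mortality, as observed at C3 (56% at 24 h)
print(percent_larval_reduction(50, 22))  # 56.0
# Recolonization yields negative values, as reported at C3 at 5 days
print(percent_larval_reduction(20, 27))  # -35.0
```

Negative values simply indicate that more larvae were counted after application than before, i.e., recolonization outpaced mortality.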
The fitted lines do not indicate a negative variation in the variables analyzed: diversity remained essentially constant and the richness index showed a slight increase, possibly because of the large variation in the data among points. These data show that B. sphaericus had no effect on the associated insect fauna in this study. Discussion The proximity of the urban area is possibly the main factor determining the abundance of A. darlingi , since this species is highly anthropophilic ( Tadei et al., 1998 ). The order Diptera was the most abundant at four of the five breeding sites analyzed; it is commonly dominant in both lotic and lentic environments due to its tolerance of extreme conditions such as hypoxia and its strong competitive ability ( Nessimian, 1995; Callisto et al., 2001 ). According to Amorim and Castillo (2009) , chironomids show generalist and opportunistic feeding habits, being mainly collector-gatherers that most often utilize periphyton organisms as food, a fact that explains the dominance of this group over the other taxa at most breeding sites. Large numbers of collectors are associated with the presence of wetland and riparian vegetation at breeding sites, and according to Koetsier and McArthur (2000) , macrophytes play an important role in the retention of organic matter, favoring the presence of this trophic group. Data from the present study corroborate those of Merritt et al. (2005) , who, in studies on the application of B. sphaericus and the associated insect fauna, found that the majority of the aquatic insects sampled belonged to the collector group. The low frequency of the scraper and shredder groups in the environments studied was influenced by the high abundance of Chironomidae, which, according to the literature, compete for food (fine particulate organic matter) and for space for shelter and protection ( Amorim and Castillo, 2009 ). 
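The richness and Shannon diversity indices used in the regressions above can be computed from family counts as in the following sketch. The counts here are hypothetical, chosen only to echo the reported dominance pattern (Chironomidae most abundant, Gerridae rare); the paper follows Magurran (1988) for these indices.

```python
import math

def shannon_diversity(counts):
    """Shannon diversity index H' = -sum(p_i * ln(p_i)) over taxa with counts > 0."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def richness(counts):
    """Number of taxa present (counts > 0)."""
    return sum(1 for c in counts if c > 0)

# Hypothetical family counts echoing the reported abundance pattern
sample = {"Chironomidae": 2485, "Ceratopogonidae": 682,
          "Coenagrionidae": 536, "Gerridae": 1, "others": 1170}
counts = list(sample.values())
print(richness(counts))                     # 5
print(round(shannon_diversity(counts), 2))  # dominance by one family keeps H' well below ln(5)
```

In the study, these two values were computed per sampling and regressed against the application of the biolarvicide; a flat fitted line, as obtained, indicates no treatment effect.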
Wilson (1991) and Peterson (1992) stated that in natural environments, the greater the presence of predators, the less the competition between organisms, resulting in increased diversity. However, in the fish ponds studied, the greater abundance of predators did not increase diversity, possibly because these ponds are isolated environments with a limited number of niches to be occupied. The LNMH values found in pre-application samplings in this study were low compared to those found by Rodrigues et al. (2008) in B. sphaericus applications in fish ponds (mean 13.7). However, our findings corroborate theirs regarding the decrease in larval density 24 h after larvicide application. Considering the larval reduction results at breeding site C3, high mortality was not observed in the first days post-application. The marginal vegetation influenced larvicidal activity, since the entire margin of this breeding site was covered by Brachiaria sp., increasing the amount of organic matter. This finding corroborates the results obtained by Alves et al. (2006) , who reported low activity of the VECTOLEX ® larvicide at organically enriched breeding sites of Culex sp. The variables studied here reinforce the specificity of B. sphaericus for Anopheles and Culex , as observed by Mulla et al. (1984) , Aly and Mulla (1987) , Lacey and Mulla (1990) , Becker (1997) , Lacey and Siegel (2000) and Lacey and Merritt (2003) , who also found no effect of B. sphaericus on chironomids, other Diptera families or mosquito predators in the field. The family Chironomidae was possibly not affected because its larvae are benthic and therefore have little contact with VECTOLEX, which remains on the water surface owing to its corn cob granule composition. Finally, the associated insect fauna present in the fish pond habitat was not affected by the application of B. sphaericus over 30 days. 
However, studies monitoring water bodies over a longer period, with repeated applications and other biolarvicide formulations, should be conducted to observe the behavior of the aquatic populations. Conflicts of interest The authors declare no conflicts of interest. Acknowledgments The authors thank Augusto Leão for his help with statistical analysis, Gervilane Ribeiro and Carlos Praia of the National Institute of Amazonian Research, who contributed to the identification of the Anopheles sp. individuals collected, and Raimundo Nonato, who assisted in field sampling. We thank CNPq for the scholarship granted to the student Ferreira, F.A.S. Dr. A. Leyva helped with English editing of the manuscript.
|
[
"ALVES",
"ALY",
"AMORIM",
"BECKER",
"BRASIL",
"BROWN",
"CALLISTO",
"CONSOLI",
"CUMMINS",
"DEBARJAC",
"FORATTINI",
"GALARDO",
"HABIB",
"KARCH",
"KOETSIER",
"LACEY",
"LACEY",
"LACEY",
"LANG",
"MAGURRAN",
"MERRITT",
"MERRITT",
"MULLA",
"MULLA",
"NESSIMIAN",
"OLIVEIRAFERREIRA",
"PEREIRA",
"PES",
"PETERSON",
"RODCHAROEN",
"RODRIGUES",
"RODRIGUES",
"SIOLI",
"TADEI",
"TADEI",
"TADEI",
"WILSON"
] |
dfa49527e92e4283b2fd5567362b2eed_Societal norms and the shadow of the mind Averting the tragedy of the commons through a new understa_10.1016_j.ssaho.2024.101009.xml
|
Societal norms and the shadow of the mind: Averting the tragedy of the commons through a new understanding of cooperation
|
[
"Mamada, Robert"
] |
This exposition introduces a groundbreaking model that elucidates the role of societal norms and the preconscious in shaping cooperative behaviour, particularly with an eye towards averting the tragedy of the commons as conceptualised by the Prisoner’s Dilemma. Moving beyond the confines of rational-choice paradigms, this study harmonises Weber’s sociological theories with Freudian psychological insights, unveiling the nuanced interplay between conscious decision-making and deeper, preconscious motivations towards cooperation. It argues that an intricate understanding of both societal norms and the preconscious is crucial for crafting effective strategies to tackle the challenges of collective resource management.
|
1 Introduction This investigation sets out to illuminate the intricacies of the Prisoner’s Dilemma (PD), with a particular emphasis on the internalization of cooperative social norms as a vehicle for engendering a deeper understanding of the conundrum famously dubbed the tragedy of the commons. In her landmark study, Ostrom (2015) posits that the tragedy of the commons is often conceptualised within the framework of PD models, a notion further illustrated by the work of Mamada and Perrings (2022) , who examine the exploitation of common-pool marine resources through the PD paradigm. Such scholarly endeavours underscore the pressing need to explore viable solutions to the PD, aiming to circumvent the catastrophic outcomes synonymous with the tragedy of the commons. This pursuit, enriched by the integration of Weberian and Freudian perspectives, ventures into the realms of societal norms and the shadowy precincts of the preconscious mind, proposing a novel schema for understanding cooperative behaviour beyond the confines of rational calculation. Hardin (1968) advances the thesis that averting the tragedy of the commons can be accomplished through either privatisation or state intervention. In contradistinction, proponents of institutionalism argue that there exist viable alternative modalities for the proficient stewardship of common-pool resources. These modalities encompass, inter alia, community-led management frameworks and the institution of tailored policies and organisational structures. The veracity of these viewpoints finds support in an array of empirical and/or theoretical studies, notably those by Mamada et al. (2017) ; Moffatt (1984, pp. 182–190) ; Ostrom (2000 , 2010) ; Uzawa (2005) ; Yitbarek et al. (2021) ; Young (2011) . Collectively, these studies underscore the efficacy of diverse management approaches that transcend the conventional dichotomy of privatisation and governmental oversight. 
The theoretical paradigms, underpinned by institutionalism, have played a pivotal role in enhancing our comprehension of the governance modalities pertinent to common-pool resources. Nonetheless, the extant corpus of academic inquiry manifests a conspicuous lacuna: it predominantly eschews the ethical and cultural facets which are quintessential to the stakeholders within these commons. Albeit the formulation of governance architectures in specific communal milieus has met with success, their enduring sustainability and effectiveness remain jeopardised. This predicament is particularly pronounced in scenarios where a substantial fraction of the community either exhibits non-compliance or disregards the upkeep of these structures. In their seminal work, Boyd et al. (2018) eloquently articulate the imperative of recognizing the fluidity inherent within societal norms and the consequent impact this has on the evolution of institutions dedicated to collective action. This fluidity, they argue, possesses the dual potential to either fortify or erode these pivotal institutions. Further enriching this discourse, Hirofumi Uzawa, in a particularly evocative narrative, recounts his profound interaction with Emperor Hirohito. During a dialogue wherein Uzawa was expounding his economic theories, deeply rooted in the school of institutionalism, Emperor Hirohito interposed with a remark of considerable poignancy: ‘Sir! Your discourse predominantly concerns economics, yet, at its core, it seems to me, is a profound recognition of the paramountcy of the human heart.’ This incisive observation by the Emperor catalysed a moment of significant epiphany for Uzawa, prompting him to reassess his prior neglect of the subjective elements intrinsic to economic theory. 
Consequently, he conceded the indispensable importance of social norms, cultural values, beliefs, and the quintessence of the human spirit in not only comprehending economic systems but also in effectively managing communal resources, as elucidated in his later works ( Uzawa, 2013 ). Supporting this viewpoint, Young (2011) illuminates the rise of modern resistance movements that contest the institutional structures enforced by local or national governments in managing common-pool resources. Simultaneously, Diamond (2011) delves into the historical societies that faced ruin owing to their mismanagement of these resources. Diamond contends that a significant element leading to these societal collapses was the lack of sufficient backing for governing bodies from the citizenry. His historical scrutiny accentuates the intrinsic vulnerability of these institutions, underscoring the imperative of maintaining unceasing legitimacy and endorsement from those governed. In his seminal work, Weber (1992) offers an incisive analysis into the pivotal role played by ethical and cultural bedrocks in shaping the institutions under scrutiny. He articulates a compelling argument regarding the symbiotic relationship between the Protestant ethic and the spirit of capitalism, asserting this nexus as instrumental in the genesis and subsequent development of the modern capitalist paradigm and its associated institutions. Weber further argues that the absence of modern capitalism in certain historical milieus may be ascribed to the dearth of religious and cultural precepts conducive to rational economic conduct and the systematic pursuit of profit, which are quintessential elements of the capitalist ethos. Consequently, it becomes paramount to comprehend and acknowledge the social and cultural values that are fundamental to the efficacious governance of communal institutions and the stewardship of resources. 
In this scholarly inquiry, the examination centers on the impact of the ‘human heart’ upon the resultant dynamics of the Prisoner’s Dilemma (PD) game, particularly highlighting the influence of social norms in moulding the decision-making paradigms of participants. The term ‘human heart’ is employed in a broad sense, encompassing an array of subjective facets including, but not limited to, cultural values, personal beliefs, and societal ethos. These facets, intrinsic to the fabric of social norms, exert a profound influence on the decision-making processes of individuals, a phenomenon well-documented within the realm of social psychology. In his seminal work, Berger (1963) posits that with the maturation of individuals, there is a concomitant internalization of societal norms, which themselves are a reflection of the cultural milieu, replete with its distinct values, beliefs, and ethos. This process of internalization exerts a profound influence on their cognitive schema. A pivotal juncture in this developmental trajectory is encapsulated within Erik Erikson’s stage of ‘Industry versus Inferiority’ in psychosocial evolution, primarily spanning the ages of five to thirteen, a period typically associated with elementary education ( Berns, 2007 ). It is within this critical phase that children are anticipated to cultivate cooperative skills, integral to societal norms. This epoch is instrumental in enabling them to assimilate the essence of cooperation. The principles and values imbibed during this formative stage are of significant consequence, shaping their future interpersonal dynamics and decision-making paradigms. Henceforth, it is imperative to acknowledge that social norms, intricately interwoven with the rich tapestry of cultural values, beliefs, and ethos, occupy a central position in the orchestration of not only individual behaviours but also the construction of personal identity and cognitive frameworks. 
These norms transcend mere external influences; they are inextricably enmeshed in the very warp and weft of our consciousness, profoundly influencing our perceptions and reactions to our surrounding milieu. Understanding the role of social norms in shaping the outcomes of the Prisoner’s Dilemma is crucial for comprehending cooperative behaviour. Tanimoto and Sagara (2007) offer valuable insights by highlighting how the structural aspects of games influence the emergence of dilemmas. Their work shows that in 2x2 games, static factors alone determine the occurrence of dilemmas, whereas more complex games require a consideration of both static and dynamic factors. This distinction underscores the importance of considering both inherent game structures and the evolving distribution of strategies when examining how social norms promote cooperation and mitigate dilemmas. Empirical evidence, garnered from a myriad of experiments, robustly substantiates the assertion that societal norms exert a considerable influence on the individual’s decision-making processes. The conceptual genesis of the Prisoner’s Dilemma (PD) game is firmly rooted in a hypothetical construct, drawing its essence from the quandary faced by two prisoners confronted with the dilemma of accepting a plea bargain in exchange for reduced sentences. In their seminal study, Khadjavi and Lange (2013) not only scrutinise this theoretical framework, but also adeptly extend its application to encompass real-world scenarios. This investigation meticulously juxtaposes the decision-making processes of female inmates with those of university students, employing a simultaneous implementation of the PD game. The findings of this study are particularly salient, revealing a marked divergence in cooperation rates between the two cohorts. Notably, the rate of cooperation observed amongst the inmates significantly surpasses that discerned within the student demographic. 
While Khadjavi and Lange (2013) shed light on this behavioural divergence, their analysis does not extend to a definitive explanation for the pronounced propensity for cooperation observed among prisoners. They astutely suggest that this discrepancy in behaviour could be attributed to disparate socio-demographic factors inherent to each group. These factors, albeit not exhaustively examined in their study, likely encompass a wide array of components, such as varied life experiences, social backgrounds, and intrinsic motivations. The scholarly inquiry conducted by Khadjavi and Lange (2013) postulates that within the carceral environment, entrenched societal conventions may wield substantial influence over the cognitive choices of individuals. Whilst the study refrains from an explicit delineation of these norms, it intriguingly intimates that the unique social dynamics and relational interplays, characteristic of the penal setting, may profoundly shape the conduct of prisoners within the milieu of the Prisoner’s Dilemma game. Consequently, an exhaustive exploration into these foundational sociocultural determinants is essential to comprehensively comprehend the pronounced disparities in cooperative tendencies discernible between the cohorts of incarcerated individuals and university students. Building on the study by Khadjavi and Lange (2013) on the Prisoner’s Dilemma (PD), Katchanovski and Center (2003) conducted a detailed analysis of political detainees during the Great Terror in the Soviet Union in the 1930s. Their investigation revealed that very few political prisoners made voluntary confessions, deviating significantly from the expected outcomes of the PD model. This finding suggests the need for a more detailed examination of the various factors influencing such behavior. Katchanovski and Center (2003) argue that torture and coercive methods significantly reduced the number of voluntary confessions, aligning with rational choice theory. 
The threat of severe consequences, such as prolonged detention or death, discouraged many from confessing. The study highlights the influence of strict social norms and political suppression during the Great Terror, pressuring individuals to conform to ideological orthodoxy. This example underscores the profound impact of external factors on decision-making, extending beyond the traditional boundaries of the Prisoner’s Dilemma model. A further exemplar, highlighting the significant influence of social norms upon the mechanisms of decision-making, is provided by the study conducted by Roth et al. (1991) . This academic endeavour employed a cross-cultural experimental paradigm, encompassing a diverse array of nations, with the objective of shedding light on the impact of differing subject-pool attributes and transaction scales on bargaining strategies and market dynamics. At the heart of their inquiry lay the analysis of an ultimatum game, participated in by pairs of individuals. The conclusions drawn from their scholarly inquiry have cast light upon the presence of unregulated discrepancies within cohorts of subjects spanning diverse nations. Whilst Roth et al. (1991) refrained from engaging in a comprehensive investigation into the precise catalysts or foundational reasons for these variances, the disparity in societal customs presents itself as a plausible agent of influence, potentially elucidating the noted variegations in outcomes across subject groups hailing from distinct cultural milieus. This inference is congruent with the notion that social norms, frequently embedded within the cultural weave of a society, wield significant sway over the decision-making frameworks of individuals, thus shaping the mosaic of economic interactions. 
In this context, it becomes imperative to exercise a discerning analysis of the role that such norms play, for a refined comprehension of the complexities that are intrinsically embedded within the framework of decision-making processes, particularly when considered within the realm of cross-cultural exchanges. The preceding exposition cogently accentuates the salient role that societal norms assume in moulding the mechanisms through which individuals make decisions. This facet is oftentimes undervalued within the traditional framework of the prisoner’s dilemma, a notable oversight that does not duly recognise the profound impact these norms impart upon the decision-making processes of the entities involved. In this scholarly endeavour, the initiation of discourse is marked by a meticulous examination of the quintessential paradigm known as the Prisoner’s Dilemma (PD), wherein I endeavour to explicate the nuanced manner in which societal mores exert a profound impact upon the utility functions of the involved players. As the exposition unfolds, it becomes manifest that, contingent upon the stimulus for collaborative engagement — propagated by the aforesaid social norms — attaining a critical juncture, players are predisposed towards electing a cooperative strategy, this predilection persisting even in the stark absence of any overt indicators that might otherwise facilitate the orchestration of their actions in concert. Notwithstanding the extensive corpus of scholarship probing the intricacies of the Prisoner’s Dilemma as a means to illuminate the tragedy of the commons, there remains a conspicuous lacuna in sufficiently incorporating the subtle impact of social norms upon the utility functions of individuals. Conventional paradigms have primarily concentrated on fiscal and tangible incentives, frequently neglecting the significant influence that internalised norms and cultural values exert in moulding the processes of decision-making. 
This neglect intimates a pivotal path for scholarly inquiry, wherein the indirect evolutionary methodology advocated by Güth and Kliemt (1998) presents an auspicious framework for ameliorating this deficiency. This investigation endeavours to ameliorate the previously identified lacuna by advancing a model that incorporates the notion of ‘propensity to cooperate’ within the utility functions of players participating in the Prisoner’s Dilemma. Through the application of this model, it is the author’s ambition to elucidate the manner in which cooperative norms may substantially modify strategic equilibria, thus fostering a transition towards more collaborative outcomes notwithstanding the prospect of immediate individual detriments. In pursuing this objective, the present research aspires to furnish a more nuanced comprehension of the processes by which social norms exert influence over collective action and the formulation of strategies for resource management. The contributions of this investigation transcend the purely theoretical domain, proffering pragmatic insights for those engaged in policymaking, organisational leadership, and communities confronting the quandaries associated with the management of common-pool resources. Through elucidating the critical function of cooperative norms in promoting the sustainable utilization of resources, this study accentuates the capacity of non-fiscal mechanisms to mould behaviour in favor of the collective welfare. Moreover, it revitalises the debate surrounding the tragedy of the commons, championing a reassessment of traditional economic paradigms via the perspective of social norms and cooperative conduct. The structure of this paper is meticulously delineated as follows: Section 2 provides an erudite exposition of the model being scrutinized within the ambit of this scholarly treatise. 
It specifically elucidates upon the metamorphosis of the archetypal Prisoner’s Dilemma (PD) construct into a nuanced iteration, wherein the utility functions are intricately modulated by the assimilation of internalised social norms. Pursuing this, Section 3 embarks on a thorough disquisition of the aforementioned Prisoner’s Dilemma paradigm, as rigorously defined in Section 2 . In its consummation, Section 4 serves as the culmination of this intellectual inquiry, encapsulating the pivotal findings and insights that have been illuminated over the span of this investigation.

2 Model

This investigation embarks upon its scholarly journey with a detailed examination of the quintessential one-shot prisoner’s dilemma paradigm, a construct of notable intrigue within the domain of game theory. Within this paradigm, the players, henceforth designated as Player 1 and Player 2, are introduced to a symmetrically structured matrix of payoffs, meticulously outlined in Table 1 . It is imperative to underscore, upon inspection of this tabular representation, that the numerical allocations conform rigorously to the conditions c > a > d > b and 2 a > b + c . Within the intricate tapestry of the Prisoner’s Dilemma, the latter inequality emerges as a pivotal mechanism, assuring that the scenario in which both players opt for a cooperative stance (C) culminates in the optimal enhancement of the collective sum of their individual payoffs. This aspect not only underscores the strategic complexity inherent in the dilemma but also highlights the delicate balance between individual rationality and collective benefit. Numerous scholarly inquiries within the domain of game theory have traditionally engaged in the analysis of games by concentrating on the monetary outcomes, as exemplified in Table 1 .
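The two payoff restrictions on Table 1 are mechanical to verify. The following minimal sketch simply restates the inequalities in code; the function name is my own, and the illustrative values are those used later in the numerical example of Section 3:

```python
def is_prisoners_dilemma(a, b, c, d):
    # The paper's conditions on Table 1: c > a > d > b, and 2a > b + c,
    # the latter ensuring mutual cooperation maximises the joint payoff.
    return c > a > d > b and 2 * a > b + c

# Illustrative values taken from the numerical example in Section 3.
print(is_prisoners_dilemma(a=7, b=1, c=10, d=3))  # True
```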
However, the perspective advanced by Güth and Kliemt (1998) represents a departure from this traditional economic paradigm, which predominantly conflates utility with the accrual of monetary gains or profit. They contend that such a reductive interpretation fails to capture the full spectrum of influences on human decision-making processes. Specifically, they argue that utility functions are inherently complex, shaped not only by tangible dimensions but also by intangible elements. This perspective aligns with the tenets of behavioural economics, which assert that human behavior — and, by extension, utility — cannot be fully apprehended through the prism of objective, financial incentives alone. Hence, utility functions embody a synthesis of subjective motivations and objective factors, moving beyond the narrow confines of purely monetary considerations. In view of this, it becomes essential to reconceptualise the monetary outcomes depicted in Table 1 in terms of utility functions that more accurately mirror the intricate confluence of factors that influence human decisions. Within the ambit of orthodox economic theory, an extensive array of models is predicated upon the axiom that utility functions pertaining to positive remunerations manifest a concave disposition, distinguished by an ascending progression concomitant with a decelerating rate of augmentation. Such a presupposition harmonises with the elemental tenets of prospect theory, wherein the representation of gains is effected through concave value functions, a notion meticulously expounded in the landmark exposition of Kahneman and Tversky (1979) . Pertinent to the discourse of this investigation, the utility function appertaining to a specified participant, denominated as u i for player i , with i adopting the values 1 or 2, is delineated as follows: u i = u i ( s 1 , s 2 ) ≥ 0.
Herein, s i signifies the elected action by player i , encapsulating the strategic alternatives of ‘Cooperate’ or ‘Defect.’ This exposition facilitates the subsequent articulation of the utility function matrix as expounded in Table 2 . In the current scholarly investigation, the utility functions subject to scrutiny are characterised by their inherently concave disposition, manifested through an ascendant trajectory, though accompanied by a diminishing rate of augmentation. This phenomenon culminates in the perpetuation of the inequality c > a > d > b . To elucidate further, this can be eloquently expressed as u 1 ( D , C ) > u 1 ( C , C ) > u 1 ( D , D ) > u 1 ( C , D ). In a parallel vein, an analogous inequality finds relevance to u 2 ( s 1 , s 2 ), thereby underscoring a consistent pattern across distinct utility assessments. In the present discourse, due reflection is afforded to the ramifications of societal conventions upon the utility function attributable to each participant ensconced within the ambit of game theoretic paradigms. To encapsulate the essence of this influence, the introduction of a novel conceptual framework, designated as the ‘propensity to cooperate,’ is advocated. This construct operates in a manner whereby the utility function pertaining to player i is enhanced by a multiplicative coefficient, denoted as k i , where k i > 1, predicated upon the adoption of the ‘Cooperate’ (C) action by said player. This adjustment precipitates a recalibration of the game matrix, as expounded in Table 3 . To shed light on the pragmatic ramifications of the theoretical framework posited herein, it proves beneficial to undertake a review of historical episodes wherein the norms of cooperation have exerted a significant impact upon economic results. These instances serve not merely to corroborate the presuppositions of the model but also to highlight the critical necessity of amalgamating sociological acumen with economic theorisation.
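The recalibration that yields Table 3 can be sketched concretely. The code below is an illustrative construction rather than the paper's own notation: it assumes the symmetric payoffs of Table 1, uses for definiteness the concave utility ln(x + 1) that the paper adopts in Section 3, and the function names are my own.

```python
import math

def u(x):
    # A concave utility: increasing in the payoff at a diminishing rate.
    return math.log(x + 1)

def base_game(a, b, c, d):
    # Utility version of Table 2 for the symmetric payoffs of Table 1;
    # each entry is (player 1's utility, player 2's utility).
    return {
        ("C", "C"): (u(a), u(a)),
        ("C", "D"): (u(b), u(c)),
        ("D", "C"): (u(c), u(b)),
        ("D", "D"): (u(d), u(d)),
    }

def with_propensity(game, k1, k2):
    # Table 3's recalibration: player i's utility is scaled by k_i > 1
    # whenever that player chooses 'Cooperate' (C).
    return {
        (s1, s2): (u1 * (k1 if s1 == "C" else 1.0),
                   u2 * (k2 if s2 == "C" else 1.0))
        for (s1, s2), (u1, u2) in game.items()
    }
```

Once k 1 and k 2 exceed the critical threshold derived in Section 3, the cooperative action comes to dominate each player's alternatives in this recalibrated matrix.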
Through the analysis of the symbiosis between cooperative norms and utility functions within actual scenarios, we are afforded a more profound comprehension of the underpinnings that promote collaboration in the stewardship of common-pool resources. In contrast to the model proposed by Kandel and Lazear (1992) , which elucidates the conscious, calculative dynamics inherent in partnerships and profit-sharing arrangements through manipulation of each payoff, my approach delves into deeper, somewhat less tangible realms of cooperative behavior. The distinction lies in the treatment of utility: whereas Kandel and Lazear adjust payoffs based on explicit rational calculations, my model introduces a multiplier, k , to all utilities resulting from a player’s choice to cooperate. This distinction underscores a fundamental shift from mere rational calculations to the internalization of social norms, drawing inspiration from Max Weber and Sigmund Freud’s theoretical frameworks. The spirit of capitalism, as articulated by Weber (1992) , suggests that cooperative behavior is not merely a product of economic incentives but is profoundly shaped by cultural values and ethical norms. This perspective posits that the propensity to cooperate may be deeply embedded in the cultural and ethical fabric of society, guiding behavior in ways that transcend straightforward economic calculations. Building on this, I incorporate Freud’s notion of the preconscious, which refers to thoughts and feelings that are not in immediate conscious awareness but can readily become conscious ( Huffman, 2004 ). It implies that the inclination towards cooperation, influenced by internalised norms and values, operates at a level that, while not overtly conscious, significantly shapes behavior through a complex interplay of psychological predispositions. 
This nuanced approach to understanding cooperative behavior diverges from the conscious, calculative focus of Kandel and Lazear (1992) by exploring the role of deeply internalised cultural and psychological factors. By integrating the insights of Weber and Freud, the model offers a multidimensional view that encapsulates the intricate interplay of economic, cultural, and psychological factors. It posits cooperation as a phenomenon rooted in a rich tapestry of influences that span the conscious and the preconscious, the economic and the ethico-cultural, marked by a pivotal distinction in how utilities are conceptualised and manipulated. In addition to the work by Kandel and Lazear (1992) , within the ambit of cooperative behaviours and incentive mechanisms in formal organisational contexts, Petersen (1994) offers a salient discourse on the resolution of free-rider dilemmas through innovative incentive schemes. His exploration, particularly within work teams, elucidates the efficacy of group piece-rate and target-rate schemes, thereby providing a pragmatic lens through which the nexus between individual contributions and collective outputs may be examined. This stands in contrast to the present study, which ventures beyond the tangible confines of economic incentives, delving into the profound influences exerted by internalised cooperative norms on the decision-making panorama within the Prisoner’s Dilemma framework. Petersen’s pragmatic analysis, while addressing the immediate quandaries faced by organisations in fostering collective effort, inadvertently demarcates a scholarly lacuna concerning the deeper, more intrinsic motivators of cooperation. The current exploration seeks to bridge this gap, proposing a theoretical model that intertwines the seminal notions posited by Weber and Freud with the propensity to cooperate. 
This model, thus, transcends the mere economic rationality of cooperative endeavour, unveiling the sociopsychological undercurrents that underpin such behaviour. Moreover, the theoretical expansion proffered herein not only diverges from Petersen’s empirical gaze but also extends an invitation to re-examine the empirical manifestations of cooperative norms within organisational settings. By embedding the preconscious and societal norms within the utility functions of individuals, this study elucidates a nuanced schema of cooperation, thereby enriching the theoretical tapestry with deeper psychological and sociological insights. The empirical implications of this theoretical model beckon a broader application, suggesting avenues for future research to empirically test and validate the impact of societal norms and the preconscious on cooperative behaviour in various contexts, including but not confined to organisational settings. This gesture towards empirical exploration serves not only to complement Petersen’s findings but also to underscore the potential for a symbiotic relationship between theoretical insights and empirical observations in the quest to unravel the complex mechanisms driving cooperative behaviour. Now, in the scholarly discourse of game theory, a remarkably fresh perspective is proffered by Tóbiás (2023) through the introduction of an avant-garde equilibrium construct, designated as the ‘Rationally Altruistic Equilibrium.’ This innovative conceptual framework heralds a paradigmatic shift, enabling players within the schematic confines of a game to consider the welfare of their adversaries, thus, recalibrating the traditional dynamics inherent to game theory. Historically predicated upon the axiom of self-interest, strategic engagements within this theoretical model demonstrate the viability of altruistic comportment. 
The construct is meticulously tailored for finite games that encompass a public signal, thereby evidencing its applicability in contexts where the interests of the players diverge, exemplified by the prisoner’s dilemma. Tóbiás (2023) delineates both the merits and demerits of this model, highlighting the facilitation of enhanced cooperation and, concurrently, underscoring the model’s dependency on a public signal alongside its reduced effectiveness in scenarios where the interests of the players are either aligned or diametrically opposed. This exposition introduces a paradigm-shifting perspective on the arena of strategic decision-making, articulating the premise that the principles of rational self-interest and altruistic behaviour are not mutually exclusive but can indeed coexist harmoniously. Such a notion significantly enriches our comprehension of the intricacies inherent in economic and organizational landscapes. Nonetheless, the practical application of this theoretical construct, particularly in the governance of common-pool resources, unveils several discernible limitations. A principal critique lies in the model’s reliance upon a public signal, which stands as a conspicuous flaw. Elinor Ostrom, through her seminal empirical investigations into the governance of commons, has incontrovertibly demonstrated that numerous self-organized resource regimes adeptly navigate the challenges of management without recourse to a third-party signal or an external enforcement mechanism. Her findings, elaborately presented in Ostrom (2000) , cast a shadow of doubt over the sweeping applicability of the model proposed by Tóbiás (2023) in the tapestry of real-world scenarios, thereby challenging the assertion of its universal relevance. Secondly, the notion that players may consciously choose altruism simplifies the intricate landscape of human psychology. 
Drawing from Max Weber’s insights on the spirit of capitalism ( Weber, 1992 ) and Freud’s concept of the preconscious ( Huffman, 2004 ), it becomes evident that cooperative behavior in the context of shared resources may not solely be a matter of rational choice or instinctual drive. Instead, it is deeply influenced by the cultural and ethical fabric of society, internalised to the extent that cooperative decisions emerge from a complex interplay of conscious reasoning and preconscious inclinations shaped by societal norms. This view suggests that the propensity to cooperate is embedded within the individual’s psyche, influenced by a web of social norms and values that operate at both conscious and subconscious levels, thus reinforcing the stability and predictability of collective actions. Thirdly, the model in question encompasses a spectrum of potential outcomes, ranging from cooperative (CC) to non-cooperative (CD, DC, DD) scenarios. However, its applicability becomes notably attenuated within the context of common-pool resource management. The quintessential paradigm for efficacious management in this domain mandates unwavering cooperation amongst all stakeholders, a precondition that is not invariably assured by the framework proposed by Tóbiás (2023) in his seminal work on Rational Altruism. In summation, the treatise proffered by Tóbiás (2023) on the subject of ‘Rational Altruism’ presents an avant-garde exploration into the intricate nexus between societal norms and altruistic behaviours within the ambit of game theoretical analysis. Nonetheless, it appears to somewhat lack in its comprehensive engagement with the multifaceted challenges associated with the stewardship of common-pool resources. The ensuing section endeavours to elucidate a model more befitting this context, as explicated in Table 3 . Herein, it is posited that through meticulous adjustment of the variables k 1 and k 2 , a paradigm of consistent cooperation may indeed be realised. 
The ensuing segment, subsequent to laying the requisite foundational underpinnings, embarks upon a comprehensive inquiry into ascertaining the critical value of k i . This elucidation is of paramount importance, as it heralds the ascendancy of action C within the strategic paradigms of each player. The preeminence of action C fosters an inclination in player i towards a collaborative disposition, a strategic inclination designed to ameliorate the peril of inordinate exploitation of the common-pool resource. Within this framework, altruism emerges as an endogenous phenomenon, epitomizing a strategically self-serving rational choice. Moreover, the presuppositions embedded within the model regarding the influence of internalised social norms upon individual decision-making mechanisms necessitate a discourse on the conceivable diversity of these norms across disparate cultures and societies. Such diversity prompts critical inquiries concerning the model’s universal applicability and the degree to which cultural particularities may affect the strategic equilibrium outcomes envisaged by the model. Delving into these inquiries not only augments our comprehension of the model’s constraints but also unveils pathways for forthcoming research to investigate the cultural facets of cooperative conduct. In anticipation of empirical validation, it is imperative to delineate potential methodologies for quantifying the ‘propensity to cooperate’ and its influence on utility functions within experimental paradigms. Such methodological deliberations are quintessential for the transmutation of the theoretical framework into tangible research endeavours capable of examining the posited hypotheses concerning the influence of cooperative norms on economic comportment. By furnishing a lucid schema for empirical scrutiny, the theoretical tenets of the model may be subjected to meticulous examination and refinement.
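The critical value just alluded to can be previewed numerically. The sketch below is a hedged illustration: it assumes the logarithmic utility u(x) = ln(x + 1) introduced in the next section, and the threshold it evaluates is the expression stated there in the Proposition; the helper names are my own.

```python
import math

def u(x):
    # Logarithmic utility adopted in Section 3: u(0) = 0, increasing, concave.
    return math.log(x + 1)

def critical_k(a, b, c, d):
    # The Proposition's threshold: C dominates for a player once
    # k exceeds max{ln(c+1)/ln(a+1), ln(d+1)/ln(b+1)}.
    return max(u(c) / u(a), u(d) / u(b))

# With the illustrative payoffs a = 7, b = 1, c = 10, d = 3 of Section 3,
# ln(11)/ln(8) ≈ 1.153 and ln(4)/ln(2) = 2, so cooperation requires k > 2.
k_star = critical_k(7, 1, 10, 3)

# Any k above the threshold satisfies both dominance inequalities:
k = 2.1
print(k * u(7) > u(10) and k * u(1) > u(3))  # True
```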
3 Analysis

In the present discourse, I endeavour to explicate upon the strategic interplay as demarcated within Table 3 . A meticulous scrutiny of said table discloses that a paradigm wherein both players opt for collaboration emerges as a Nash equilibrium, contingent upon the fulfilment of the conditions that k 1 u 1 ( C , C ) > u 1 ( D , C ), k 1 u 1 ( C , D ) > u 1 ( D , D ), k 2 u 2 ( C , C ) > u 2 ( C , D ), and k 2 u 2 ( D , C ) > u 2 ( D , D ). To further illuminate the intricacies of this Nash equilibrium, with a particular emphasis on the phenomenon of mutual cooperation, it becomes exigent to delineate the functional form of the utility functions with precision. In pursuit of this objective, I have elected to employ the function u i = ln( x + 1), wherein x ≥ 0 signifies the payoff. The adoption of this logarithmic function, a canonical element within the corpus of economic theory, aligns with the previously enunciated criteria for utility functions, to wit, u i ≥ 0 and it manifests an incrementing, albeit diminishing, correlation with the payoff. The strategic incorporation of ‘+1’ in the expression ensures that a payoff of nought yields a utility of u i (0) = ln(0 + 1) = 0. Following this delineation, the architecture of the game matrix is subject to alteration, as illustrated in Table 4 . In consideration of the particulars meticulously outlined within Table 4 , the ensuing hypothesis is thus advanced with due solemnity.

Proposition. In the milieu of the game delineated in Table 4 , the scenario in which both players elect to engage in cooperative behaviour constitutes a Nash equilibrium, contingent upon the fulfilment of the ensuing conditions: (1) k 1 > max{ln( c + 1)/ln( a + 1), ln( d + 1)/ln( b + 1)}, and (2) k 2 > max{ln( c + 1)/ln( a + 1), ln( d + 1)/ln( b + 1)}.

Proof.
In the discourse at hand, I delve into the strategic nuances that bear significance for Player 1, with the proviso that my analysis holds a mirror to the strategic considerations of Player 2 in an analogous fashion. It has been previously posited, with a degree of rigour, that the supremacy of action C as the preferable stratagem for Player 1 hinges upon the fulfilment of two pivotal inequalities: k 1 u 1 ( C , C ) > u 1 ( D , C ) and k 1 u 1 ( C , D ) > u 1 ( D , D ). These inequalities encapsulate the quintessential prerequisites for Player 1’s election of action C to engender an outcome of superior utility, irrespective of the countermeasures undertaken by Player 2. Venturing into the particular circumstance in which the utility function assumes a logarithmic configuration, it is from the conditions previously delineated that one can extrapolate the ensuing inequalities: k 1 ln( a + 1) > ln( c + 1) and k 1 ln( b + 1) > ln( d + 1). Such formulations encapsulate the assessments of logarithmic utility corresponding to the myriad strategy permutations, wherein the variables a , b , c , and d signify the respective payoffs. In order to elucidate the quintessential value of k 1 that delineates the threshold of dominance, one embarks upon the resolution of the previously delineated inequalities. The solutions manifest as k 1 > ln( c + 1)/ln( a + 1) and k 1 > ln( d + 1)/ln( b + 1), thus establishing a critical criterion for the ascendancy of action C, as encapsulated within inequality (1). This formulation constitutes the bedrock of my enquiry into the strategic dominance of Player 1’s selections, considering the framework of logarithmic utility. □ To render the thesis under discussion with greater depth and clarity, one might consider the employment of numerical illustration. Let us posit, for the sake of argument, that a = 7, b = 1, c = 10 and d = 3.
These variables satisfy the prescribed criteria c > a > d > b and 2 a > b + c , thereby affirming the applicability of this instance within the theoretical bounds of a Prisoner’s Dilemma framework. Within this model, the strategic paradigm whereby both participants elect the path of mutual cooperation emerges as the Pareto optimal choice. Delving further into the numerical delineations, it becomes evident that ln( c + 1)/ln( a + 1) ≈ 1.153 and ln( d + 1)/ln( b + 1) = 2. Thus, in the event that both parameters k 1 and k 2 surpass the threshold of 2, it is deduced that the strategy of unanimous cooperation ( C ) not only serves as a Nash equilibrium but also harmonises with the principles of Pareto optimality. In consideration of the preceding analysis, it becomes incumbent upon us to reflect upon the practical implications of these findings within the ambit of real-world scenarios. The explication of the conditions under which cooperative norms may act as a catalyst for mutual cooperation within the ambit of the Prisoner’s Dilemma framework not only augments our theoretical comprehension but also provides crucial insights for the formulation of policy, the conduct of organizational behavior, and the stewardship of common-pool resources. By weaving these theoretical principles into the very warp and weft of policy development and organizational strategising, we may cultivate milieus that are propitious to cooperative conduct, thus ameliorating the incidence of the tragedy of the commons. This necessitates a paradigmatic shift in our conceptualization of the nexus between individual utility functions and collective outcomes, advocating for a more refined understanding of the role that internalised norms play in sculpting strategic interactions.

4 Discussion and concluding remarks

This inquiry meticulously elucidates the profound influence of cooperative social mores on decision-making processes within the ambit of the Prisoner’s Dilemma paradigm.
Through rigorous examination, it has been ascertained that the embracement of such norms precipitates a marked transition towards the augmentation of collective welfare, thus surmounting the ephemeral temptation of individualistic advantage. This paradigmatic transition from discord to collaboration proffers innovative pathways for ameliorating the conundrum known as the tragedy of the commons. Recent advancements in evolutionary game theory have emphasized the complexity of measuring dilemma strength in heterogeneous networks. Wang et al. (2015) highlight that the variability in the number of interactions among individuals significantly impacts the strength of social dilemmas. Their findings suggest that both the structural properties of networks and the dynamics of individual interactions must be considered to fully understand how social norms influence cooperation. This perspective aligns with the notion that social norms embedded within the cultural fabric of a society play a crucial role in shaping cooperative behavior, especially in the context of the Prisoner’s Dilemma. Ito and Tanimoto (2018) provide a comprehensive framework that helps to visualize and quantify the effects of various reciprocity mechanisms on the strength of social dilemmas. By creating phase-plane diagrams, they illustrate how mechanisms such as direct reciprocity, indirect reciprocity, kin selection, group selection, and network reciprocity can transform the nature of these dilemmas. This is particularly relevant to understanding the role of internalised social norms, as these norms often underpin the reciprocity mechanisms that promote cooperation. The visual representation of how these mechanisms reduce the intensity of dilemmas offers valuable insights into how internalised cultural values and societal expectations drive cooperative behavior in the Prisoner’s Dilemma. 
By showing that social norms can shift the game dynamics towards more cooperative outcomes, their work underscores the importance of considering both static and dynamic factors in the study of social norms and cooperation. The study of social norms and their impact on decision-making in the Prisoner’s Dilemma can be further enriched by considering dynamic factors such as current wealth. Ito and Tanimoto (2020) introduce a dynamic utility function (DUF) that incorporates an individual’s current status into game theory models. Their findings show that the DUF promotes cooperation among poorer players by relaxing the gamble-intending dilemma (GID) while enhancing the risk-averting dilemma (RAD). This dynamic approach aligns with the concept of internalised social norms, which are influenced by both cultural contexts and individuals’ current conditions, providing a more nuanced understanding of cooperative behavior in evolutionary games. Recent developments in sociophysics have highlighted the significant role of social norms in shaping individual behaviors during epidemics. Tanimoto (2021) explores how evolutionary game theory can be applied to model the spread of infectious diseases, emphasizing the impact of cooperative and defector behaviors on epidemic outcomes. By integrating concepts from sociophysics, this work provides a robust framework for understanding how internalised social norms influence cooperation in public health contexts. These insights are crucial for comprehending how cultural contexts and societal expectations drive decision-making in the Prisoner’s Dilemma, particularly when public health is at stake. In this discourse, I have ventured beyond the foundational premises laid down by Kandel and Lazear (1992) , who adeptly elucidated the dynamics of peer pressure within the realms of partnerships and profit-sharing arrangements. 
Their theoretical exploration, focusing predominantly on the internal mechanisms of peer pressure and mutual monitoring, has provided a robust scaffold upon which I have sought to extend the discussion. My investigation diverges by delving into the substantive role of internalised cooperative norms, thereby broadening the conceptual understanding of decision-making processes within the Prisoner’s Dilemma. This shift from a focus on peer dynamics to the intrinsic values that underpin cooperative behaviour marks a significant departure from the existing paradigms. Furthermore, my elucidation on the application of cooperative norms to the commons dilemma introduces an innovative perspective that transcends the specific contexts of partnerships. By embedding societal norms directly into strategic considerations, I illuminate the profound impact these norms can have on fostering collective action, particularly in the stewardship of common-pool resources. This novel approach not only amplifies the dialogue surrounding cooperative behavior but also advances the discourse by underscoring the pivotal role these norms play in addressing larger socio-economic and environmental challenges. The practical ramifications of this investigation extend well beyond the purely theoretical domain, furnishing policymakers and organizational leaders with actionable guidance for the formulation of strategies and management practices. This highlights the paramount importance of cultivating cooperative norms as a non-fiscal, yet profoundly efficacious instrument for modulating behaviour, thereby rendering a substantial contribution to the discipline of behavioural economics. My investigation, though it furnishes a plethora of invaluable insights, is not devoid of limitations. The presuppositions underpinning the model, especially the notion of homogeneous assimilation of norms, might not entirely encapsulate the intricacies inherent in real-world contexts. 
Prospective studies, concomitant with the empirical methodology employed by Khadjavi and Lange (2013) , could proffer a more verisimilar approximation of the manner in which norms are internalised by each participant. Such an endeavor would significantly augment our comprehension of the dynamics that govern cooperative decision-making processes. The model’s employment of a static, singular encounter methodology in addressing the Prisoner’s Dilemma signifies an additional constraint, given that real-world situations seldom adhere to such rigid parameters. It is incumbent upon future research endeavours to entertain the possibility of dynamic or iterated engagements. Moreover, the constants denoted as the propensity to cooperate, k 1 and k 2 , are subject to potential variation over the passage of time, especially within the ambit of common-pool resource stewardship. This necessitates a thorough exploration into the chronological development of these coefficients. Pertaining to the quintessence of primary education, the liberal perspectives of Dewey, as explicated by Uzawa (2013) , underscore the cultivation of both inherent and acquired faculties of the human condition. Dewey (1916) , with particular acumen, elucidates the symbiotic relationship between democracy and the educational paradigm, envisaging democracy not solely as a political construct but as a way of life suffused with communal values and collective experiences. He champions an educational ethos that fosters democratic virtues such as critical discernment, receptivity to diverse viewpoints, and a profound sense of camaraderie. Furthermore, Dewey accentuates the imperative of moral and social maturation as pivotal outcomes of education. He conceives of the scholastic environment as a crucible for the inculcation of responsibility and ethical comportment, thus nurturing a spirit of collaboration. 
This pedagogical stance is in harmony with the notion that efficacious education during the ‘Industry versus Inferiority’ phase precipitates the emergence of adults who are both collaborative in nature and adept at stewarding communal resources with dexterity. Presently, the enquiry emerges concerning the quintessential paradigm of education that harmonises with Dewey’s liberalist philosophy whilst efficaciously nurturing a predilection for cooperative responsibility and ethical conduct. Young (2011) introduces the notion of a ‘diagnostic approach’ to the stewardship of the commons, accentuating the imperative for the adaptation of institutions and governance frameworks to the distinctive biophysical and socio-economic milieus. This methodology necessitates a comprehensive evaluation of each scenario pertaining to environmental management, with the aim of devising and enacting strategies that are both effective and sustainable. It underscores the premise that every commons is imbued with unique cultural and historical nuances, thereby necessitating an education approach that is customised to these specificities. Consequently, whilst the inculcation of responsibility and ethical sensibilities in pupils is a universal aim, the means through which this is achieved must be congruent with the particular societal and cultural backdrop of each community. Each community ought to undertake an exhaustive empirical analysis of their cultural and social frameworks, and craft educational material that resonates profoundly with their progeny, thus amplifying their inclination towards collaboration for the triumphant governance of the commons. The question of access to communal resources in the modern era presents a distinct quandary, especially when individuals, who have not been inculcated through local pedagogical frameworks, aspire to obtain rights of usage. These persons may not naturally embody the cooperative social mores anticipated by the society stewarding these assets. 
Nonetheless, it is manifestly impracticable for societies to exclusively limit access to those indoctrinated within their own educational confines. It is of particular note that Erik Erikson delineates the ‘Industry vs. Inferiority’ phase as pivotal for the assimilation of norms, yet this process is not confined to the precincts of early childhood. Indeed, social mores can be assimilated during later stages of adulthood, although this may occur with a lesser degree of ingrained depth ( Berns, 2007 ). This acknowledgment underscores the necessity for bespoke educational programmes for newcomers to these resources, aiming to enlighten them regarding the cultural and historical underpinnings of the specific commons. Such pedagogical endeavours could significantly enhance their integration and collaboration with incumbent resource users, thereby ensuring the efficacious governance of the commons. Moreover, the discourse at hand unveils a conspicuous lacuna within numerous models of game theory, inclusive of the quintessential Prisoner’s Dilemma, which frequently neglect the societal milieu in which the participants are embedded. These models conventionally abstract the players from their societal contexts, a neglect that has been astutely remarked upon by academicians such as Mamada and Perrings (2022) . They champion the integration of societal influences into the frameworks of game theory, as exemplified in their pioneering quantum entanglement model designed to encapsulate the influence of society on the dynamics of resource extraction games. This vantage point advocates for the augmentation of the prevailing model to incorporate societal influences on cooperative behaviours, thereby augmenting our comprehension of the dynamics that underpin the inclination towards cooperation within diverse societal frameworks. 
Additionally, the implications of this study extend well beyond the immediate sphere of resource management and economic modeling, delving into the arena of global challenges where the imperative of cooperation cannot be overstated. The insights gleaned from our analysis illuminate a promising pathway for devising strategies to combat climate change, enhance international cooperation, and confront global health crises. By evidencing the profound impact of cooperative norms on strategic behavior, my research highlights the essential need for embracing a more collaborative stance in global governance. This requires an interdisciplinary approach to cultivate and reinforce cooperative norms, utilising insights from economics, sociology, and behavioral science to create solutions that are both effective and sustainable. The quest to surpass traditional paradigms in favor of a more cooperative ethos is not solely an academic endeavor but a moral imperative in our shared journey towards a more equitable and sustainable future. In the twilight of this inquiry, the profound insights of Weber and Freud emerge as luminaries guiding our understanding of the underpinnings of cooperation. Weber’s exposition on societal norms as the fabric of economic action, juxtaposed with Freud’s delineation of the preconscious, furnishes a compelling vista through which the complexities of the Prisoner’s Dilemma and the tragedy of the commons may be re-examined. This study’s foray into the shadows of the mind reveals that the essence of cooperation extends far beyond the calculative confines traditionally postulated, embedding itself within the intricate tapestry of societal norms and preconscious motivations. The implications of this paradigmatic shift are manifold. 
It beckons policymakers and scholars alike to recalibrate their compasses towards a more holistic understanding of human behavior, one that accommodates the silent whispers of cultural imperatives and the subtle nuances of the preconscious. As we chart the course towards sustainable resource management, the insights garnered from this exploration advocate for strategies that resonate with the deeper psychological and sociological currents that propel cooperative endeavors. Looking to the horizon, the integration of Weber’s and Freud’s theories opens new avenues for empirical investigation, inviting a multidisciplinary approach to unravel the enigmas of cooperation. Future research might endeavor to empirically validate the model proposed herein, exploring the tangible impacts of societal norms and preconscious motivations on collective action across diverse cultural landscapes. Such endeavors will not only enrich our theoretical repertoire but also equip us with the tools to forge more cohesive and resilient communities in the face of shared challenges. In summation, the present investigation elucidates the pivotal role of cooperative social norms in ameliorating the conundrum presented by the Prisoner’s Dilemma, whilst concurrently laying the groundwork for both theoretical and practical inquiries into the cultivation of cooperative dispositions. By assimilating these revelations, we are presented with a propitious opportunity to confront and surmount pressing societal tribulations, most notably in the realm of sustainable stewardship of common-pool resources.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
CRediT authorship contribution statement
Robert Mamada: Writing – review & editing, Writing – original draft, Software, Resources, Methodology, Investigation, Formal analysis, Conceptualization. 
Declaration of generative AI and AI-assisted technologies in the writing process
In the course of preparing this manuscript, the author employed the capabilities of ChatGPT 4 for the rectification of spelling and grammatical inaccuracies. Subsequent to the utilization of this tool, the author undertook a thorough review and refinement of the content, and assumes complete accountability for the integrity of the published work.
Declaration of competing interest
I hereby affirm that I am devoid of any known financial conflicts of interest or personal affiliations that might have seemed to impact the research presented in this manuscript.
Acknowledgment
I thank two anonymous reviewers for their useful suggestions.
|
[
"BERGER",
"BERNS",
"BOYD",
"DEWEY",
"DIAMOND",
"GUTH",
"HARDIN",
"HUFFMAN",
"ITO",
"ITO",
"KAHNEMAN",
"KANDEL",
"KATCHANOVSKI",
"KHADJAVI",
"MAMADA",
"MAMADA",
"MOFFATT",
"OSTROM",
"OSTROM",
"OSTROM",
"PETERSEN",
"ROTH",
"TANIMOTO",
"TANIMOTO",
"TOBIAS",
"UZAWA",
"UZAWA",
"WANG",
"WEBER",
"YITBAREK",
"YOUNG"
] |
7be267294cd74b1fba438693f0308a02_Objetivos a seguir en la certificación y recertificación en Diagnóstico por Imágenes_10.1016_j.rard.2016.09.001.xml
|
Objectives to pursue in certification and recertification in Diagnostic Imaging
|
[
"Binda, M.C."
] | null |
Education is indisputably the foundation of development, above all when it is continuous, maintains its excellence despite political changes, and has broad social reach. These concepts should be applied in every area of knowledge and, all the more so, in Medicine. A successful educational system must meet two objectives: to guarantee free access to the best universities, and to enable especially gifted students, and those who aspire to become scientists, to find in the university the appropriate environment in which to develop and to turn their knowledge to the benefit of society as a whole 1. The improvement of university education must undoubtedly be accompanied by solid training at primary and secondary school. Students should finish their secondary studies with the certainty that no university can impart knowledge without the student's formal commitment to achieving the fullest professional training. Although it may seem fanciful to analyze the foundations of success in medical education and in basic and applied research in a country that evolves through cyclical periods of crisis and without a state policy on education, having a broad, free structure across the three levels of training raises the question, and the challenge, of the steps to follow in order to achieve medical education of excellence. This must be backed by a state policy and be the fruit of coordinated efforts so that it develops broadly and inclusively throughout the entire country. The Medicine degree is regulated by the State, but given that the practice of the profession can put the health and safety of the population at risk, free tuition alone does not guarantee the training of our physicians. 
This must be accompanied by a minimum course load, basic curricular contents and criteria for the intensity of practical training, established by the Ministry of Education in agreement with the Council of Universities. In 1994, by resolution of the Board of Directors of the Asociación Médica Argentina (AMA), the Recertification Committee (CRAMA) was created, whose function is to organize the recertification of physicians in all the specialties recognized by the national Ministry of Health. CRAMA defined recertification as “the act by which a medical professional previously certified in a recognized specialty voluntarily appears before his or her peers so that they may periodically evaluate his or her work, aptitudes, and ethical and moral qualities, and grant an endorsement that accredits and elevates him or her in his or her professional work” 2. Diagnostic Imaging, as a specialty, is characterized among other things by rapid short-term evolution owing to major technological changes in equipment and in applied research. Consider simply that in the 1960s it was unimaginable that antimatter, such as the positron or anti-electron, would be used in medical diagnosis, and today positron emission tomography (PET) is indicated as a diagnostic possibility in constant advance. These advances, in any case, will be even more effective when accompanied by programs of continuing medical education and training in the various imaging techniques, endorsed by the certification and recertification of professionals, provided that their implementation is mandatory and presents no curricular differences among the country's universities, medical colleges and societies. At present, medical recertification, which takes place every 5 years in the Ciudad Autónoma de Buenos Aires, is not mandatory. 
Although we have a law on the matter, it has not been regulated despite the many years that have passed, so medical certification is left to the will of those specialists who, armed with self-criticism and responsibility toward the patient, recognize that they must improve themselves continuously and that this improvement must be evaluated regularly by their peers. Unlike the rest of the country, in the province of Buenos Aires the certification and recertification of specialists in Imaging has been mandatory by law for many years. The State and its democratically elected representatives are responsible for regulating the law on certification and recertification in the different medical specialties so that it becomes mandatory throughout the country. Leaving medical training to individual decision translates daily into a high risk to the health of the population which, trusting in our knowledge, receives a diagnosis, in this case, by imaging. Work is currently under way to update the Programa Nacional de Garantía de Calidad de la Atención Médica (National Program for Quality Assurance of Medical Care) and to create instruments for its standardization. One of the main objectives is to harmonize the norms and guidelines for certification and recertification across all local and provincial societies, creating a harmonizing body in which all the country's medical societies converge. The final purpose is that all specialist physicians certify their degree and recertify it every 5 years by law. The road is long and full of obstacles, but every day the goal and the steps to follow become clearer. 
These efforts are directed not only at harmonizing the mechanism by working on competencies, but also at reaching, together with the Agencia Nacional de Calidad, the scientific societies and the accredited hospitals, under national health coverage, the agreements necessary so that only certified and recertified physicians may provide healthcare in our country. An up-to-date certification program must include standards of professionalism, continuing medical education, excellence in specialization, and evaluation of medical practice. The objective is for every specialist in Imaging to demonstrate a level of currency and performance on certain points, such as: patient safety, correct interpretation of images, reporting of emergency (on-call) imaging, guidelines for the different radiological practices and standard techniques, and fluid communication with the patient's referring physician 3–5. Professional excellence in the practice of our specialty should rest on continuous learning throughout the entire active professional life, and should be based on a system that can be measured, provide comparative data and conform to standardized practices 6. In this way, certification and recertification would translate into a real benefit for the patient, who deserves the diagnosis recommended by the evidence-based literature 4,7.
|
[
"STRATMANN",
"VIDARENY",
"STRIFE",
"BERQUIST",
"MEDINA"
] |
1e9bd66d5e8e4e64908c295e864777ad_Adapting liposomes for oral drug delivery_10.1016_j.apsb.2018.06.005.xml
|
Adapting liposomes for oral drug delivery
|
[
"He, Haisheng",
"Lu, Yi",
"Qi, Jianping",
"Zhu, Quangang",
"Chen, Zhongjian",
"Wu, Wei"
] |
Liposomes mimic natural cell membranes and have long been investigated as drug carriers due to excellent entrapment capacity, biocompatibility and safety. Despite the success of parenteral liposomes, oral delivery of liposomes is impeded by various barriers such as instability in the gastrointestinal tract, difficulties in crossing biomembranes, and mass production problems. By modulating the compositions of the lipid bilayers and adding polymers or ligands, both the stability and permeability of liposomes can be greatly improved for oral drug delivery. This review provides an overview of the challenges and current approaches toward the oral delivery of liposomes.
|
1 Introduction
Since the discovery of liposomes by Bangham and Horne in 1964 1, the potential of liposomes as drug delivery carriers has been extensively explored via versatile administration routes such as parenteral, oral, pulmonary, nasal, ocular and transdermal routes 2–4. In 1974, AmBisome®, a formulation of amphotericin B, became the first injectable liposome product to be licensed 3,4. Nevertheless, primitive parenteral liposomes have one severe drawback: they are always cleared from blood very quickly and end up in organs and tissues of the reticulo-endothelial system (RES, e.g., liver, spleen, and lung). The clearing occurs by plasma opsonization and subsequent sequestration from circulation 5–8. By pegylation, a process of coating with long-chain polyethylene glycols (PEG), liposomes are camouflaged with layers of hydrophilic coatings to evade RES clearance and achieve long circulation in the body 9–16. The successful marketing of Doxil®, a pegylated liposomal doxorubicin product, represents a milestone in the development of parenteral liposomes 17. Liposomes consist of enclosed vesicles of concentric self-assembling lipid bilayers, commonly composed of phospholipids and cholesterols 1,4,5. According to the structure of the lipid bilayers and the size of the vesicles, liposomes are commonly classified into large unilamellar vesicles (LUV), small unilamellar vesicles (SUV), multilamellar vesicles (MLV) and multivesicular vesicles (MVV) 4,5. While LUV, SUV and MLV are candidate carriers for versatile routes including the oral route, MVV are used for parenteral delivery only. The inner aqueous phase of liposomes is well protected by the lipid bilayers and is able to load hydrophilic entities, whereas the hydrophobic region in the lipid bilayers is able to load hydrophobic entities (Fig. 1). The most remarkable advantages of liposomes are their biocompatibility and safety due to resemblance to biomembranes. 
Moreover, it is easy to modify the liposomal surfaces by conjugation to polymers and/or ligands so as to endow the vesicles with special properties (Fig. 1). See recent reviews for a better understanding of the history and various application aspects of liposomes 2,18–24. Oral delivery of liposomes has a long history as well and can be traced to as early as the late 1970s. It is interesting to see that the initial application of oral liposomes was with the delivery of insulin 25–27, emphasizing the continual challenge in the field of oral drug delivery. Despite the initial ardor, the efficacy of oral liposomes was not reproducible or predictable. For instance, only 54% of the normal rats and 67% of the diabetic rabbits responded to the treatment of oral liposomal insulin 28. More negative results added to the disappointment of using liposomes as oral delivery carriers 29, and there seemed to be a period of quiescence in the 1980s. However, attempts to use liposomes as drug carrier systems for oral delivery resurged in recent years 30,31, thanks to modern modification technologies to enhance liposomal stability and permeation 32–39. By addition of polymer coatings and modulation of liposomal compositions 40–43, both the stability of liposomes in the gastrointestinal tract (GIT) and the trans-epithelial absorption of active components have been significantly improved. It is worth noting that once again the oral delivery of biomacromolecules, especially proteins and peptides, has become a hot topic of research and discussion 44–47. In addition to improved oral bioavailability, the pharmacokinetic and pharmacodynamic profiles are improved as well 48,49. In this review, the status quo will be summarized with emphasis on the challenges and strategies taken to adapt liposomes for oral delivery 50,51.
2 Challenges confronting liposomes as oral drug delivery systems
2.1 Instability
Conventional (i.e., non-modified) liposomes are susceptible to the combined detrimental effects of gastric acid, bile salts and pancreatic lipases in the GIT, all of which lead to reduced concentrations of intact liposomes and payload leakage. Following incubation with artificial intestinal fluid for 120 min, a majority of liposomes show irregular shapes and obviously damaged membranes, whereas only a small proportion of liposomes maintain intact structures 52. Bile salts are able to disrupt the lipid bilayers of liposomes composed of lipids with lower phase transition temperatures, such as phosphatidylcholine (PC) and dimyristoyl phosphatidyl choline (DMPC) 53. Pancreatic fluid, which contains lipolytic enzymes such as lipases, phospholipase A2 and cholesterol esterases 54,55, hydrolyses liposomal phospholipids, thereby disrupting the liposomal structure 55,56. Generally, there are widespread concerns with the physical stability of liposomes in the GIT. For labile biomacromolecules, liposomes are apparently not ideal carrier systems because of the instability of liposomes and the instant degradation of leaked payloads upon disruption of the liposomal structure. However, the situation differs for poorly water-soluble drugs; in this case, the remnants of liposomes can form new mixed micelles, in which the encapsulated drugs are transferred to the new vehicles and transported to the intestinal epithelia for absorption 40,54.
2.2 Poor permeability
Conventional liposomes have poor permeability across the intestinal epithelia because of the relatively large size of the particles and the presence of various epithelial barriers. There are mainly two proposed pathways for the enhancement of oral drug delivery by liposomes. 
The first is via drug release in the gastrointestinal lumen or via transformation of vesicles into mixed micelles, and subsequent permeation of drug molecules across the intestinal epithelia 40. As mentioned above, this approach is apparently not workable for labile biomacromolecules (e.g., insulin) 47,52. The improved absorption of biomacromolecules is apparently via the second pathway, that is, via uptake of intact liposomes by M cells residing in the follicle-associated epithelia (FAE) of Peyer's patches. However, M cell-mediated uptake sets an upper limit on the oral absorption of liposomes 57, because M cells represent only 5% of human FAE and 1% of the total intestinal epithelial cell population 40,47. On the other hand, the rapid secretion and shedding of gastrointestinal mucus significantly restrict the oral absorption of liposomes as well, which are likely trapped in the mucus layers via hydrophobic interaction 58,59. There is so far no direct evidence confirming the transport of intact liposomes across intestinal enterocytes 60.
2.3 Formulation challenges
Although several liposomal formulations (e.g., Doxil®) have been successfully marketed, the production of liposomes is not without challenges. In fact, the mass production of liposomes is largely unsatisfactory due to batch-to-batch variations. Although it may meet the demands for parenteral products, the biggest batch sizes so far are not big enough for oral use, which usually requires higher doses and extended courses of treatment. Owing to the instability of liposomes in aqueous dispersion, there is always a need to formulate liposomes into solid dosage forms. Traditionally, freeze-drying is employed to produce solid liposomal formulations with good reconstituting capacities 61–64. However, freeze-drying technology is less efficient and consumes much time and money. More efficient technologies are desired for the mass manufacturing of solid liposomal products 64–66.
3 Recent advances in modulating liposomes for oral drug delivery
3.1 Stabilization
In view of the poor stability of liposomes during production, storage and transit across the GIT, a series of approaches such as modulation of lipid compositions, surface coating and interior thickening have been explored to stabilize liposomes.
3.1.1 Modulation of lipid compositions
Conventional liposomes are commonly comprised of phospholipids and cholesterols, mimicking the physiological compositions of biomembranes. Although liposomes demonstrate a certain degree of stability both in vitro and in vivo, they are susceptible to the harsh gastrointestinal environment. Liposomes containing phospholipids with phase transition temperatures (Tp) below 37 °C are completely disrupted by bile salts, but this effect is less pronounced for those with Tp higher than 37 °C. In early developmental stages, it is an easy option to improve the physical stability of liposomes by optimizing lipid compositions. By incorporating stearylamine, liposomes are charged positively and are capable of suppressing the digestion of insulin by trypsin 67 and enhancing the hypoglycemic effect 68. Replacing phospholipids or cholesterols with specific lipids or sterols improves the performance of oral liposomes due to enhanced stability in the GIT 26,69. Insulin-loaded liposomes prepared with dipalmitoyl phosphatidylcholine (DPPC) and a soybean-derived sterol mixture exhibit a better hypoglycemic effect than conventional liposomes, which was ascribed by the authors to the increased rigidity of the lipid bilayers 70–73. As a type of surfactant secreted by hepatocytes, bile salts have been considered to be the main factor for the disruption of liposomes in the GIT. Paradoxically, studies revealed that prior incorporation of bile salts into liposomal bilayers stabilized the membranes against the destructive effect of physiological bile salts 74,75. 
It is well accepted that physiological phospholipids and bile salts readily form colloidal mixed micelles, which is the main mechanism for the oral absorption of aliphatic acids and glycerides 44,45,52,76. Bile salts always have a tendency to associate with phospholipids actively, even from the lipid bilayers of plain liposomes, thereby compromising the integrity of the liposomes 44,45. However, the prior incorporation of bile salts in liposomal bilayers offsets the destructive effects of outside bile salts 30,31,40. To date, liposomes containing bile salts, also named bilosomes, have been widely investigated for both oral immunization 47,52 and the oral delivery of poorly water-soluble drugs and biomacromolecules 45,77. Various types of bile salts, including sodium glycocholate (SGC), sodium taurocholate (STC) and sodium deoxycholate (SDC), have been incorporated into liposomes to protect enclosed insulin from enzymatic degradation by pepsin, trypsin and α-chymotrypsin 47,78–81. Better protection of insulin is observed for liposomes containing SGC than for liposomes containing STC or SDC and conventional liposomes 52,81. It is believed that the improved stability of liposomes by bile salts contributes at least partly to the enhanced oral bioavailability of insulin 47,81.
3.1.2 Surface coating
To protect liposomes from the harsh gastrointestinal environment, another workable approach is to coat liposomal surfaces with layers of polymers such as enteric polymers, proteins and chitosans. Enteric coatings are well known to prevent liposomes from disintegration in the stomach, thereby improving absorption as more liposomes survive and are exposed in the small intestine. Liposomes coated with Eudragit L100 enhance the oral bioavailability of alendronate sodium by 12-fold in rats as compared with the commercial tablets 50,82,83. However, in some cases a layer of coating with enteric polymers such as Eudragit S100 does not protect against damage by bile salts 50. 
To this end, a design of liposomes-in-microspheres delivery systems, comprising chitosan-coated liposomes within Eudragit S100 microspheres, was found to be highly effective in resisting the attack by bile salts 82,83. Polysaccharides are another kind of functional coating material used to stabilize liposomes in the GIT 84–87. Arabinoside-loaded liposomes coated with O-palmitoylpullulan (OPP), a polysaccharide derivative, are able to withstand the damage caused by sodium cholate (SC) at concentrations up to 16 mmol/L at pH 5.6 or pH 7.4. Moreover, OPP-coated liposomes showed a reduced release rate at pH 2.0 and 5.6 at 37 °C as compared to uncoated liposomes 84. Polysaccharide-coated liposomes loading bovine serum albumin (BSA) are capable of producing higher levels of serum IgA and IgG in comparison with naked liposomes, indirectly verifying the improved stability of the model drug 84. In addition to OPP, O-palmitoylcurdlan sulfate 85 and O-palmitoylscleroglucan 86 have been utilized to protect liposomes from SC and pancreatin. Well known as a gelling agent 87, pectin has also been studied as a stabilizer for liposomal drug delivery systems 88. Low- and high-methoxylated pectins show improved liposomal stability upon storage without disturbing membrane permeability 89. Among various polysaccharides, chitosan may be the coating material of choice because it is positively charged and readily interacts with the negatively charged liposomal surfaces to ensure firm coating. On the contrary, positive charges should be introduced onto liposomal surfaces to achieve firm coating with negatively charged polymers such as pectins 89 via electrostatic interaction 41. In vitro studies show that chitosan-coated liposomes achieve better protection of the liposomes as well as of the protein payloads in artificial gastrointestinal media. Further observation of enhanced oral bioavailability confirms the effectiveness of coating with chitosan 90,91. 
Moreover, the stability of chitosan-coated liposomes can be strengthened by subsequent cross-linking using β-glycerolphosphate 92. Pegylation, a technique originally developed for extending drug half-life in blood, has also found applications in the oral delivery of liposomes 93. Pegylation of DPPC and PC liposomes significantly enhances the oral bioavailability of recombinant human epithelial growth factor (rhEGF), which was ascribed by the authors to the suppression of enzymatic degradation by coating with PEG 43,69,94–96. Liposomes coated with PEG 2000 or mucin are able to withstand bile salts and improve the stability of encapsulated insulin in the GIT 69,95. In addition to the coating materials mentioned above, there are many other compounds available for the chemical modification of liposomes. For instance, polyelectrolytes perform well to stabilize liposomes loading doxorubicin or paclitaxel 97 by layer-by-layer (LBL) coating in artificial gastrointestinal fluids, with oral bioavailability enhanced by 4–6 fold 98 vs. conventional liposomes. Inorganic materials such as silica and silica nanoparticles 99,100 are among other stabilizers for the oral delivery of liposomes. The formation of layers of protective coatings, as a result of the surface adsorption of silica particles, is believed to contribute to the enhanced liposomal stability 99–101.
3.1.3 Interior thickening
The physical stability of liposomes can also be improved by thickening the interior aqueous phases. Normally, interior thickening is initiated by increasing the viscosity of the interior aqueous phases, or alternatively by reconstitution of lipid bilayers to enclose hydrogel beads upon mixing the beads with liposomes 102. The so-called Supramolecular Biovector (SMBV™), which consists of charged, cross-linked polysaccharide cores surrounded by lipid membranes, was found to be an amiable carrier for proteins 103,104. Another group reported a kind of lipobeads prepared by the self-assembly of lipid bilayers around hydrogel beads, initiated by acrylamine-functionalized lipids tethered to the bead surfaces 105. In vitro evaluation indicated enhanced stability of the lipid bilayers even at temperatures below Tp 106. Interior thickening can also be attained via in situ gelling after the formation of liposomes in response to physical stimuli. UV-induced polymerization within liposomes has been utilized to prepare lipobeads with increased mechanical strength and enhanced stability 107,108. By incorporating a reverse-phase thermosensitive in situ gel into the aqueous phase of liposomes, interior thickening was achieved when the liposomes were heated to a temperature above the gelling temperature (Tgel) (Fig. 2) 109,110. Tgel can be tailored within the range of room temperature and physiological temperature (37 °C) by adjusting the ratio of the thermosensitive gel (poloxamer 407/poloxamer 188). Therefore, the liposomes were prepared under conditions similar to conventional liposomes at ambient temperature. After administration, the liposomal interior gelates in response to the increased temperature. Further study showed that interior gelling improves physical stability and protects the lipid bilayers against membrane destabilizers (Fig. 2) 109,110. A significantly prolonged elimination time after intravenous injection suggests enhanced liposomal performance in vivo 110. Interior thickening improves some of the physicochemical properties of liposomes, such as increased rigidity of the lipid bilayers, modified shape, improved physical stability and sustained release of the payloads. However, the utility of these liposome formulations for the oral delivery of liposomes awaits experimental validation 109.
3.1.4 Other strategies
In addition to the methods mentioned above, other strategies have also been utilized to improve the stability of liposomes. 
For example, novel double liposomes, prepared by filtering preformed inner liposomes through a glass filter painted with lipid bilayers, demonstrate further improved stability. The outer bilayers serve as protective coatings against destruction by intestinal enzymes; as a result, significantly enhanced hypoglycemic (insulin) 111 or hypocalcemic (salmon calcitonin) 112 effects were achieved. In another study, liposomes were embedded into gelatin matrices to stabilize the lipid bilayers and attain controlled release of the vesicles 113, although no in vivo data were provided.
3.2 Absorption enhancement
3.2.1 Enhanced absorption due to mucoadhesion
Mucoadhesion endows liposomes with prolonged GIT residence, allowing prolonged contact of liposomes and/or their payloads with the intestinal epithelia and thereby enhancing the opportunities for oral absorption of either the liposomal vesicles or the payloads. Mucoadhesion is attainable through coating with polymers or modulating surface charges. Positively charged liposomes gain not only mucoadhesion but also resistance to enzymatic destruction, and thus improve the oral bioavailability of the payloads 42. Coating liposomes with mucoadhesive polymers such as polysaccharides seems to be one of the most promising approaches to achieve mucoadhesion 26. Pectins are one class of preferable polysaccharides commonly used 41,114–116. Pectin-coated liposomes show adhesion to mucin, with high-methoxylated pectin-coated liposomes performing best 115,117. In another study, mucoadhesive pectin–liposome nanocomplexes (PLNs) gave better intestinal absorption of calcitonin than uncoated liposomes 115. A high density of fluorescently labeled PLNs, observed by confocal laser scanning microscopy, was found adhering to the intestinal epithelia and remained there for a prolonged duration, suggesting strong mucoadhesion 41.
As a natural cationic polysaccharide derived from chitin via deacetylation, chitosan represents one of the most popular coating materials for oral liposomes due to its low toxicity, biocompatibility, biodegradability and mucoadhesion. Various chitosan derivatives are reported to improve the mucoadhesive properties of liposomes by either chemical coupling or physical coating 118. Insulin-loaded liposomes coated with mucoadhesive polymers such as chitosan, polyvinyl alcohol and poly(acrylic acid) show a better and more prolonged hypoglycemic effect than uncoated ones 119–121. The type of chitosan also influences the degree of mucoadhesion and thereby the in vivo behavior; low-molecular-weight chitosans show stronger mucoadhesion 122. A comparison of different materials confirms that chitosan is the best coating material for liposomes, mucoadhesion following the order chitosan-coated liposomes ≥ carbopol-coated liposomes > positively charged non-coated liposomes > negatively charged non-coated liposomes 114. Combinatory use of chitosan with other mucoadhesive materials such as tocopherol polyethylene glycol succinate (TPGS) reinforces mucoadhesiveness 42,123. Apart from polysaccharides, many other mucoadhesive polymers are also used to coat liposomes. Coating with PEG and mucin not only improves the stability of liposomes but also extends their residence time in the GIT, which together contribute to the hypoglycemic effect of insulin 69. In contrast to the mechanism of prolonged circulation of pegylated nanocarriers following intravenous administration, the extended residence of PEG-coated liposomes in the GIT is due to deep penetration of the PEG chains into the mucus layers lining the GIT wall and their interweaving with mucin. The extended retention in the GIT thus strengthens the uptake of the vesicles by M cells and the subsequent efficacy of oral immunization 43,95.
3.2.2 Enhancer-facilitated absorption
Various absorption enhancers have been studied to facilitate the oral absorption of liposomal payloads. TPGS 400, cetylpyridinium chloride and cholylsarcosine, in combination with stearylamine, were confirmed to enhance the oral absorption of liposomal fluorescein isothiocyanate (FITC)–dextran, a hydrophilic macromolecule 46. Tween 80, a surfactant commonly used as a solubilizer, enhances the oral bioactivity of insulin when incorporated into liposomes at a level of 1% 124. In a comparative study, cetylpyridinium chloride performed better in enhancing the oral bioavailability of human growth hormone than several other absorption enhancers, including d-α-TPGS 400, phenylpiperazine, sodium caprate and octadecanethiol 125. Bile salts are physiological surfactants that play a very important role in lipid absorption. By incorporating bile salts into the lipid bilayers of liposomes, the oral bioavailability of a variety of hydrophilic and lipophilic drugs has been significantly enhanced 78. Owing to their structural resemblance to cholesterol, bile salts can be easily incorporated into liposomal membranes to form bilosomes. Among the bile salt family, SC, STC, SDC and SGC are popular candidates used in bilosomes for enhancement of oral absorption 79,126. The oral bioavailability of cyclosporine A was significantly enhanced by bilosomes in comparison with conventional liposomes. The enhancement is probably due to facilitated absorption by SDC rather than improved release, because drug release from the liposomes is very slow 127. Non-ionic surfactants are also used as absorption enhancers. Tween 80-reinforced liposomes composed of SPC and cholesterol significantly enhanced the absorption of (+)-catechin following oral administration, with increased area under the curve (AUC) and prolonged mean residence time (MRT) compared to the solution control.
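AUC and MRT, the exposure metrics quoted in several of the studies above, are standard non-compartmental quantities: the AUC is obtained by trapezoidal integration of the concentration–time curve, and MRT = AUMC/AUC, where AUMC is the area under the t·C(t) curve. A minimal sketch, using synthetic concentration–time data rather than values from any cited study:

```python
# Non-compartmental AUC and MRT from concentration-time data.
# Illustrative only: the time points and concentrations below are synthetic,
# not taken from the (+)-catechin study cited in the text.

def auc_trapezoid(t, c):
    """Area under the concentration-time curve by the linear trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(t, t[1:], c, c[1:]))

def mrt(t, c):
    """Mean residence time = AUMC/AUC, where AUMC is the area under t*C(t)."""
    aumc = auc_trapezoid(t, [ti * ci for ti, ci in zip(t, c)])
    return aumc / auc_trapezoid(t, c)

t = [0, 0.5, 1, 2, 4, 8]           # sampling times (h)
c = [0, 1.2, 1.8, 1.5, 0.8, 0.2]   # plasma concentrations (ug/mL)
print(f"AUC = {auc_trapezoid(t, c):.2f} ug*h/mL, MRT = {mrt(t, c):.2f} h")
```

A formulation that raises the AUC and prolongs the MRT relative to a solution control, as reported for the Tween 80-reinforced liposomes, would show up directly in these two numbers.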
Enzyme inhibitors are often used in combination with enhancers to improve the absorption of liposomal biomacromolecules. This was demonstrated by the significantly enhanced hypocalcemic effect of calcitonin when chitosan conjugated with the inhibitor aprotinin was used to coat the liposomes 119,129.
3.2.3 Polymer-facilitated absorption
Besides enhancing liposomal stability and mucoadhesion, polymers also enhance intestinal permeability. By opening tight junctions 130, N-trimethyl chitosan has become a preferable polymer to coat liposomes for oral delivery of various ingredients 36,39,131–133. Another chitosan derivative, methylated N-(4-N,N-dimethylaminobenzyl) chitosan, was applied to coat FITC-conjugated liposomes to enhance the permeability of the model protein BSA across Caco-2 cell monolayers. The combined use of a cell-penetrating peptide such as oligoarginine further enhanced the efficacy of the chitosans 91. It should be noted that opening epithelial junctions with these agents can have both positive and negative effects; the latter include the risk of concurrent entry of toxins along with the payload. The gains and losses of using chitosans still await systematic evaluation 133. The trapping capacity and fast turnover of mucus are known factors that impede the permeability of liposomes across the mucus layers 60. Recently, mucus-penetrating polymers have been used to coat liposomes to facilitate permeation. For instance, liposomes coated with a chitosan-thioglycolic acid 6-mercaptonicotinamide conjugate (an S-protected thiomer chitosan with mucus-penetrating capabilities) achieved an 8.2-fold enhancement of physiological bioavailability (areas above the curves of the blood calcium levels) of calcitonin following oral administration in rats 134. Pluronic F127-coated liposomes were reported to enhance the oral absorption of lipophilic ingredients due to their intestinal mucus-penetrating properties.
A 1.84-fold enhancement of the AUC0–t of cyclosporine A-loaded Pluronic F127-coated liposomes was seen following oral administration vs. chitosan-coated liposomes 135–137. Additionally, polymers with polyethylene oxide tags, such as Pluronic P85 and PEGylated G5 PAMAM dendrimer, inhibit the P-glycoprotein efflux system and enhance overall oral bioavailability when used as coating materials for liposomes 136,138.
3.2.4 Ligand-mediated targeting to epithelial cells
To overcome the poor permeability of conventional liposomes, ligands have been investigated to enhance intestinal uptake by epithelial cells via receptor-mediated endocytosis. Since most proteins and lipids in the cell membranes of the GIT are glycosylated, lectins have been widely utilized to modify liposomes for oral immunization or oral drug delivery 139–141. This is possible due to the specific recognition of and binding to glycans by lectins. Wheat germ agglutinin (WGA)-modified liposomes containing insulin achieved superior control of blood glucose as compared with Ulex europaeus agglutinin 1 (UEA 1)-modified ones 142. However, these results are not consistent with the findings of another group, who reported that UEA 1 performed better than WGA 141. By taking advantage of the interaction between lectins and glucans, mannose derivatives were applied to modify liposomes to target mannosyl receptors expressed on antigen-presenting cells (APCs) 139. Antibodies have been attached to liposomes to enhance gastrointestinal permeability as well. In this case, IgA-coated liposomes containing ferritin showed enhanced immune responses 143,144. The authors ascribed the enhancement to increased uptake via M cells, but did not mention the relevant receptors. In a recent work, Fc fragments were used as ligands to modify liposomes for active targeting to neonatal Fc receptors. These liposomes showed significantly improved hypoglycemic effects of insulin 145.
In view of the instability of peptide ligands in the GIT, non-peptide ligands such as folic acid (FA) 146 and biotin 147,148 are preferred for liposomal surface modification. FA-modified, polymer-stabilized multilayer liposomes gave approximately 20% relative bioavailability of insulin following oral administration vs. subcutaneous administration 149. Similarly, functionalization of bilosomes with glycomannan improved liposomal targeting and stability 150. In addition to the ligands mentioned above, ligands employed in the oral delivery of other types of nanoparticles 151–153 can also be utilized to modify liposomes.
3.3 Mass production
The prospect of developing liposomes as oral drug delivery systems has motivated investigations into the mass production of liposomes on an industrial scale. On a laboratory scale, liposomes can be prepared by a variety of methods, such as thin-film dispersion, reversed-phase evaporation, detergent dialysis, solvent injection and a few others 154. So far, these methods are only successful for small-scale production of liposomes. Problems encountered upon scale-up include poor size distribution, poor batch-to-batch reproducibility, physicochemical instability, residues of organic solvent and high production costs 155. Considerable effort has been made in recent decades to overcome these problems. A continuous high-pressure extrusion apparatus was developed to prepare liposomes of uniform size on a one-liter scale, although the leakage of drugs upon extrusion is seen as a drawback of this method 156. A high-speed dispersion method has been developed to prepare liposomes with high physical stability and encapsulation efficiency 157. One concern with this technique may be the production of smaller-sized liposomes, ranging from 280 to 350 nm 157. High-pressure homogenization/extrusion has been applied to downsize large liposomes containing plasmid DNA with commercially available instrumentation.
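The ~20% figure quoted for the FA-modified multilayer liposomes is a relative bioavailability, i.e., the dose-normalized AUC of the oral formulation divided by that of the parenteral reference. A small illustrative calculation (all numbers hypothetical, not taken from the cited study):

```python
def relative_bioavailability(auc_test, dose_test, auc_ref, dose_ref):
    """Dose-normalized relative bioavailability:
    F_rel = (AUC_test / dose_test) / (AUC_ref / dose_ref)."""
    return (auc_test / dose_test) / (auc_ref / dose_ref)

# Hypothetical oral liposomal insulin vs. a subcutaneous reference dose.
f_rel = relative_bioavailability(auc_test=12.0, dose_test=20.0,
                                 auc_ref=30.0, dose_ref=10.0)
print(f"relative oral bioavailability: {f_rel:.0%}")  # → 20%
```

Dose normalization matters here because oral studies of poorly absorbed biomacromolecules typically use much larger doses than the injected reference.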
Although this methodology has the capability for large-scale continuous (1–1000 L/h) production 158, drug leakage and the high production costs of this complex process restrict its industrial application. The ethanol injection technique is probably the present method most suitable for implementation at industrial scale, owing to its simplicity and safety. With this method, the size distribution is controllable by modulating the aqueous phase temperature in large-scale production 159. A novel ethanol injection method using a microengineered nickel membrane was recently developed 160. Depending on the size of the membrane, this technique can easily be scaled up to very large scales. Moreover, the size and size distribution of the liposomes can be controlled via the oscillating membrane system during scale-up 160. Another scalable production technology based on ethanol injection produces liposomes regardless of production scale under fully sterile conditions 161. Economic evaluation of liposome production by ethanol injection suggests economic feasibility for a plant with a daily production capacity of 288 L of liposomal suspension 162. Owing to the physical instability of liposomes in aqueous media, storage problems must also be considered. Therefore, there has been consistent effort to prepare liposomes as solid dosage formulations. Spray-drying and freeze-drying are commonly used to address this problem. However, factors such as the high cost of freeze-drying and the heat lability of the payloads in spray-drying limit industrial applications. In contrast, proliposomes are an alternative for the mass production and storage of liposomes, owing to their solid-state formulations and simplicity of production. Preliminary evaluation of proliposomes containing amphotericin B or cyclosporine A 163,164 demonstrated promising features for large-scale industrial applications.
More importantly, the final dosage forms of proliposomes resemble conventional oral solid dosage forms and can be easily adapted to conventional manufacturing facilities and processes. This was demonstrated by BSA proliposome tablets coated with Eudragit L100 that could be completely reconstituted into liposomes 165. In spite of the progress in mass production of liposomes, only a few parenteral, and no oral, liposomal products have been successfully marketed. Significant impediments remain to successfully developing liposomes as oral drug delivery carriers.
4 Mechanisms of oral absorption of liposomes
Despite the advances outlined above, the mechanisms of oral delivery of liposomes have yet to be elucidated. To begin this topic, it is important to outline the general fate of liposomes as well as their embedded drug payloads following oral administration (Fig. 3). Orally administered liposomes are partially destroyed upon exposure to gastric acid. Although some of the payload drug is released, other liposomes and their cargo survive. While free drugs follow their own fate, the surviving liposomes are emptied from the stomach and transit into the small intestine, where another fraction is destroyed by intestinal surfactants and enzymes 40,47,52. Liposomes surviving this step penetrate the mucus layers and make close contact with the intestinal epithelia 53–56. There is still the possibility of destruction of liposomes at this stage, as well as release of the embedded drugs. However, the fraction of liposomes that survives the whole digestion process is able to be absorbed as integral vesicles via the M cell-to-lymph pathway 47,61. Liposomes may be taken up by enterocytes as well, but their fate following this step is unknown 47. Several possible mechanisms are discussed below.
4.1 Enhanced gastrointestinal stability
As mentioned above, liposomes are prone to degradation in response to the combined effects of gastric acid, bile salts and pancreatic lipases.
Degradation of liposomes leads to leakage of the payloads, which in turn leads to inactivation or degradation of labile drugs (e.g., peptides and proteins). Leakage also causes precipitation of lipophilic ingredients, thus decreasing the total fraction available for oral absorption. Many studies show that enhancing the stability of liposomes or their payloads significantly improves oral bioavailability. In a sense, improving stability means enhancing the survival rate of liposomes and thereby the opportunities for uptake by the intestinal epithelia. Several strategies have been applied to enhance the stability of liposomes, and the underlying mechanisms have been partly elucidated. For example, phospholipids with a higher Tp endow liposomes with rigid membranes at physiological temperature, and thus help them resist the gastrointestinal destabilizing factors. Incorporation of bile salts improves the flexibility of the lipid biomembranes and helps them withstand the detrimental effects of bile acids in the GIT 166–168. Imaging evidence shows that bilosomes surviving the gastrointestinal environment can be absorbed as intact vesicles 45,52. By exterior coating, the liposomal membranes are separated from the harsh environment in the GIT by the steric hindrance induced by polymers or polymer-formed water layers 169, protecting the membranes from the influence of gastric acid 69, bile salts 50,69,82 and pancreatic lipases 69,84. Moreover, enzyme inhibitors can stabilize the proteins released from liposomes by inhibiting various enzymes in the GIT 87,101,114,119. There is currently disagreement about whether the payloads are released before absorption or whether the liposomes are absorbed as integral vesicles. In the first case, the payloads, such as proteins, are released in the gastrointestinal lumen, and inhibitors must be used concomitantly to suppress enzymatic degradation 99,119.
Alternatively, uptake of intact liposomes via clathrin-dependent endocytosis, caveolae-dependent endocytosis, macropinocytosis or fusion may provide routes for the oral absorption of liposomes 169. Abundant evidence indicates that free insulin without concomitant use of enzymatic inhibitors elicits no hypoglycemic efficacy 132. Our previous work validating the transcellular transit of bilosomes also provides a reference for trans-enterocytic internalization of oral liposomes 47.
4.2 Mucoadhesion
It is logical to assume that mucoadhesion of liposomes to the intestinal epithelia prolongs the exposure of the vesicles in the small intestine (the ideal site for oral absorption) and enhances the opportunities for oral absorption. Polymers such as polysaccharides 41,116,132, PEGs 43 and carbopols 42 are good coating materials to improve the mucoadhesion of liposomes. Mucoadhesion of various polymers is mainly due to the ionic interaction between positively charged polymers and negatively charged constituents (i.e., sulfonic and sialic acid residues) of the mucus layers. Furthermore, disulfide bridges form between thiolated polymers and cysteine-rich subdomains of mucus glycoproteins 39,116,132, and the polymers interpenetrate within the mucus 118,134. Mucins, a family of glycoproteins, have generally been used to evaluate the mucoadhesion of polymer-coated liposomes in vitro 43,92, as mucins are largely responsible for the viscoelastic and adhesive properties of mucus. There are also ex vivo 116,132,170 and in vivo 42,92,118 models for this purpose. Following oral administration of mucoadhesive polymer-coated liposomes, a prolonged elimination half-life 132 and extended pharmacological action 39 of the payloads have been observed, which is ascribable to prolonged drug residence due to mucoadhesion.
It is speculated that mucoadhesion increases the partition of liposomal payloads from the gastrointestinal lumen to the epithelial wall in comparison with free drugs, ultimately resulting in enhanced passive permeation across the intestinal epithelia. A mechanism was proposed for insulin-loaded 41,43,132 or calcitonin-loaded 122 chitosan-coated liposomes suggesting that the drugs are released in the mucus layers upon interaction with mucin and degradation of the liposomes, and are subsequently absorbed without enzymatic degradation. Other studies ascribe the enhanced oral absorption to adherence of the polymers to the mucus layers and prolonged retention therein, facilitating penetration of liposomes and payloads across the intestinal epithelial cells 41,42,116.
4.3 Facilitated translocation across the mucus layers
The intestinal permeability of liposomes is known to be restricted by the trapping and fast turnover of the mucus layers. The turnover time of the mucus layers is supposed to be a limiting factor that determines the transit time of mucoadhesive liposomes. Considering that the intestinal mucin turnover time is between 50 and 270 min, mucoadhesive liposomes are not expected to adhere to the mucus for more than 4–5 h 171, a factor that greatly limits the efficacy of mucoadhesive polymer-coated liposomes. Therefore, facilitating mucus penetration potentially extends the residence time of liposomes in the mucus, thereby increasing the oral absorption of liposomes and their payloads. A series of polymers possessing mucoadhesive properties have been utilized to coat liposomes to render them mucus-penetrating instead of mucus-entrapped 172. Pluronic F127 has very good mucus-penetrating ability and has been used to modify liposomes for oral drug delivery 41,134. It is reported that facilitated penetration of the mucus layers promotes direct contact of liposomes with the epithelia, and thus improves liposomal uptake by caveolae- or clathrin-mediated endocytosis 136.
The mucus-penetrating ability is thought to be attributable to the PEG chains of Pluronic F127 on the surface of the liposomes, which ease the hydrophobic and electrostatic interactions of liposomes with mucins 135,137. Besides liposomes, PEG modification has also been used for mucus-penetrating polymeric nanoparticles 136,173,174.
4.4 Enhanced permeation across the enteric epithelia
The oral bioavailability of liposomes is limited by the poor intestinal permeability of both the vesicles and the payloads, especially biomacromolecules. Incorporation of absorption enhancers along with polymer coatings has been shown to efficiently enhance permeation across the enteric epithelia. For small-molecular-weight drugs, the effects and mechanisms of absorption enhancers are clear. However, enhancers of the absorption of integral liposomes may act by different mechanisms. Carrier-mediated transmembrane absorption 175 and penetration through intercellular regions 128 have been proposed for the enhanced oral absorption of deformable liposomes containing surfactants. Another in vitro study using Caco-2 cell models shows that some bioenhancers incorporated into liposomes may act via interference with the cellular lipid bilayer structure, which leads to facilitated uptake of payloads or higher fusion affinity of liposomes with cell membranes 124,132. The opening of tight junctions to facilitate paracellular absorption of drugs is another potential mechanism. Furthermore, some absorption enhancers also enhance the oral bioavailability of liposomal payloads by forming lipophilic ion-pair complexes with various organic cations, which increases the permeability of the cations across biological membranes 79. It is worth noting that many absorption enhancers, such as bile salts, act via multiple rather than single mechanisms 79,127,128. Polymer coatings enhance the permeability of liposomal payloads through epithelial cells as well.
Chitosans and their derivatives are unique polymers widely investigated as coatings on liposomes for oral delivery 91,133. The interaction of chitosan with cellular membranes is reported to initiate a structural reorganization of tight junction-associated proteins, thus facilitating paracellular transport of hydrophilic macromolecules 39,91,133. However, the majority of mechanistic studies on chitosan-coated liposomes have been carried out in Caco-2 models 176,177. Moreover, P-gp, a multidrug transporter, is responsible for the efflux of various drug substrates, and P-gp inhibition represents another potential mechanism for enhancement of the oral absorption of liposomal payloads 36,124,138,178,179.
4.5 Ligand-mediated endocytosis
Inspired by the fact that some nutrients are absorbed via active absorption, liposomes can be modified with nutritional ligands to achieve active targeting to specific receptors in the enteric epithelia. Ligands are able to further enhance the cellular uptake and trans-epithelial transport of liposomes and thus improve oral absorption. The ligand–receptor interaction probably serves two functions: receptor-mediated transport, and accumulation of liposomes at the sites of absorption. The former comprises the mechanisms of pinocytosis and phagocytosis, mainly restricted to M cells. The latter refers to the ligand–receptor interaction that achieves adherence and accumulation of liposomes at the site of absorption, thus facilitating absorption of the payloads if they are meant to be released there. In general, receptor-mediated pinocytosis occurs by clathrin-mediated endocytosis (CME) or caveolae-mediated endocytosis (CvME) 180. Compared to CME, CvME does not involve lysosomal biodegradation. Therefore, liposomes exploiting CvME may be advantageous for the oral delivery of enzyme-sensitive drugs.
Size seems to be an important factor that determines the pattern of cellular internalization via either CME or CvME 180,181. Significantly enhanced oral absorption has been reported for liposomes modified with FA 149, biotin 147,148, lectins 182 and mannose 140,168. FA and biotin interact with their own receptors, both of which are widely expressed by the intestinal epithelia, to improve liposomal uptake via receptor-mediated endocytosis 143,144. Moreover, CME rather than CvME may be an important route for this endocytosis, as confirmed by the use of endocytosis inhibitors (Fig. 4) 147–149. Lectins interact with the specific glycosylation patterns expressed on M cells, absorptive cells, or both to enhance liposomal uptake 149. APCs in the GIT, including the macrophages and dendritic cells (DCs) that are the major APCs present in the vicinity of Peyer's patches, abundantly express mannose receptors (also called C-type lectins) and can thus be utilized as target cells for oral liposomes 139,140. It is worth mentioning that phagocytosis plays an important role in receptor-mediated endocytosis in M cells and in APC targeting. In spite of its high efficacy, receptor-mediated endocytosis may not be the sole mechanism for the enhanced oral absorption of ligand-modified liposomes 143,183. Accumulation of liposomes at the sites of absorption and sustained release of the payloads prior to absorption contribute to enhanced oral absorption as well 150.
4.6 Uptake by M cells
M cells are specialized epithelial cells located in the FAE of Peyer's patches. They are able to transport a broad range of particles, such as bacteria, viruses and antigens, from the intestinal lumen to the underlying lymphoid tissues. Despite the small population of M cells, liposomal absorption through the M cell pathway has many advantages, including less glycocalyx, reduced levels of membrane hydrolases, few lysosomes and high endocytosis capability 184.
Furthermore, M cells are the cells least protected by mucus in the enteric epithelia and the most exposed to chyme, because M cells do not secrete mucus. Therefore, M cells are easily accessible to liposomes via mechanisms of adsorptive endocytosis, fluid-phase endocytosis and phagocytosis 150. It has been shown that the M cell pathway contributes to the total oral absorption of liposomes 185, and that the liposomal surface charge influences the efficiency 57,186. In addition, prolonged residence of liposomes in the GIT increases the opportunity for uptake by M cells 187, which partly explains the contribution of liposome stabilization to enhanced oral absorption 188,189. Polymer-coated liposomes can be transcytosed by M cells as well, owing to prolonged contact with the intestinal epithelia 69,95. To further increase oral absorption, ligands such as lectins 190 have been utilized to modify liposomes to target M cells, as mentioned above. In conclusion, the M cell pathway has been shown to be an important route for the oral absorption of liposomes 141,168.
5 Conclusions and perspectives
Despite the growing number of investigations on the oral delivery of liposomes, essential breakthroughs are still needed to develop and market these products for clinical use. A bottleneck in the development of oral liposomes is the poor understanding of the absorption mechanisms. Following transit from the stomach to the small intestine, liposomes are gradually broken down. The drug payloads can be released immediately into the gastrointestinal lumen, or be transferred into secondary carriers such as mixed micelles and transported to the intestinal epithelia for absorption. This represents the first mode of drug absorption.
As for labile biomacromolecules, released fractions are degraded quickly and will not be absorbed; only liposomes that survive the gastrointestinal environment and manage to penetrate the mucus layers can reach the intestinal epithelia and be absorbed together with their payloads. To enhance the oral absorption of liposomes as well as their payloads, the initial challenge is to maintain the integrity of the liposomes and prolong their gastrointestinal residence, thereby enhancing penetration of the mucus layers. Recent advances focus on modulating the composition of the lipid bilayers or modifying the liposomal surfaces with polymers or ligands to tune the in vivo fate of liposomes after oral administration.
Acknowledgements
This work was financially supported by the National Natural Science Foundation of China (81573363 and 81690263) and the National Key Basic Research Program (2015CB931800).
|
[
"BANGHAM",
"TORCHILIN",
"GREGORIADIS",
"GREGORIADIS",
"GREGORIADIS",
"MOGHIMI",
"YAN",
"PALCHETTI",
"TANG",
"PEREIRA",
"HUANG",
"SIGNORELL",
"BUNKER",
"SUK",
"LIU",
"GRIFFIN",
"BARENHOLZ",
"WEISSIG",
"XING",
"DARAEE",
"NOBLE",
"CARITA",
"AGRAWAL",
"KAPOOR",
"PATEL",
"HASHIMOTO",
"DAPERGOLAS",
"HE",
"ARRIETAMOLERO",
"CHIANG",
"CHIANG",
"TANG",
"SONG",
"LIU",
"VERMA",
"CHEN",
"UHL",
"JENSEN",
"CHEN",
"WU",
"THIRAWONG",
"TAKEUCHI",
"IWANAGA",
"CONACHER",
"SHUKLA",
"CHOUDHARI",
"NIU",
"BOZZUTO",
"YAMANO",
"HOSNY",
"YANG",
"HU",
"LIU",
"TIAN",
"KOKKONA",
"COHN",
"SHUKLA",
"LOPES",
"GIANNASCA",
"ENSIGN",
"GUAN",
"VANDENHOVEN",
"WANG",
"CHEN",
"KANNAN",
"ALI",
"RICHARDS",
"KATO",
"IWANAGA",
"PARMENTIER",
"CUI",
"MURAMATSU",
"PARMENTIER",
"BIRRU",
"ANDRIEUX",
"ARAFAT",
"ABURAHMA",
"GUAN",
"SONG",
"AYOGU",
"NIU",
"BAREA",
"BAREA",
"SEHGAL",
"VENKATESAN",
"LEE",
"CARAFA",
"WILLATS",
"SMISTAD",
"NGUYEN",
"KOWAPRADIT",
"MANCONI",
"HOFFMAN",
"PATEL",
"MINATO",
"DAEIHAMED",
"JAIN",
"JAIN",
"DWIVEDI",
"LI",
"MOHANRAJ",
"KAZAKOV",
"DEMIGUEL",
"VONHOEGEN",
"NG",
"BUCK",
"KAZAKOV",
"PETRALITO",
"ZHANG",
"ZHANG",
"KATAYAMA",
"EBATO",
"PANTZE",
"THONGBORISUTE",
"KLEMETSRUD",
"HAN",
"NGUYEN",
"GRADAUER",
"WERLE",
"MANCONI",
"SUGIHARA",
"TAKEUCHI",
"SHAO",
"PARMENTIER",
"PARMENTIER",
"JALALI",
"DEGIM",
"CHEN",
"HUANG",
"BENEDIKTSDOTTIR",
"CAO",
"HUANG",
"HUANG",
"GRADAUER",
"ZHU",
"CHEN",
"LI",
"ZHOU",
"CHEN",
"LI",
"GUPTA",
"ZHANG",
"WANG",
"WANG",
"ZHOU",
"PRIDGEN",
"ANDERSON",
"ANDERSON",
"ZHANG",
"AGRAWAL",
"JAIN",
"DESRIEUX",
"ZHANG",
"PATIL",
"SHUKLA",
"SCHNEIDER",
"PONS",
"PUPO",
"JUSTO",
"LAOUINI",
"WAGNER",
"JUSTO",
"SINGODIA",
"KARN",
"TANTISRIPREECHA",
"HAN",
"SCHUBERT",
"WERLE",
"NIU",
"RESCIA",
"BERNKOPSCHNURCH",
"LEHR",
"WANG",
"YUAN",
"GANEMQUINTANAR",
"THANOU",
"ZAMBITO",
"WANG",
"MA",
"HILLAIREAU",
"REJMAN",
"ZHANG",
"JAIN",
"DESRIEUX",
"BUDA",
"LING",
"TOMIZAWA",
"ARAMAKI",
"ROGERS",
"CHANNARONG"
] |
afcae348ac5d4d3ab0e1e16c6e15b5fd_Groundwater Use Habits and Environmental Awareness in Ca Mau Province Vietnam Implications for Susta_10.1016_j.envc.2023.100742.xml
|
Groundwater Use Habits and Environmental Awareness in Ca Mau Province, Vietnam: Implications for Sustainable Water Resource Management
|
[
"Pham, Van Cam",
"Bauer, Jonas",
"Börsig, Nicolas",
"Ho, Johannes",
"Vu Huu, Long",
"Tran Viet, Hoan",
"Dörr, Felix",
"Norra, Stefan"
] |
The Vietnamese Mekong Delta, including Ca Mau province (CMP), is seriously affected by land subsidence. Groundwater over-extraction is considered to be a major driver of this process. To address the reduction of groundwater (GW) extraction as a potential counter-measure against further subsidence, this study focuses on understanding the importance of GW in people's lives and their water use habits, as well as their awareness of current environmental problems in Ca Mau. Therefore, GW sampling campaigns and surveys were conducted in all 9 districts of Ca Mau province in 2019 and 2020. The analyzed water samples showed a connection with information from the questionnaires and created a general picture of water use habits. GW plays an important role in people's lives; it is used for washing, cooking, drinking and other activities. People use GW for different purposes depending on their perception of water quality. For important and directly health-related purposes, such as cooking or drinking, people are prepared to treat water more carefully or to choose an alternative water resource. The analytical approach of evaluating results from general to detailed viewpoints helped to dig deeper into people's stories and to explain the research results through their behavior in each situation. When people are dependent on GW and have no option to use alternative water resources, the importance of GW in their lives increases and their awareness of GW over-extraction decreases. If people have another water source to use, such as tap water (TW), habits of using GW change. This supports the idea that a potential alternative water supply would reduce people's dependence on GW and protect GW from over-exploitation. Besides, people in Ca Mau are not very aware of land subsidence or of the causes of environmental problems. Therefore, raising people's awareness through well-designed education campaigns should be strongly considered.
|
1 Introduction Vietnam is one of the areas most threatened by sea level rise (Hens et al., 2018) as well as by climate change and the related intensity of natural disasters (MONRE, 2016; Oxfarm, 2008). The Vietnamese Mekong Delta, including Ca Mau province (CMP), lies at the Lower Mekong River and forms the southern tip of Vietnam. The delta has an extremely low mean elevation above sea level of around 0.8 m (Minderhoud et al., 2019, 2017). In addition, it faces a great number of environmental and sustainability challenges during the twenty-first century: a decrease of sediment supply from its catchment due to upstream dams, saltwater intrusion from the sea, sea level rise and flooding, as well as significant land subsidence (Allison et al., 2017; Tran et al., 2021). Human activities and climate change both affect saltwater intrusion into GW systems in the Mekong Delta (Han et al., 2021). The delta has to deal with salinity intrusion in the dry season and flooding during the rainy season. For example, in the dry season of 2015/2016, eight provinces in the Mekong Delta declared a state of emergency due to drought and salinization (Bäumle, 2017). In recent years, rapid socio-economic development and population growth have increased the demand for freshwater, leading to unplanned over-exploitation of GW, which causes serious problems in the Mekong Delta in general and on the Ca Mau Peninsula in particular (Friedrich et al., 2008; Van, 2019). Water security in the region is therefore seriously endangered by a decrease in freshwater quantity and quality, caused mainly by salinization, pollution and over-extraction (Ha et al., 2018). GW over-exploitation, which leads to an average decline of hydraulic heads of around 30 cm per year, potentially plays an important role in ongoing land subsidence (Erban et al., 2014).
Land subsidence in the Mekong Delta undermines the sustainability of the area (Gustafson et al., 2018; Di Giusto et al., 2021). The Vietnamese Mekong Delta is undergoing an agricultural transition from mono-culture (rice farming) to mixed production such as shrimp-rice systems in order to become more sustainable (Nguyen et al., 2021). CMP is the shrimp basket of Vietnam, with a high demand for GW for farming, and the export of these goods is of utmost importance for Vietnam's economy. However, Ca Mau's groundwater resources are heavily affected by saltwater intrusion and declining hydraulic heads, resulting in a lack of freshwater for an estimated 95,600 households (UN, 2020). Thus, this region urgently needs the identification, development and implementation of adapted relief measures to save it from complete inundation by the sea (Minderhoud et al., 2020). Bauer et al. (2022) described the challenge of the Mekong Delta as "a progressive loss of land and freshwater". One major countermeasure to mitigate land subsidence might be to stop or significantly reduce GW extraction from deep, confined aquifers and to switch to alternative water resources, such as surface water from rivers and channels, rainwater, water pumped from the Mekong River, or desalinated seawater. However, before starting any of these measures, it has to be investigated how people would react to such an intensive intervention in their daily lives. Previous studies have shown that sustainable adaptation succeeds only if the local people fully accept it. People are more willing to adapt when they are aware of the risks that climate change poses to many aspects of their lives (Luu et al., 2019). People in CMP are diverse in terms of living standards as well as access to water resources. To start reducing the use of GW, it is necessary to understand the importance of GW in people's lives, and research needs to reflect people's opinions and their assessment of the water they use.
People's perception of GW quality and surrounding factors may influence their usage habits. Factors affecting the acceptance of a water source include public awareness of water supply, distribution and treatment, as well as income and other personal factors (Baumann, 1983). A case study in Bengaluru, India evaluated the factors affecting the acceptance of recycled water; one noticeable result was that 89% of the people using surface water were not aware of wastewater treatment concepts or water reuse at all (Ravishankar et al., 2018). Other research in Vietnam likewise indicated that people's choice of water source depends on the availability and quality of the sources and the financial situation of the household (Danh and Khai, 2015). In CMP, a recent study showed that people are not fully aware of the danger of submersion by the sea (Di Giusto et al., 2021). Moreover, to the best of our knowledge, despite the many water-use problems, there are still no studies on people's perceptions of water-use habits in Ca Mau that combine quantitative and qualitative aspects. Quantitative research is an approach that evaluates relationships between variables through numeric data collection and analysis (data expressed as numbers or scores), whereas qualitative research concentrates on discovering individuals' experiences with phenomena through narrative or textual data collection and analysis (data expressed in words and images) (Clark et al., 2016). The combination of these two approaches creates enhanced methods that can show a more comprehensive view of people's opinions in Ca Mau, from a personal perspective as well as through statistical analysis (Creswell, 2017). Therefore, in this study a comprehensive GW quality assessment combined with a survey was conducted in CMP. The study uses the quantitative approach to examine the relationship between water quality and people's water-use habits.
At the same time, the open-ended questions and group discussions of the qualitative approach provide explanations for the problem, showing why people in CMP are overusing GW and evaluating their possible alternative options. 2 Methodology 2.1 Study area This research focuses on Ca Mau province (CMP), the southernmost province of Vietnam, which is surrounded by the sea on three sides. CMP comprises nine districts (Ca Mau, U Minh, Tran Van Thoi, Dam Doi, Thoi Binh, Cai Nuoc, Phu Tan, Nam Can, Ngoc Hien). The area has a high density of rivers and canals and is flat and low-lying, with an average elevation of around 0.5 to 1.5 m above sea level (Pechstein et al., 2018). Ca Mau lies in the monsoonal zone and has a tropical monsoon climate with two main seasons: the rainy season usually lasts from May to November and the dry season from November to May. The population of CMP is around 1.2 million people, with 603,250 males and 589,150 females in 306,999 households, corresponding to a sex ratio of about 102.39 males per 100 females. According to the Ca Mau Statistic Office (2021), the main labor force is in rural areas with 535,892 people (80.01%), while the urban labor force accounts for 19.99% (133,881 people). Occupations in CMP include high-level professionals, mid-level professionals, clerks, personal service and protective workers, sales workers, skilled agricultural, forestry and fishery workers, craft and related trade workers, machine operators, unskilled occupations and others. The most common occupations are agricultural, forestry and fishery workers (214,153 workers, 32.63%) and unskilled workers (254,550 workers, 38.78%). The key economic activity is agriculture, especially aquaculture (with a total area of 297,200 ha).
GW extraction in Ca Mau mainly serves agriculture, aquaculture and domestic use in rural areas, mostly from small to medium-sized wells with pumping rates >200 m³/day. However, there are approximately 175,710 GW extraction wells, of which only 248 are centralized wells and 452 are licensed extraction wells (Pechstein et al., 2018). This means that most wells extract GW illegally, without any exact estimation of extraction rates. 2.2 Questionnaires 2.2.1 Data collection The data set was collected on the basis of questionnaires and group discussions with households in CMP. Questionnaires were completed in face-to-face interviews between the authors, an instructor from the local government and a household member. Field trips to all nine districts of CMP were carried out to collect data. The first survey took place in March 2019, with 87 questionnaires and water samples collected across all nine districts. The second survey took place in December 2019 and January 2020, with 57 questionnaires and water samples focusing on the northern part of CMP. Based on the experience from the first field trip in March 2019, questionnaires were collected together with groundwater samples at the same locations. Groundwater samples were taken systematically to cover all regions of CMP (Fig. 1a, 1b). Following the approach of Ravishankar et al. (2018), the authors are aware that some aspects of this study may not be representative of a commune, a district or CMP as a whole. However, as this is the first study in the area, it is important to focus qualitatively on the basic factors and reasons affecting groundwater extraction, as well as on the connection between people's stories and the respective water quality. In total, 144 questionnaires and 144 GW samples were collected (Fig. 1a).
First, the survey was conducted to obtain results that led to follow-up questions in the discussions, allowing a better understanding of the initial findings from the quantitative research (Clark et al., 2016). The questionnaire comprised 28 questions in its first version and was extended to 35 questions in the second version after evaluation of the first survey's results. The second version focuses more on people's assessment of GW quality and their awareness of negative environmental impacts such as salinization and land subsidence. All questions were explained and discussed with the households directly. The questionnaire is divided into four main parts with easily understandable content matched to the perceptions of the respondents in CMP. The first part covers basic information about the respondents, such as living conditions. The second part explores groundwater extraction and water-use habits. The third part describes the potential of alternative water sources, and the last part addresses people's perceptions and awareness of over-exploitation and land subsidence in CMP. People's awareness is rated on a scale from 1 to 4. Level 1 is the lowest and means that people have no knowledge of the issue at all. Level 2 means that people have heard about the issue somewhere but do not understand it clearly. Level 3 means that people know about the problem and have a basic understanding of its causes. Level 4 is the highest level, at which people grasp the whole picture and understand the issue and the relevant information. After completing a questionnaire, respondents took part in a group discussion with the interviewer to explain their answers and tell their stories. 2.2.2 Data analysis To evaluate the outcomes of the questionnaires, well-established quantitative and qualitative approaches were applied.
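The four-point awareness scale described above lends itself to straightforward tabulation. The following is a minimal sketch of how such ordinal responses could be coded and counted; the level descriptions paraphrase the text, and the sample responses are invented for illustration, not study data.

```python
# Sketch: coding and tallying the 1-4 awareness scale from the questionnaire.
# Level wording paraphrases the survey design; responses below are invented.
from collections import Counter

AWARENESS_LEVELS = {
    1: "no knowledge of the issue at all",
    2: "has heard of the issue but does not understand it clearly",
    3: "knows the problem and its basic causes",
    4: "grasps the whole picture and relevant information",
}

def tally(responses):
    """Count how many respondents chose each awareness level (1-4)."""
    counts = Counter(responses)
    return {level: counts.get(level, 0) for level in AWARENESS_LEVELS}

print(tally([3, 4, 4, 2, 3, 4, 1]))  # -> {1: 1, 2: 1, 3: 2, 4: 3}
```

Keeping the levels as an explicit mapping ensures that levels with zero responses still appear in summary tables, which matters when comparing subgroups of different sizes.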
The quantitative approach covers statistical evaluation of questions related to numbers, such as the amount of water used or the number of people using a well. Similarly, the qualitative approach consists of a hermeneutic evaluation of questions related to words and text, such as the favorite water source or the purpose of GW use. Locations were selected with the aim of obtaining a complete overview of the different living conditions and water resources across the whole province, rather than a statistical overview based on the distribution of the population. The distribution of questionnaires in Ca Mau province is shown in Fig. 1b. The information gained from data collection is presented from a general view down to detailed explanations. According to the observations and interviews, besides groundwater there are three other types of water sources used in CMP: rainwater, tap water and bottled water (from private water suppliers). Tap water (TW) is not available throughout the whole province. TW is obtained from GW, which is treated at a drinking water plant and usually distributed to nearby households in the same ward or commune. To date, freshwater management and distribution are not effective. GW is the major water source and is used mainly for domestic purposes (Ha et al., 2015). The number of households with access to TW is much lower than the number of households using GW. Although the main research subjects of this study are households using groundwater (GW users) and GW samples, tap-water households (TW users) were also approached and interviewed to collect further information. For this reason, the respondents (144 questionnaires in total) were divided into two groups: group 1 includes all households that have only GW as their main water source, called GW users, and group 2 includes households that own a GW well but also have direct access to tap water, or that use TW only, called TW users.
Results from the two groups of respondents were selected, analyzed and compared to understand the differences in thought and behavior between them. The distribution of questionnaires across the two groups is not even (Fig. 1c): GW users account for 89.6% of the questionnaires, while the shares of people who use only TW and of people who use TW together with GW are 4.20% and 6.20%, respectively. An important reason for including this extra group is that people with experience of using TW have a more diverse perception and can evaluate water sources and their consumption habits from a wider perspective. This study focuses on the group of GW users; the TW users are used for comparison, as their number is small and cannot be considered representative of the whole group. 2.3 Water quality GW samples were collected in parallel with the questionnaires and were analyzed in the frame of the previous study by Bauer et al. (2022), which emphasizes GW evolution and geochemistry; only major ions and basic parameters were considered in the present study. The sampling methods are described in detail in Bauer et al. (2022). Briefly, GW samples were collected from households, small businesses and water supply stations with the aim of covering the whole province. Sampling points were selected based on their spatial relevance, general access and the permission situation. The number of samples exceeds the number of questionnaires. GW was collected after 15 minutes of pumping to ensure that the sample originated from the aquifer and not from stagnant water standing in the well casing prior to pumping. Some parameters were measured on site, and samples were brought back to the laboratory for further analysis, as described in Section 2.3.2. To prepare for analysis, 25 mL of each GW sample was filtered through a 0.45 µm cellulose-acetate filter (Sartorius Stedim Biotech GmbH).
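The grouping of respondents described above (GW users versus TW users) can be sketched as a simple classification and share computation. The records below are invented for illustration; the real study comprised 144 questionnaires, of which 89.6% were GW users.

```python
# Hedged sketch of the two-group split used in the analysis:
# group 1 = households with only GW ("GW user"),
# group 2 = households with any tap-water access ("TW user").
# The household records here are hypothetical, not survey data.

def classify(household: dict) -> str:
    """Assign a respondent to one of the two analysis groups."""
    return "TW user" if household["has_tap_water"] else "GW user"

def group_shares(households):
    """Return each group's share of respondents as a percentage."""
    groups = {"GW user": 0, "TW user": 0}
    for h in households:
        groups[classify(h)] += 1
    total = len(households)
    return {g: round(100 * n / total, 1) for g, n in groups.items()}

demo = [{"has_tap_water": False}] * 9 + [{"has_tap_water": True}]
print(group_shares(demo))  # -> {'GW user': 90.0, 'TW user': 10.0}
```

With such uneven group sizes, the TW-user results serve only for qualitative comparison, as the text notes, rather than for inferential statistics.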
50 µL of high-purity nitric acid was added to the filtered sample to prevent the precipitation of cations (APHA AWWA, 2005), and 50 µL of sodium azide was added to inhibit microbiological processes and thus ensure a correct anion analysis (Vanderford et al., 2011). Physicochemical parameters, including temperature, pH, electrical conductivity (EC), oxygen and redox potential, were determined on site using a multi-parameter portable meter (WTW Multi 3630 IDS). Total alkalinity was also measured on site with a titration kit (Merck KGaA, Germany). After filtration and acid/sodium azide addition, the samples were transported to Germany and analyzed by IC (Dionex ICS-1000; IonPac AS14 separation column, ERS 500 suppressor) for anions and by ICP-MS (X-Series 2, Thermo Fisher) for cations at the Karlsruhe Institute of Technology, in the laboratories of the Institute of Applied Geosciences. After analysis, the GW results were compared with people's subjective assessments of water quality from the questionnaires. In addition, the ion concentrations were compared with the National Technical Regulation on Domestic Water Quality QCVN 01-1:2018/BYT to determine whether the water quality meets the usage standards. In Fig. 5, the dashed red line represents the QCVN standard and clearly shows that some parameters of some samples exceed the permissible limits for domestic water; if the dashed red line does not appear in a panel, all values of that parameter meet the QCVN standard. All necessary parameters were analyzed, and a comparison was carried out between the groups of people using GW for non-drinking and for drinking purposes. A further aspect discussed is the set of samples for which GW was chosen as the best water resource. Not all parameters from the Vietnamese standard QCVN 01-1:2018/BYT are considered.
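The comparison against the QCVN 01-1:2018/BYT limits amounts to a per-parameter threshold check. A minimal sketch follows, using only the four limits explicitly quoted later in the text (Fe > 0.30 mg/L, B > 0.30 mg/L, Na > 200 mg/L, Cl > 250 mg/L); the sample values are illustrative, not measured data, and the full standard contains many more parameters.

```python
# Sketch: flag groundwater samples whose parameters exceed the
# QCVN 01-1:2018/BYT domestic-water limits cited in Section 3.3.
# Only the four limits quoted in the text are included here;
# the sample values below are invented for illustration.

QCVN_LIMITS = {  # mg/L
    "Fe": 0.30,
    "B": 0.30,
    "Na": 200.0,
    "Cl": 250.0,
}

def exceedances(sample: dict) -> list:
    """Return the parameters of one sample that exceed their QCVN limit."""
    return [p for p, v in sample.items()
            if p in QCVN_LIMITS and v > QCVN_LIMITS[p]]

# Hypothetical sample (concentrations in mg/L)
sample = {"Fe": 0.45, "B": 0.10, "Na": 230.0, "Cl": 120.0}
print(exceedances(sample))  # -> ['Fe', 'Na']
```

Running this over all 144 samples would reproduce the kind of exceedance counts discussed for iron, boron, sodium and chloride in Section 3.3.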
This study focuses on selected parameters that are most important for evaluating domestic water quality and that are easy for people to notice when they occur in harmful concentrations: pH, EC, NH4+, Cr, As, Cd, Sb, Pb, Al3+, Mn2+, Zn, B, Fe, Ba, Na+, Ca2+, Cl−, SO42−. pH is a basic parameter that can affect the values of other parameters; heavy metals such as cadmium, lead and chromium dissolve more easily in strongly acidic water (DeZuane, 1997). Iron and manganese do not cause serious health problems, but they can give drinking water a bitter taste even at very low concentrations, and when water containing elevated Fe2+ and Mn2+ is exposed to air, these ions can oxidize and precipitate, turning the water turbid (APHA AWWA, 2005). Zinc is not harmful at small concentrations, but it can give drinking water a strange taste at concentrations above 4 mg/L, and at 3-5 mg/L it can cause a greasy film when the water is boiled (WHO, 2018). The use of lead pipes increases lead concentrations in drinking water, which over the long term can affect children's mental health, and arsenic also poses a health risk after long-term exposure (WHO, 2018). Ammonia (NH4+) can exceed the taste threshold at 35 mg/L (WHO, 2018). 3 Results and discussion 3.1 General information on respondents In this study, household information on GW use together with perceptions of GW over-exploitation is used for the analysis. The ratio of males to females among the respondents was about 2:1, and the average household size is 5.5 people. According to the Ca Mau Statistic Office, the rural population is much larger than the urban one (920,948 people, or 77.2%, compared with 271,452 people, around 22.8%). Accordingly, about 91.0% of the respondents in this study live in rural areas and 9.00% in urban areas.
Most respondents who used groundwater lived in rural areas; in urban areas, tap water is supplied to households from water supply stations. The interviewed households have a variety of occupations, from growing rice and farming shrimp to running small businesses, working as company employees or serving as government officers. The majority of respondents are farmers (rice or shrimp farming) at 68.1%. Small self-employed households, including restaurants, bars and bottled-water sellers, make up 16.7%. People working in companies and government organizations, referred to as officers, accounted for 10.4% of the interviewees, and finally 4.86% were workers. In the Ca Mau population, the shares of these occupations are 32.6%, 8.50%, 2.93% and 8.87%, respectively (Ca Mau Statistic Office, 2021). Household income is difficult to estimate because most interviewees do not have a stable monthly income; their income depends on the success of the crop or shrimp harvest. People in Ca Mau usually have more than one source of income: one person can be an employee and also a small seller or a farmer. According to the Ca Mau Statistic Office, the average monthly income per capita in urban and rural regions in 2018 was 2,985,900 VND per person (around 129.6 USD), comprising wages or salary (798,300 VND, around 34.7 USD), income from agriculture, forestry and fishery (self-employment) (1,015,200 VND, around 44.1 USD), income from non-agricultural activities (813,400 VND, approximately 35.4 USD) and other income (359,100 VND, around 15.6 USD). These data fit the authors' impressions in CMP. The respondents are mainly ordinary households in which GW is used only for domestic purposes. In addition, there were some other types of households: some run businesses that require much water, such as bike washing, restaurants or shrimp farming, and others are private water suppliers who sell bottled water or own or manage water stations.
3.2 Current state of water resource use Groundwater is widespread and plays an important role in people's lives in CMP. To understand whether people would accept giving up GW, it is necessary to understand the importance and the specific role of GW in their daily lives in social and economic terms. According to the results of the 144 questionnaires, GW is used for many purposes in CMP, which are listed in Table 1. Households use GW to wash clothes and dishes, to cook and even to drink directly; washing clothes and dishes consumes the largest amount of GW. The two groups, GW users and TW users, hold different opinions on certain aspects. Their differing experience with TW leads to different assessments regarding water quality and convenience of use. People make sensory assessments of each water source they use, in terms of water volume and water quality, and consequently of which water source is the best. The respondents' opinions about the best water source are shown in Fig. 2. Among GW users, rainwater (RW) is rated as having the best quality, with 61.1% of households agreeing. According to the respondents: "Rainwater is sweet and delicious and is usually boiled to make tea. Rainwater is only used for the main drinking and cooking purposes. Although the water quality is good, the amount of rainwater is not enough for other purposes." Their opinions about rainwater are based on long-time experience of using it. According to Đoàn Thu Hà and Hồ (2014), rainwater in the Mekong Delta is also considered a high-quality water source and meets Vietnamese standards for almost all parameters; however, fecal contamination occurs due to the conditions of rainwater storage and treatment (Wilbers et al., 2013). Second comes bottled water (BW) with 27.0%. In fact, BW comes from groundwater, but it is treated in private filtration systems and sold to people.
People have confidence in the quality of water treated by these filtration systems and consider BW to be of good quality. The TW users group rates RW, BW and TW as the top three water sources in terms of quality, with 30.8%, 30.0% and 23.1%, respectively. Even though they have access to TW, they still believe that RW tastes better; some interviewees said that TW is sometimes affected by the pipes, shows strange color changes and tastes worse than RW. Few respondents think all water sources have the same quality, and no one chose surface water (SW) as the best source. SW in the Ca Mau Peninsula is widely contaminated with organic matter, nutrients, total suspended solids and microorganisms (Giao, 2022). Both target groups believe that GW quality is generally not very good: only 7.90% of GW users and 15.4% of TW users voted for GW as the best water resource. The households' perception of groundwater is largely consistent with the measured water quality, given the characteristics of some parameters in terms of color, smell and taste (see Section 3.3). In addition, the role of GW in the lives of households with access to TW has decreased. The importance of GW in people's lives according to the survey is shown in Fig. 3. Residents were asked how important GW is in their lives, with answers ranging from level 1 (not important) to level 4 (extremely important, irreplaceable). In the GW users group, no one chose level 1, while 30.3% and 66.4% of the respondents rated the importance of GW at levels 3 and 4, respectively. For TW users, GW is less important, with 22.2% at level 3 and 55.6% at level 4 (Fig. 3). It can be seen that once people are able to use TW, the importance of GW decreases. Moreover, the use of GW for drinking is a matter of great concern.
GW users can be further divided into two subgroups: (i) people who use GW for drinking and (ii) people who use GW for non-drinking purposes only. People who drink GW tend to pay more attention to water quality and usually apply some pre-treatment to make the water safer (Fig. 4). The results show that only 4.2% did not use any pre-treatment before drinking GW, which may be a concern for their health. A quarter (25%) of the subgroup used a settling process to remove suspended particles before drinking, while 20.8% used both settling and boiling. The majority, 50%, relied on filtration systems to remove impurities before drinking GW, normally mini filtration systems at home. The price of a mini system is around 6,000,000 VND (around 260 USD), and the filter has to be changed every 3-6 months at a price of around 90,000 VND per filter (around 3.90 USD). These findings highlight the importance of promoting safe water practices to protect the health and wellbeing of the population, particularly of those who currently do not use any pre-treatment (Fig. 4). The importance of GW in people's lives is affected by what they use GW for, as well as by whether they have other water sources such as TW available. Besides GW, people also have access to RW, SW, BW or TW for different purposes. The self-assessment question on the best-quality water points to potential alternatives that could replace GW as the main water source. Fig. 2 shows that RW is preferred and rated as the best-quality water source by both the GW user and TW user groups. RW is also a popular drinking water source in the Mekong Delta, with positive characteristics of color, taste and smell (Wilbers et al., 2013b).
However, the quantity of RW does not meet people's demand during the dry season, when they can only collect and store rainwater in a few, rather small containers (Li et al., 2016). 3.3 Groundwater quality at locations where GW is used for non-drinking and drinking purposes Since a scientific assessment of water quality with proper measuring equipment is not an option for most people in CMP, their decisions and actions in using water depend on their individual gustatory and olfactory senses. According to feedback from residents in the interviews, when people perceive a strange taste, color or odor, they will not use the GW for drinking. According to the findings of Bauer et al. (2022), the analysis of the water samples with respect to EC reveals that some GW does not meet the necessary standards for direct consumption as drinking water. Fig. 5 presents a comparative analysis of the water quality parameters (as discussed in Section 2.3) in areas where GW is used for drinking (blue boxplots) and areas where GW is not used for drinking (red boxplots). In general, the values for non-drinking use span a wider range, and there are more outliers in the plot. The depth of the GW extraction wells also has to be considered in order to know which aquifer households use for each purpose. The median depth of 120 m for both groups corresponds to the upper-middle Pleistocene aquifer qp2-3, the most common aquifer for household wells in Ca Mau, accounting for 63.37% of the extracted amount according to a GW model estimate (Hoan et al., 2022). However, there are a few exceptions of deeper wells used for non-drinking purposes. This indicates that depth is not particularly important, in agreement with Bauer et al. (2022), who identified water chemistry as a regional rather than a vertical feature.
The results show that in most samples used for non-drinking purposes (red boxplots) the EC and pH parameters meet the regulatory limits. However, many water samples in the study area show concentrations of NH4+, B, Fe2+, Ba2+, Na+, Cl− and SO42− exceeding the threshold values for drinking water in Vietnam (Fig. 5). Due to the noticeable strange taste, color or general appearance of these samples, households rated their quality poorly and thus did not choose them as a source of drinking water. Non-drinking water has a higher pH, which also correlates with EC. Several variables depend on high EC, mostly Cl−, pH, SO42−, Na+, Ca2+ and B. Interestingly, NH4+ is higher in the group of samples with higher EC, which may support the assumption that presumably contaminated saline GW from the shallow aquifer is leaking into deeper aquifers (Bauer et al., 2022). Regarding the "easy-to-detect" water quality parameters above, the GW samples used for drinking (blue boxplots) are of comparatively better quality than the samples not used for drinking, as depicted in Fig. 5. However, further examination of the questionnaires is necessary to explain this result, as there are some exceptions that require deeper analysis. For samples with iron (Fe) concentrations exceeding the standard (>0.30 mg/L) that are nevertheless used for drinking, people have different ways of treating the GW before consumption. Seven of these households treat the water before consumption: two use sedimentation, three use sedimentation with filtration and boiling, and two use a mini filtration system. Two households do not use pre-treatment and rely on other sources such as RW and BW for drinking, using GW only as a backup in emergencies. In the case of boron (B), 15 samples were found to exceed the permissible limit (>0.30 mg/L).
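The boxplot comparison between the drinking and non-drinking groups rests on simple distributional summaries: median, interquartile range and Tukey (1.5 x IQR) outliers. The sketch below shows that computation with the standard library; the EC values are invented to mimic the pattern described (wider spread and more outliers for non-drinking samples), not study data.

```python
# Sketch of the summary behind a boxplot panel (cf. Fig. 5):
# median, IQR and Tukey-fence outliers for two groups of samples.
# The EC values (µS/cm) below are hypothetical illustrations.
import statistics

def box_summary(values):
    """Return the median, IQR and 1.5*IQR outliers of a sample list."""
    q1, med, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [v for v in values if v < lo or v > hi]
    return {"median": med, "iqr": iqr, "outliers": outliers}

drinking = [600, 650, 700, 720, 750, 780, 800]
non_drinking = [500, 900, 1100, 1200, 1300, 1500, 4000]  # wider spread

print(box_summary(drinking))      # no outliers
print(box_summary(non_drinking))  # one high outlier
```

Applied to the measured parameters, this reproduces the qualitative observation that the non-drinking group shows a wider range and more outliers than the drinking group.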
Among these, two samples were consumed without any pre-treatment, five were settled, two were boiled, five were filtered, and one came from a water treatment plant. Unlike iron, boron has minimal impact on taste and is difficult for households to detect by themselves. For sodium, twelve households exceed the standard (>200 mg/L), and all twelve also exceed the boron standard. Of these twelve samples, one is from a water plant. Two households do not apply any measures before using the GW, because they mainly use RW and BW as their primary drinking sources and keep GW as an additional option for emergencies. Four other households use sedimentation and also rely on additional drinking sources such as RW and BW. Two households boil the water, and three treat it with filtration systems and do not use any other water sources. When households treat their GW, it can be assumed that they intend to improve the water quality before using it as a drinking water source. Similarly, there are two locations where the chloride concentration exceeds the standard (>250 mg/L) and the sodium and boron concentrations also exceed their standards. One household has a professional water treatment system with filtration and UV disinfection to treat the water after extraction and distributes it as a supplier of potable water. The other household uses only a small amount of GW for drinking and relies on BW (filtered water) and RW instead. Households using GW for both cooking and drinking often have high water quality, with chemical parameters within the permissible limits. Biological parameters can be addressed in future studies.
For households that use GW for drinking even though its quality is not good enough, a closer look at the interviews shows that they use only a very small amount of GW for this purpose. In addition, they also obtain water from rain or bottled water as main sources, or they treat the GW before use, depending on their means ( Table 2 ). Previous studies suggest that people in CMP should treat GW before drinking ( Ha et al., 2022 ); however, the treatment has to be appropriate for the current water quality. People's perception of GW quality shapes their usage behavior. Therefore, people's perceptions and their individual stories need to be discussed further to identify the factors affecting their water use habits, such as the convenience and applicability of water sources, the availability of alternative sources, and economic conditions. 3.4 GW quality of households where GW is perceived as best water resource Through their perceptions, people identify some uncertainties in the quality of the water they use. The perceptions of the people and their responses are thus important factors in assessing water quality as well as the role of GW in people's lives and the status of GW extraction. Figs. 6 and 7 show that most samples perceived to have the best water quality comply with the QCVN standards; only a few samples exceed the thresholds for individual parameters. For each case, a reasonable explanation is given in Table 2 . Among the households whose water samples exceed the QCVN standard for Boron, the first sample is from a restaurant. Assuming that the quality of all types of water is the same, this household uses a large amount of GW daily for its restaurant business (400–500 L/day). GW use brings financial benefits to the household at a very low cost, so GW is still the best water source in their situation. 
Three of the remaining samples are from households using GW as the main source of drinking water. They trust the quality of their current water source because two households use a mini filtration system and one household is a supplier of potable (bottled) water with a professional treatment system. The fifth household uses a sedimentation method; it still uses GW and RW in combination because of the low cost, although TW is available. When RW runs out in the dry season, GW becomes the main water source. The last household does not use GW for drinking; GW is its main water source for washing, cooking, and shrimp farming. Each month, the household uses a large amount of electricity to pump the groundwater, with electricity bills reaching 9,000,000–10,000,000 VND/month (391–434.8 USD). GW is used for work that generates a large income for the family, so it is appreciated for its quantity and quality. Only two samples have Chloride contents exceeding the standards. Both households use GW as their main water source and apply modern filtration systems before use: one uses a mini filtration system for the household, and one uses a large filtration system for bottled water production and distribution. This also explains why, although the raw water quality does not meet the standards, GW is mainly used and still considered the best water source. Depending on the case, people always have their own reasons for choosing which water source they consider the best, even if the water quality does not meet the standards for domestic water. The quality of GW and RW is highly appreciated, but RW has a great limitation in terms of inadequate storage volume, so it is used only in the rainy season; in general, GW is preferred over rain water. 
Seemingly unlimited availability is a strong positive aspect of GW use in people's opinion. 3.5 Evaluation of people's awareness of the impact of groundwater extraction on land subsidence Changing people's living habits is not simple. The priority of this study is to understand people's thoughts and perceptions of these issues. The next questions in the questionnaire shed further light on people's awareness of land subsidence and of the effects of excessive GW extraction. Most people have low or extremely low awareness of these two problems, as shown in Figs. 8 and 9 . For the two awareness questions, the households are again divided into two groups as in the previous part: GW users and TW users. According to the results, 80.8% of GW users had the lowest level (level 1) of awareness of the impact of GW extraction, compared with 69.2% of TW users. At the highest level of awareness of the impact of GW extraction, the GW users group has only 5.0% of households, while the TW users group has 15.4%. Similarly, for the question on people's perception of land subsidence in Ca Mau, the levels of awareness among GW users from level 1 to level 4 were 71.7%, 5.8%, 19.2%, and 3.3%, respectively. Among TW users, awareness is higher, with rates from low to high of 61.5%, 7.7%, 23.1%, and 7.7%. In general, most respondents in this study are not aware of the impact of GW extraction or of land subsidence in Ca Mau. The level of awareness of these issues is extremely low and needs to be promoted. In addition, the importance of GW in people's lives is extremely high ( Fig. 3 ), which could contribute to the low level of awareness. The opinions on the importance of GW and people's awareness of excessive GW extraction and land subsidence show an inverse relationship. 
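The group comparison above can be reproduced directly from the reported level-1 to level-4 percentages. The sketch below aggregates the four awareness levels into "higher awareness" (levels 3 and 4) shares per user group; the binning is an illustrative choice for summarizing the comparison, not a grouping used in the paper:

```python
# Hedged sketch: compare awareness of land subsidence between GW users and
# TW users from the level-1..4 percentages reported in the text.
# The low/higher binning below is an illustrative choice.
awareness = {
    "GW users": [71.7, 5.8, 19.2, 3.3],   # levels 1-4, land subsidence question
    "TW users": [61.5, 7.7, 23.1, 7.7],
}

def higher_awareness_share(levels):
    """Share of respondents at levels 3 and 4, rounded to one decimal."""
    return round(levels[2] + levels[3], 1)

for group, levels in awareness.items():
    print(group, higher_awareness_share(levels))
# GW users 22.5
# TW users 30.8
```

The aggregated shares (22.5% vs 30.8%) restate the text's conclusion that TW users report higher awareness of land subsidence than GW users.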
The more people depend on groundwater, the harder it is for them to pay attention to the issues surrounding excessive groundwater extraction. Similar to the farmers in the Red River Delta, people have no intention of adapting to climate change when they do not perceive any threat to their life and health ( Luu et al., 2019 ). Currently, GW plays an important role in people's lives, while their awareness of overexploitation and land subsidence in Ca Mau is limited; this leads to a lack of attention to climate change issues. According to Eslami et al. (2021) , inadequate understanding of environmental systems and processes in the Mekong Delta results in misinterpretation of socio-environmental aspects, ineffective policymaking, and an uninformed public opinion. The latter aspect is important, as only a well-educated and informed society may accept change in their daily life. The TW users and GW users groups give quite different answers, although the number of TW user questionnaires is still small compared with GW users. A future study should expand the number of TW users and focus on a case study in an area that contains both groups, to allow a deeper comparison. The proposed alternative water sources should also be analyzed and applied to each condition, considering the advantages and disadvantages of each water source for each area, to obtain the most accurate results. RW is mentioned frequently in the households' answers as a potential alternative water resource. Research on how to overcome the disadvantages of RW is also very promising, as RW has already gained the interest of the people using it. 4 Conclusion This study has created a general picture of Ca Mau households' GW use habits. 
The research approach of evaluating groundwater quality parameters and then examining each household's responses in detail has allowed the analysis results to be explained more closely and has linked the survey data with the sample analysis data. The study presents the opinions of households concerning GW use habits based on questionnaires collected according to the spatial and social distribution of the study area. These results do not claim to be statistically representative of the whole CMP population; however, they are a crucial first step in the evaluation of GW use habits in CMP. GW plays an important role in people's lives; it is used for washing, cooking, and other activities (restaurant business, shrimp farming, bottled water business, etc.). Depending on their perception of GW quality, people use GW for different purposes. People estimate the water quality through their own perception, without knowing the actual quality through professional analysis techniques. For important and directly health-related purposes such as cooking and drinking, people treat the GW or provide alternative water sources if they feel that the water quality is inadequate. As people become more dependent on groundwater (GW users), their awareness of the potential impacts of GW extraction decreases, while the importance of GW in their lives increases. In addition, people have little awareness of land subsidence processes in Ca Mau in general, and especially of the potential of GW extraction to be a major factor for land subsidence. If people have more options for water use, as TW users do, their habits of use, their dependence on GW, and their perception of the harmful effects of groundwater extraction change. This opens a research direction to find potential alternative water sources besides tap water to meet people's needs, as a solution to reduce the current excessive exploitation of GW. 
Author contributions Van Cam Pham: Conceptualization, Methodology, Investigation, Formal analysis, Writing - Original Draft, Visualization. Jonas Bauer: Investigation, Writing - Review & Editing, Visualization. Nicolas Börsig: Investigation, Writing - Review & Editing, Project administration. Johannes Ho: Investigation, Review & Editing. Long Vu Huu: Investigation & Review. Hoan Tran Viet: Review & Editing. Felix Dörr: Review & Editing. Stefan Norra: Writing - Review & Editing, Supervision, Project administration, Funding acquisition. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgment This research was carried out as part of the project ViWaT-Engineering, funded by the German Federal Ministry of Education and Research (BMBF) under grant No. 02WCL1474A. We sincerely thank the Vietnamese National Center for Water Resources Planning and Investigation and the Department of Natural Resources and Environment Ca Mau (DONRE Ca Mau) for their kind support. We would like to thank the laboratory staff of the Institute of Applied Geosciences at KIT, Karlsruhe, and all project partners working together in this project. We would also like to express special thanks for the financial support of Graduate Funding from the German States (Landesgraduiertenförderung). Finally, we acknowledge support from the KIT-Publication Fund of the Karlsruhe Institute of Technology.
|
[
"ALLISON",
"BAUER",
"BAUMANN",
"CRESWELL",
"DANH",
"DIGIUSTO",
"ERBAN",
"ESLAMI",
"FRIEDRICH",
"GIAO",
"GUSTAFSON",
"HA",
"HA",
"HAN",
"HENS",
"HOAN",
"LI",
"MINDERHOUD",
"MINDERHOUD",
"MINDERHOUD",
"NGUYENTHANH",
"BYT",
"CLARK",
"RAVISHANKAR",
"TRAN",
"VAN",
"VANDERFORD",
"WILBERS"
] |
5d3390be9146418cb375e85d903b0543_Treatment Patterns and Healthcare Outcomes with Collagenase Clostridium Histolyticum vs Surgery in P_10.1016_j.esxm.2021.100321.xml
|
Treatment Patterns and Healthcare Outcomes with Collagenase Clostridium Histolyticum vs Surgery in Peyronie's Disease: A Retrospective Claims Database Analysis
|
[
"Trost, Landon",
"Huang, Huan",
"Han, Xu",
"Burudpakdee, Chakkarin",
"Hu, Yiqun"
] |
Introduction
Treatments for Peyronie's disease (PD) include surgical management and collagenase clostridium histolyticum (CCH).
Aims
To evaluate PD treatment trends after CCH approval and compare clinical outcomes in CCH- and surgery-treated cohorts.
Methods
Patients newly diagnosed with PD between January 2011 and December 2017 were identified in a U.S. claims database. Cohorts initiating treatment with CCH or surgery between January 2014 and June 2017 were included. Patients were continuously enrolled ≥6 months before and ≥12 months after index date. Post-treatment penile complications and analgesic use were compared 1 year after procedure in propensity score-matched cohorts.
Main outcome measures
The main outcome measures of this study were treatment patterns, penile complications, and analgesic use.
Results
In the newly diagnosed PD cohort, 1,609 patients received CCH and 1,555 patients had surgery. Overall CCH or surgery treatment rate/year increased from 9.8% in 2014 to 15.5% in 2017, with <1% receiving verapamil or interferon. Initial treatment ratios of CCH to surgery increased from approximately 1:1 (2014) to 2:1 (2017). In the unmatched CCH (n = 1,227) and surgery (n = 620) cohorts, more (P < .05) surgery-treated patients received analgesics (particularly opioids), oral PD therapies, vacuum erection devices, and phosphodiesterase-5 inhibitors before the index date. After propensity score matching (n = 620/cohort), newly occurring postprocedural complications during the follow-up period were higher in the surgery cohort (25.3% vs 18.4%, P = .003). The surgery cohort had significantly (P < .05) higher rates of erectile dysfunction (65.0% vs 44.8%), penile pain (17.9% vs 8.9%), and penile swelling (8.1% vs 5.2%) and was more likely to be prescribed opioids (93.3% vs 38.9%; P < .0001) or non-steroidal anti-inflammatory drugs (27.0% vs 20.3%; P = .006).
Conclusion
CCH demonstrated fewer complications and less analgesic use than surgery and was used as the initial therapy for PD twice as often as surgery.
L Trost, H Huang, X Han, et al. Treatment Patterns and Healthcare Outcomes with Collagenase Clostridium Histolyticum vs Surgery in Peyronie's Disease: A Retrospective Claims Database Analysis. Sex Med 2021;9:100321.
|
Introduction Peyronie's disease (PD) is characterized by a disorganized, excessive deposition of collagen that results in plaque formation within the penile tunica albuginea. 1 This plaque formation can restrict tunica lengthening during penile erection, resulting in penile curvature, deformity, discomfort, and/or pain. 2 The reported U.S. prevalence of PD ranges from 0.5% to 13%, although this may be underestimated owing to a reluctance of men to admit to the condition and seek treatment for it. 1,2 Although PD occurs predominantly in older men, it has been reported in nearly every age group. 3–6 Treatment goals are to maximize symptom control, sexual function, and patient/partner quality of life while minimizing adverse events and patient/partner burden. Historically, surgical management of PD was considered the "gold standard" treatment for men with intact erectile function and PD, with multiple variations of plication, corporoplasty, or incision/excision and grafting techniques described. 1 However, surgery is associated with complications including penile length/volume loss, erectile dysfunction (ED), sensory changes, recurrence of curvature, and palpable abnormalities, among others. 7 Given a desire for more conservative therapies, other treatments, including oral and topical formulations, have been proposed; however, limited, conflicting data have failed to consistently show benefits. 1,8,9 In December 2013, the U.S. Food and Drug Administration approved the first injectable therapy for the treatment of PD (collagenase clostridium histolyticum [CCH] [Xiaflex; Endo Pharmaceuticals Inc, Malvern, PA]), based on 2 phase III, randomized clinical trials that demonstrated safety and efficacy in treating penile curvature. 10 Since then, multiple postapproval studies have confirmed the clinical utility and efficacy of CCH in various PD cohorts. 11 However, limited data are currently available on utilization rates of CCH and changes in practice patterns that have occurred since its availability. A retrospective regional claims database analysis of patients with PD between 2013 and 2016 showed that use of injectable therapies, including CCH, is increasingly displacing surgical management as first-line treatment in clinical practice. 12–15 The current objectives were to review a nationally representative healthcare claims database to describe therapeutic trends in PD treatment in newly diagnosed patients after U.S. regulatory approval of CCH for PD and to compare clinical outcomes of men treated with CCH vs surgery. The study hypothesis was that after approval and release, CCH use would increase and surpass surgery as a first-line treatment for PD. Materials and methods Study Design This retrospective, longitudinal cohort study was conducted using administrative healthcare claims data gathered from the IQVIA Real-World Data Adjudicated Claims, a U.S. database, between January 1, 2010 and June 30, 2018. This database contains anonymized information for >150 million unique enrollees representing a diverse cohort by U.S. geographic region (ie, northeast, midwest, south, west), employers, payers (eg, commercial, Medicaid, Medicare, self-insured), providers, and specialists. The data collected included demographic variables and medical and prescription claims data (eg, inpatient and outpatient diagnoses, procedures, and medications). The database is considered to be representative of the U.S. insured population with regard to age and sex. This database is further described by Camper et al (2019). 
Three patient cohorts were created for this study: 1 cohort for the treatment trend analysis and 2 cohorts for comparing patient characteristics, treatment patterns, post-treatment penile complications, and analgesic use (ie, non-steroidal anti-inflammatory drugs, opioids) among patients initiating CCH vs surgery. For the treatment trend analysis, a cohort that was newly diagnosed with PD between January 2011 and December 2017 was created ( Supplementary Figure 1 ). To ensure that patients were newly diagnosed, those with a PD diagnosis recorded during the 12 months before the first observed PD diagnosis were excluded. Continuous enrollment ≥12 months before the first PD diagnosis was required to ensure an adequate look-back period. Continuous enrollment for ≥30 days after the first PD diagnosis was required to capture treatment trends. The first treatment received (ie, CCH, penile plication, incision/excision and grafting, penile prosthesis, interferon alpha, or verapamil) and the time from diagnosis to the first treatment were assessed. In cases where men underwent >1 treatment during the study period, only the first treatment was captured. Patients with PD were identified based on diagnosis codes (International Classification of Diseases, Ninth/Tenth Revision, Clinical Modification), and treatment of interest was identified based on National Drug Codes, the Healthcare Common Procedure Coding System, or Current Procedural Terminology codes ( Supplementary Tables 1–4 ). For the comparative analysis, the same database was used to create the cohorts that included patients initiating intralesional CCH therapy or penile surgery of interest (ie, plication, incision/excision and grafting, and penile prosthesis implantation) between January 2014 and June 2017 ( Figure 1 ). This selection period was chosen to coincide with the approval of CCH. 
The index date for the CCH cohort was defined as the date of the first CCH claim; for the surgery cohort, it was defined as the date of the first surgery of interest ( Supplementary Figure 2 ). Patients were included if they were ≥18 years of age on the index date and had continuous enrollment of ≥6 months before the index date (baseline), to capture prior PD therapies, and ≥12 months after index (follow-up period). Patients were required to have ≥1 medical claim with a PD diagnosis and no evidence of penile surgery of interest or CCH treatment during the baseline period. The CCH cohort was matched 1:1 to the surgery cohort using propensity score (PS) matching based on baseline characteristics (ie, age category, Charlson comorbidity index [CCI] category, comorbidities, geographic region, history of radical prostatectomy, indicator of ≥1 all-cause hospitalization, insurance plan type, payer type, total all-cause cost/month per patient, treatment of PD). Outcomes, including penile-related complications and analgesic use, were compared between PS-matched cohorts during the 12-month postindex period. Study Assessments For the treatment trend analysis, the number of patients newly diagnosed with PD, the initial treatments received, and time from the diagnosis to initial treatment were analyzed. For the comparative analysis, baseline demographic and clinical characteristics and penile events of interest were compared before and after PS matching ( Supplementary Tables 4 and 5 ). Time from earliest PD diagnosis to the initial treatment with CCH or surgery (index event) and PD-related treatments used during the follow-up period (eg, analgesics, intralesional injections) were also evaluated, as were postindex penile-related complications and medication use. For the CCH cohort, the total number of CCH injections (identified using Healthcare Common Procedure Coding System codes) per patient was evaluated. 
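The 1:1 propensity score matching described above can be illustrated with a greedy nearest-neighbor match on precomputed scores. In the study, the scores came from a model of baseline covariates (age, CCI, comorbidities, region, etc.); here the score values, the caliper width, and the function name are illustrative assumptions, and the study's exact matching algorithm is not specified in the text:

```python
# Hedged sketch of greedy 1:1 nearest-neighbor matching on propensity scores,
# illustrating the kind of PS matching described in the text. Scores, caliper,
# and names are illustrative; the study's PS model used baseline covariates.

def greedy_match(treated_scores, control_scores, caliper=0.05):
    """Match each treated unit to the nearest unused control within the caliper.

    Returns a list of (treated_index, control_index) pairs.
    """
    available = set(range(len(control_scores)))
    pairs = []
    for t_idx, t_score in enumerate(treated_scores):
        if not available:
            break
        # nearest remaining control by absolute score distance
        c_idx = min(available, key=lambda c: abs(control_scores[c] - t_score))
        if abs(control_scores[c_idx] - t_score) <= caliper:
            pairs.append((t_idx, c_idx))
            available.remove(c_idx)
    return pairs

# Toy example: 3 CCH-like units matched against 4 surgery-like units
cch = [0.31, 0.52, 0.90]
surgery = [0.30, 0.55, 0.50, 0.10]
print(greedy_match(cch, surgery))  # [(0, 0), (1, 2)]
```

The third treated unit (score 0.90) finds no control within the caliper and is left unmatched, which is how matching shrank the CCH cohort from 1,227 to the 620 patients with surgical counterparts.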
Statistical Analysis All measures were reported with descriptive statistics, using frequencies and percentages for categorical variables and mean, SD, median, and interquartile range (Q1–Q3) for continuous variables. For the comparative analysis, baseline patient characteristics of the unmatched CCH and surgery cohorts during the preindex period were described. The t-test (mean) and Wilcoxon rank-sum test (median) were used to compare continuous variables between the unmatched cohorts, while χ2 tests were used for categorical variables. All tests were conducted assuming a two-tailed test of significance and α = .05. The CCH and surgery cohorts were compared after PS matching to minimize confounding and bias. A PS model that estimated the probability of receiving either CCH or surgery was constructed from key baseline characteristics including: duration of preindex period, index age group, geographic region, payer type, plan type, CCI categories, selected comorbidities (including benign prostatic hyperplasia, hypogonadism, diabetes, cardiovascular disease, dyslipidemia, Dupuytren's contracture, lower urinary tract symptoms, prostate cancer, depression, obesity, ED, penile pain), and history of radical prostatectomy. 17 Postprocedural treatment outcomes, including penile complications and analgesic use during the postindex period, were compared using the PS-matched cohorts. Pair-wise comparisons were made between PS-matched cohorts using the Wilcoxon signed-rank test for continuous variables and the McNemar-Bowker test for categorical variables. Statistical analyses were performed using SAS software, version 9.4 (SAS Institute, Inc, Cary, NC). Results Treatment Trends In the cohort of patients newly diagnosed with PD (n = 36,156) identified between 2011 and 2017, 1,555 patients had surgery and 1,609 received CCH as initial therapy ( Supplementary Figure 1 ). 
During this time period, while the annual rate of new PD diagnoses remained stable, the treatment rate with CCH or surgery increased gradually, from 9.8% in 2014 to 15.5% in 2017 ( Figure 2 ). After the release of CCH in 2014, its use as first-line treatment for PD increased by 1.6%/year ( P = .023 for yearly trend), whereas the rate for surgery remained stable (0.2%/year, P = .078). The ratio of CCH vs surgery as initial treatment for PD increased from approximately 1:1 in 2014 to approximately 2:1 in 2017 ( Figure 3 ). When stratifying this newly diagnosed cohort by surgery type, the percentage of patients treated with plication and incision and grafting decreased by 18.8% in 2017 from its peak in 2015, while the percentage of patients receiving a penile prosthesis increased by 11.9% from 2015 to 2017. Between 2014 and 2017, the mean time from the initial PD diagnosis to the first CCH treatment decreased from 13 months in 2014 to 8 months in 2017. The mean time from diagnosis to first treatment also decreased in the surgery cohort, although at a slower rate that was not statistically significant (0.7 vs 1.8 months/year, P = .13). The use of other intralesional injection therapies (eg, verapamil or interferon) as the initial treatment remained consistently low (<1%); however, these therapies often are not submitted for insurance claims and may be underrepresented in the current cohort. Patients treated with CCH received a median of 6 injections; the overall distribution of injections indicated that 32.6% of patients received 8 injections ( Figure 4 ). Comparative Analysis Baseline Patient Demographics, Clinical Characteristics, Treatment History There were 1,227 CCH cohort and 620 surgery cohort patients identified from 2014 to 2017 before PS matching. 
During the baseline period, there were statistically significant differences between the unmatched cohorts in age group at index, mean CCI score, select comorbidities, analgesic use, prior PD therapies, and prior ED therapies ( Table 1 , Supplementary Tables 5 and 6 ). After PS matching, 620 patients remained in each cohort. The matched cohorts showed similar baseline demographic characteristics (mean age of 54 years at index) and similar clinical characteristics ( Table 1 , Supplementary Table 5 ). The mean CCI score was approximately 1.5 for both cohorts; the most common comorbidities were ED, cardiovascular diseases, dyslipidemia, and benign prostatic hyperplasia. In the matched cohorts, there was a higher percentage of patients with a history of radical prostatectomy in the surgery cohort than in the CCH cohort (5.2% vs 1.9%, P = .001). The majority of patients in the matched CCH (73.9%) and surgery (76.5%) cohorts filled prescriptions for analgesics (ie, opioid or non-steroidal anti-inflammatory drugs [NSAIDs]) at some point before their index treatment ( Supplementary Table 6 ). The CCH and surgery cohorts were similar in their use of oral and intralesional PD therapies and ED treatments such as phosphodiesterase-5 inhibitors and testosterone. Postprocedural Complications During the 12-month follow-up period (PS-matched cohorts), the surgery cohort had a higher percentage of patients with newly occurring postprocedural complications vs the CCH cohort (25.3% vs 18.4%, P = .003). Among all postprocedural penile-related complications (newly occurring and reoccurring), several notable complications occurred at significantly higher rates in the surgery vs CCH cohort, including ED (65.0% vs 44.8%), penile pain (17.9% vs 8.9%), and penile swelling (8.1% vs 5.2%; Figure 5 ). 
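As a rough check on the complication-rate comparison (25.3% vs 18.4% in matched cohorts of 620 each), a 2×2 chi-square statistic can be computed by hand. Note that this unpaired test is only an approximation: the study itself compared propensity-score-matched pairs (McNemar-Bowker test), whose discordant-pair counts are not reported here. The event counts below are reconstructed from the reported percentages:

```python
# Hedged sketch: unpaired 2x2 chi-square as a rough check on the reported
# complication rates (25.3% vs 18.4%, n = 620 per cohort). The matched-pair
# test the study used needs discordant-pair counts not given in the text.

def chi2_2x2(a, b, n1, n2):
    """Chi-square statistic for event rates a/n1 vs b/n2 (no continuity correction)."""
    c, d = n1 - a, n2 - b          # non-events in each cohort
    n = n1 + n2
    return n * (a * d - c * b) ** 2 / (n1 * n2 * (a + b) * (c + d))

surgery_events = round(0.253 * 620)  # ~157 patients with new complications
cch_events = round(0.184 * 620)      # ~114 patients
stat = chi2_2x2(surgery_events, cch_events, 620, 620)
print(round(stat, 2))  # 8.73 — above the 3.84 critical value (1 df, alpha = .05)
```

A statistic of ~8.73 corresponds to p ≈ .003, consistent with the P value reported for this comparison.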
In contrast, corporal rupture (1.8% vs 0.8%) and penile hematoma (1.1% vs 0.2%) were reported more frequently in men treated with CCH, although differences for the latter 2 comparisons were not statistically significant ( Supplementary Table 7 ). Postprocedural Analgesic Use and Hospitalization Analgesic use and hospitalization rates were evaluated in the PS-matched cohorts. To limit confounding, patients in the CCH cohort who subsequently underwent surgery, and patients in the surgery cohort who subsequently received CCH were excluded, resulting in a total of 596 patients remaining in each cohort ( Figure 1 ). Within 12 months before the index date, 38.6% of patients in the CCH cohort and 43.5% of patients in the surgery cohort filled at least 1 opioid prescription. During the 1-year postindex follow-up period, the use of opioids (93.3% vs 38.9%; P < .0001) and NSAIDs (27.0% vs 20.3%; P = .006) was higher in the surgery cohort than in the CCH cohort ( Figure 6 ). The mean (SD) number of opioid prescriptions per patient was also higher in the surgery vs CCH cohort (4.4 [5.7] vs 1.8 [4.1]; P < .0001), and nearly all opioid prescriptions (94.8%) were filled within the first week after the surgical date. Patients receiving surgery were more likely than patients treated with CCH to be hospitalized for PD-related complications during the follow-up period (2.9% vs 0.5%; P = .002). Discussion This is the first study to report findings on observed treatment patterns and outcomes in the U.S. commercially insured, newly diagnosed, and newly treated PD population. Findings demonstrated that CCH is more commonly used as initial therapy for PD and that its use as first-line treatment doubled over surgery by 2017. In addition, results showed that an increasing number of men sought treatment with either CCH or surgery overall, suggesting that the option of an effective, conservative therapy led more men to seek treatment for PD. 
The time from diagnosis to initial treatment has also decreased throughout the observed period, which may indicate that patients are seeking out treatment earlier or that providers are offering CCH sooner than previously offered. The findings from the present study are consistent with other published data. Sukumar et al (2019) reported an analysis of CCH claims in New York state during a similar time period. In their series, CCH was used as the first-line therapy for most patients newly diagnosed with PD over surgery, and CCH was used more frequently than surgery as a treatment option. 15 The specific reasons for the preferential use of CCH as a first-line therapy were not directly captured in the present study; however, the minimally invasive nature of CCH and lower postindex complication rates compared with surgery, as demonstrated in this study, may be contributing factors. Results from the present study highlight a lower postprocedural complication rate in the CCH (18.4%) vs surgery cohort (25.3%; P = .003). Although rare, patients treated with a surgical procedure were also more likely to be hospitalized for a procedure-related complication than those treated with CCH. A unique finding of this study was the frequency of opioid use in the CCH and surgery cohorts before and after the procedures. During the 12 months before the index date, approximately 40% of men with PD had filled a prescription for opioids. Twelve months after the index date, the percentage of patients who filled opioid prescriptions was 2.4-fold higher in the surgery cohort than the CCH cohort (93.3% vs 38.9%; P < .0001), with almost all prescriptions in the surgery cohort filled within a week of the procedure. The present study has several limitations that are inherent to claims-based methodologies, including potential misclassification of events based on diagnosis codes. In addition, the present study population was only a sample of all patients with PD in the United States. 
However, despite the small absolute numbers, the relative findings can be generalized to other U.S. commercially insured patients, given the broad sampling across the country. Inclusion of patients receiving penile prostheses in the surgery cohort is a limitation, given that this subgroup of patients likely represents a distinct population that may experience greater rates of postoperative pain and hospitalizations compared with groups of patients receiving other interventions. The frequencies of treatments reported during the study period likely underestimate the total number of surgical procedures and CCH injections performed annually. Based on Endo Pharmaceutical's internal data, approximately 37,500 vials of CCH were distributed from specialty pharmacies for PD use in 2017. Using a mean treatment of 6 vials per patient, this would suggest that approximately 6,250 unique patients with PD would have received treatment with CCH in 2017. This notably contrasts with the lower rate identified in the present study (n = 1,609). There are likely several reasons for this discrepancy, including evaluation of only men with newly diagnosed PD, extraction of only the first postdiagnosis treatment, specific requirements for look-back and look-forward periods, limited inclusion of Medicare/Medicaid beneficiaries, and other data quality requirements. However, despite these limitations, the data for the 2 treatment modalities are likely as accurate as can be reliably captured in a methodologically sound manner and provide insights from a comparative standpoint. For instance, the ratio of incremental CCH use reflected in the trending analysis in this study was consistent with CCH specialty pharmacy data, showing a 230% increase in the number of CCH vials distributed in 2017 compared with 2014. 
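The back-of-envelope patient estimate in the paragraph above (≈37,500 distributed vials at a mean of 6 vials per treated patient) is simple to reproduce; the function name is illustrative:

```python
# Hedged sketch: reproduce the text's rough estimate of unique CCH-treated
# patients in 2017 from distributed vials and mean vials per patient.

def estimate_patients(vials_distributed, mean_vials_per_patient):
    return vials_distributed // mean_vials_per_patient

print(estimate_patients(37_500, 6))  # 6250, vs the 1,609 captured in the claims cohort
```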
16 Another limitation of insurance claims databases is inability to assess disease course and severity (eg, pretreatment disease history, degree of penile curvature), reasons underlying treatment selection (eg, insurance coverage of specific therapies), or patient-reported outcomes (eg, patient satisfaction). There was also limited representation of Medicare and Medicaid beneficiaries, and therefore, findings may not be generalizable to the uninsured, underinsured, and patients 65 years and older. Finally, some treatments used to manage PD (eg, NSAIDs, omega-3-fatty acids, traction, interferon alpha, verapamil) are available over the counter and/or are not submitted to insurance carriers for reimbursement; therefore, their use may be underestimated in the present study. In addition, although the number of patients filling a prescription was captured in the database, actual utilization was not tracked. Despite these limitations, the current data set is the largest of its kind and presents a nationally representative report of how trends have changed in PD management since the introduction of CCH. The data additionally demonstrate an increasing use of CCH as first-line therapy for PD and are the first to report actual utilization rates of CCH and compare different complication rates between CCH and surgery in a scientifically rigorous manner. Conclusions Results of this retrospective claims database analysis demonstrate increasing use of CCH vs surgery as first-line treatment for newly diagnosed PD in the real-world clinical practice setting. Compared with surgery, CCH treatment of PD was associated with lower rates of penile-related complications, hospitalization for PD-related complications, and use of opioids and NSAIDs within 1 year after treatment. Future investigations are recommended to explore factors influencing PD-prescribing trends and to assess patient satisfaction with PD treatment. 
Statement of authorship Landon Trost: Conceptualization, Methodology, Investigation, Resources, Writing - Review & Editing, Funding Acquisition; Huan Huang: Conceptualization, Methodology, Investigation, Resources, Writing - Review & Editing, Funding Acquisition; Xu Han: Conceptualization, Methodology, Investigation, Resources, Writing - Review & Editing, Funding Acquisition; Chakkarin Burudpakdee: Conceptualization, Methodology, Investigation, Resources, Writing - Review & Editing, Funding Acquisition; Yiqun Hu: Conceptualization, Methodology, Investigation, Resources, Writing - Review & Editing, Funding Acquisition. Acknowledgments Study analyses were conducted by Yi-Chien Lee, MSc, and Yao Cao, MSc, employees of IQVIA. Medical writing assistance was provided by Jackie Raskind, PharmD, KPS Life, Malvern, PA, and editorial assistance was provided by Synchrony Medical Communications, LLC, West Chester, PA, under the direction of the authors. Funding for this assistance was provided by Endo Pharmaceuticals Inc. Supplementary Data Supplementary Figures and Tables. Supplementary data related to this article can be found at https://doi.org/10.1016/j.esxm.2021.100321 .
|
[
"NEHRA",
"STUNTZ",
"DIBENEDETTI",
"JALKUT",
"DEVECI",
"MULHALL",
"LEVINE",
"CARSON",
"CHUNG",
"GELBARD",
"NGUYEN",
"YAFI",
"ZIEGELMANN",
"SUKUMAR",
"CAMPER",
"QUAN"
] |
1f13f5075fba4fca97916abd86f38fd2_Expression analysis of the N-Myc downstream-regulated gene 1 indicates that myelinating Schwann cell_10.1016_j.nbd.2004.07.014.xml
|
Expression analysis of the N-Myc downstream-regulated gene 1 indicates that myelinating Schwann cells are the primary disease target in hereditary motor and sensory neuropathy-Lom
|
[
"Berger, Philipp",
"Sirkowski, Erich E.",
"Scherer, Steven S.",
"Suter, Ueli"
] |
Mutations in the gene encoding N-myc downstream-regulated gene-1 (NDRG1) lead to truncations of the encoded protein and are associated with an autosomal recessive demyelinating neuropathy—hereditary motor and sensory neuropathy-Lom. NDRG1 protein is highly expressed in peripheral nerve and is localized in the cytoplasm of myelinating Schwann cells, including the paranodes and Schmidt–Lanterman incisures. In contrast, sensory and motor neurons as well as their axons lack NDRG1. NDRG1 mRNA levels in developing and injured adult sciatic nerves parallel those of myelin-related genes, indicating that the expression of NDRG1 in myelinating Schwann cells is regulated by axonal interactions. Oligodendrocytes also express NDRG1, and the subtle CNS deficits of affected patients may result from a lack of NDRG1 in these cells. Our data predict that the loss of NDRG1 leads to a Schwann cell autonomous phenotype resulting in demyelination, with secondary axonal loss.
| null |
[] |
0cffe97e68e8401da677568aaf4d55bd_Application of machine learning to predict of energy use efficiency and damage assessment of almond _10.1016_j.indic.2023.100298.xml
|
Application of machine learning to predict of energy use efficiency and damage assessment of almond and walnut production
|
[
"Beni, Mehrdad Salimi",
"Gholami Parashkoohi, Mohammad",
"Beheshti, Babak",
"Ghahderijani, Mohammad",
"Bakhoda, Hossein"
] |
A study was conducted in Shahrekord city, Iran, focusing on improving the production of almond and walnut crops on rural agricultural lands. The gardeners selected for the study shared similar characteristics and production histories. One of the major challenges in producing these crops was the manual harvesting process, which required a significant amount of human labor in the region. To collect data, questionnaires and face-to-face interviews were conducted. The study used machine learning models, specifically artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) models, to predict energy use efficiency and environmental impacts in almond and walnut production. Among the models used, the ANFIS model with a three-level topology was found to be the most accurate in predicting output energy generation and environmental impacts in both almond and walnut production. The R2 values for the testing stage ranged from 0.969 to 0.996 for output energy generation and 0.994 to 0.997 for environmental impacts. The study demonstrated the effectiveness of using machine learning models like ANN and ANFIS in predicting energy use efficiency and environmental impacts in almond and walnut production, which can aid in planning and managing these crops more sustainably and efficiently in the future.
|
1 Introduction There is a growing concern that the agricultural sector's heavy reliance on non-renewable resources, particularly fossil fuels, may eventually lead to a decline in its production capacity. The continuous and excessive use of these inputs could potentially hinder the growth in food production that has been observed in recent decades ( Moghimi et al., 2013 ). The use of limited and non-renewable resources has also caused concerns about environmental problems caused by agricultural operations, such as pollution, deforestation, loss of soil fertility due to erosion and excessive exploitation of the soil, as well as concerns about intensive agriculture ( Taherzadeh-Shalmaei et al., 2023 ). Most of the methods, tools and techniques in the sustainability toolbox have so far focused on relative improvements in environmental performance, but this is often not sufficient to achieve the order of magnitude of improvement required to meet the requirements ( Elalami et al., 2022 ). The limitation of resources and energy has caused special attention to be given to the way of allocation as well as the management of its consumption ( Elyasi et al., 2022 ). In the field of farm energy analysis, different types of energy are considered in different categories. The gradual change of the traditional agricultural system towards the development and advancement of today's technology in terms of agricultural products has caused the agricultural units to apply the correct rules and principles of energy and environment in the present era ( Unakıtan and Aydın, 2018 ). In the cultivation and production of horticultural products, the amount of cultivated area is one of the important factors in estimating the costs and determining the energy efficiency of natural facilities and finally increasing the yield per hectare ( Feyzbakhsh et al., 2018 ). 
The nutritional value of walnuts and almonds, the ever-increasing demand for products in the world and their high selling prices show the profitability of investing in the construction of walnut and almond orchards. Almond and walnut trees are highly resistant to cold and dehydration. The type of soil, the amount of irrigation, sufficient humidity, proper sunlight and the use of animal and chemical fertilizers are important for optimal seed growth. As a result, their cultivation is based on geographical conditions ( Baer et al., 2016 ). The latest statistics show that the cultivated area of walnuts and almonds in Iran is 53504 ha and 75553 ha, respectively. The amount of walnut production is 386976.51 tons per year and almond production is 163568.2 tons per year, which has a significant share in the dry food ration in Iran ( FAO, 2020 ). Reports on the energy flow of inputs and outputs in product production are ways to achieve sustainable agricultural development. The study of energy flow can clarify unknown dimensions of the production process in the system that are not considered in other management methods, including the common methods of studying mechanization or economic methods ( Kaab et al., 2021 ). The energy ratio in different agricultural systems is influenced by the product type and the materials used in its production. Therefore, the energy ratio is crucial in identifying deficiencies and plays a key role in maintaining production stability, optimizing economic benefits, preserving fossil fuel reserves, and reducing air pollution ( Ghritlahre and Prasad, 2018 ). Analyzing input and output energies in production systems in order to design optimal cultivation patterns is not scientifically possible without examining the efficiency and effectiveness of energy consumption ( Taherzadeh-Shalmaei et al., 2021 ). The agricultural industry, which serves as a major food supplier, is intricately linked to the elements of water, soil, and air. 
However, this connection also leads to pollution and alterations in these environments, and there has lately been a growing focus on assessing the impacts of these changes ( Younis et al., 2021 ). Each agricultural product consumes specific inputs in specific amounts according to the environmental and geographical conditions of a country and the type of product ( Tricase et al., 2018 ). Life cycle assessment (LCA) studies are becoming more commonly utilized to communicate scientific evidence of enhanced environmental performance. These studies typically involve comparing new product designs with existing products or a similar reference to demonstrate that the eco-efficiency of a system's products has been incrementally improved or is leading in terms of eco-efficiency performance. This approach allows for a more objective and rigorous evaluation of environmental impact and helps to promote sustainable practices ( Wang et al., 2021 ). Modern techniques such as hybrid neural-genetic network models and artificial neural networks (ANN) are increasingly used in conjunction with traditional statistical methods such as multivariate regression to model and predict the yield of agricultural and horticultural crops. These advanced methods offer more precise and accurate predictions, enabling farmers to optimize crop production and reduce waste ( Nabavi-Pelesaraei et al., 2013 ). ANN methods have become increasingly popular in modeling due to their high accuracy and efficiency. These networks are designed to mimic the structure and function of the human brain, with the ability to learn, generalize, and make decisions. By emulating the neural network of the brain, these models can predict outcomes and make decisions with a high degree of accuracy, making them a valuable tool in fields such as agriculture where precise predictions are necessary for optimal yield ( Renno et al., 2016 ). The underlying principle of these methods is to replicate the operation of the human brain. 
One such artificial intelligence technique that combines the strengths of both ANN and fuzzy systems is the adaptive neuro-fuzzy inference system (ANFIS). By integrating the capabilities of these two models, ANFIS can provide improved accuracy and flexibility in predicting outcomes and making decisions in various fields, including agriculture ( Sefeedpari et al., 2016 ). The ANFIS system is structured as a network and shares similarities with the ANN method. It models input variables using input membership functions and associated parameters, and then utilizes output membership functions and related parameters to predict output variables. Fuzzy systems, which have gained particular prominence in various fields, including agriculture, are integrated into this model. This approach enables ANFIS to provide more precise and accurate predictions by combining the strengths of both fuzzy systems and ANN ( Kaab et al., 2019 ). In recent years, there has been growing interest in using low-salinity waterflooding (LSWF) to improve oil recovery. This method involves injecting diluted water into an oil reservoir to enhance recovery; by changing the wettability of the reservoir rock, LSWF can increase the amount of oil recovered. However, predicting reservoir recovery using traditional simulators can be time-consuming and expensive. One study therefore introduced a feed-forward neural network approach to predict low-salinity waterflooding efficiency in a heterogeneous reservoir. The model considers various input parameters such as water dilution, mobility ratio, reservoir heterogeneity, permeability anisotropy ratio, API gravity, and production water cut. It was developed using 20,000 simulated data points and validated using a real carbonate reservoir in Wyoming. The neural network parameters were optimized through sensitivity analyses, and the model's physical behavior was validated through trend analysis. 
The performance of the model was evaluated using statistical indices, with low average absolute percentage error values indicating its accuracy. It is important to note that this model is specifically designed for single-stage, low-saline waterfloods using a 5-spot pattern ( Kalam et al., 2021b ). A subsequent study developed a novel ANN model with two hidden layers to predict the performance of a 5-spot-pattern waterflood in a heterogeneous reservoir. The model accurately estimates movable oil recovery efficiency by considering various factors such as the permeability variation coefficient, mobility ratio, permeability anisotropy ratio, production water cut, wettability indicator, and oil/water density ratio. The model achieved a low mean absolute percentage error (MAPE) for both training and testing data. Comparative studies with other soft-computing models showed that the ANN model outperformed them in terms of accuracy and computational efficiency. Validation of the model using real field cases demonstrated good agreement with actual data. Use of the ANN model can save significant computational time compared with using a reservoir simulator for waterflood performance forecasting ( Kalam et al., 2022 ). Researchers have also developed a hybrid model called ANFIS-GA-PSO to predict the shear strength of concrete beams. Accurately predicting shear strength is crucial for assessing the ability of concrete beams to withstand external forces such as floods and earthquakes. The model combines ANFIS, genetic algorithms (GA), and an extreme learning machine (ELM) for preliminary analysis, and takes into account factors such as the yield strength of horizontal reinforcement, the ratio of shear span to concrete compressive strength, effective depth, and depth-to-width ratio. The results showed that the ANFIS-GA hybrid model outperformed the ELM model in terms of accuracy, achieving an RMSE of 0.546 and an r value of 0.912, compared with an RMSE of 0.888 and an r value of 0.833 for the ELM model. 
Additionally, the ELM model demonstrated faster training performance. The study identifies the key factors determining shear strength in reinforced concrete beams, regardless of the presence of transverse reinforcement ( Li et al., 2023 ). The significance of this work lies in its focus on addressing the challenges and concerns associated with the production of agricultural goods, particularly in relation to the heavy reliance on non-renewable resources such as fossil fuels. The ongoing use of these inputs poses a threat to the agricultural sector's production capacity and raises environmental concerns such as pollution, deforestation, and soil degradation. While previous methods and techniques have focused on relative improvements in environmental performance, they have not been sufficient to meet the magnitude of improvement required. Therefore, there is a need to allocate and manage resources and energy more effectively. The study specifically examines the cultivation and production of walnut and almond, which have high nutritional value and global demand. By analyzing the energy flow and input-output ratios in these production systems, the research aims to design optimal cultivation patterns that ensure production stability, optimize economic benefits, preserve fossil fuel reserves, and reduce air pollution. To achieve these goals, the study utilizes advanced techniques like hybrid neural-genetic network models and ANN, which offer more precise and accurate predictions for optimizing crop production. The ANFIS is also employed, combining the strengths of fuzzy systems and ANN to provide even more precise and accurate predictions. The results of this study will contribute to the understanding of energy consumption, environmental emissions, and the performance of ANN and ANFIS models in the context of walnut and almond production. 
By identifying bottlenecks and proposing solutions, the research aims to harness the existing potentials in different regions to reduce energy consumption and environmental impact. Overall, this work has implications for sustainable agricultural development and the promotion of eco-efficient practices in the industry. 2 Materials and methods 2.1 Sample collection conditions In order to apply the mentioned techniques for better production of almonds and walnuts, the rural agricultural lands of Shahrekord city, Iran were selected. The climate of the region is variable, which allows different agricultural and horticultural products to be grown in different areas. The region has both very hot and very cool conditions, supporting fruit trees that require a warm climate, such as pomegranates, as well as trees resistant to cold weather, such as almonds and walnuts. Gardeners used machines to plant seedlings when establishing orchards. Irrigation of the trees is done by the terrace method, and electricity is needed for irrigation. Based on statistics collected from reliable sources, the sample size was calculated using Cochran's formula ( Cochran, 1977 ). The selected gardeners had common characteristics, and their production histories were comparable. A notable point in the production of almonds and walnuts is their manual harvesting, which requires substantial human labor in the region. The required information was collected through questionnaires and face-to-face interviews. In Cochran's formula, n is the required sample size; N is the number of farms in the target community, which is equal to 60; z is the reliability coefficient (equal to 1.96, corresponding to a 95% confidence level); p is the proportion of the population with a given trait; q = 1 − p; and d is the permissible deviation of the error ratio from the population average. 
Considering the number of farmers in the region and taking p = q = 0.5 and d = 0.05, the sample size calculated in this study was 50 farms, which were chosen randomly. (1) n = (z² p q / d²) / (1 + (1/N)(z² p q / d² − 1)) 2.2 Influential inputs in energy consumption The inputs and outputs of almond and walnut production were evaluated, taking into account factors such as human labor, machinery, diesel fuel, gasoline fuel, chemical fertilizers, farmyard manure, biocides, and electricity. To quantify the equivalent energy of these inputs and outputs, the energy values of the different sources were used as a reference point. The energy coefficients of the inputs and outputs are shown in Table 1 . This approach enabled a more accurate assessment of the energy efficiency of almond and walnut production and helped to identify areas where improvements could be made to reduce energy consumption and optimize resource use ( Ghorbani et al., 2011 ). To determine the input and output energy values, the consumption of each resource was multiplied by its equivalent energy value. This part of the research incorporates a range of energy indicators that provide a comprehensive understanding of the agricultural production systems. These indicators include energy ratio, energy productivity, net energy, and specific energy, which are considered key measures in the energy analysis process. By utilizing these indicators, the study can accurately assess the energy efficiency of almond and walnut production and identify areas for improvement to optimize resource utilization ( Beigi et al., 2016 ). To estimate the fuel consumption of machines, all agricultural operations were first separated into different stages. Then, with the start of each operation, the operating time of the different machines on each farm, from the beginning to the end of the walnut and almond production stages, was recorded separately. 
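As an illustrative check of Cochran's formula (Eq. 1), a few lines of Python can plug in the values the authors report (N = 60 farms, z = 1.96, p = q = 0.5, d = 0.05). This is a sketch of the formula only, not the authors' calculation; it yields roughly 52, slightly above the final sample of 50 reported, which suggests rounding or marginally different inputs on the authors' side.

```python
# Cochran's formula (Eq. 1) with a finite-population correction,
# using the values reported in the study: N = 60, z = 1.96, p = q = 0.5, d = 0.05.
def cochran_sample_size(N, z=1.96, p=0.5, d=0.05):
    q = 1 - p
    n0 = (z ** 2) * p * q / (d ** 2)   # unadjusted sample size z^2*p*q/d^2
    return n0 / (1 + (n0 - 1) / N)     # finite-population correction

n = cochran_sample_size(N=60)
print(round(n, 1))  # ~52; the paper reports a final sample of 50
```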
According to the work experience of the machine operators during past years, the amount of fuel consumed was calculated based on the following relationship ( Naseri et al., 2020 ): (2) F_T = t × F_G, where F_T is the fuel needed to carry out agricultural operations per hectare (L ha−1), t is the duration of the machinery operation (h ha−1), and F_G is the fuel required by the tractor in 1 h of operation (L h−1). Researchers consider the energy ratio a criterion of technological progress and regard efficiency indicators, namely the energy ratio, net energy gain, and energy productivity, as important in evaluating and analyzing energy consumption in the agricultural sector. The following equations were used to estimate the energy indices ( Elalami et al., 2022 ): (3) Energy use efficiency = Output energy (MJ) / Input energy (MJ); (4) Energy productivity = Production (kg) / Input energy (MJ); (5) Specific energy = Input energy (MJ) / Production (kg); (6) Net energy = Output energy (MJ) − Input energy (MJ). 2.3 LCA Carbon dioxide and other greenhouse gas emissions are a major environmental concern, particularly in the pursuit of sustainable development. The primary greenhouse gases include carbon dioxide (CO 2 ), methane (CH 4 ), and nitrous oxide (N 2 O). These gases contribute to the greenhouse effect, which traps heat in the atmosphere and leads to climate change. Reducing greenhouse gas emissions is essential to mitigating the damaging effects of climate change and achieving a more sustainable future ( Nabavi-Pelesaraei et al., 2019 ). LCA is a valuable tool for evaluating the environmental impacts of products throughout their entire life cycle, from resource extraction and material production to parts production, final product assembly, and product use, as well as management after disposal. 
LCA enables a comprehensive analysis of the environmental effects of a product, taking into account all stages of its life cycle and identifying areas where improvements can be made to reduce its environmental impact. By evaluating a product's life cycle, LCA can support the development of more sustainable products and promote environmentally responsible practices ( Houshyar and Grundmann, 2017 ). The LCA process involves four distinct steps. The first step is to define the goal and scope of the assessment. It is essential to define the goal and scope of the LCA clearly and ensure that it aligns with the intended application. For instance, in this study, the aim is to compare the environmental emissions associated with dried fruit production systems. The scope of the study, including the system boundary and level of detail, is influenced by the topic and intended use of the LCA. The depth and breadth of the LCA can vary based on the specific purpose of the assessment ( Zeng et al., 2011 ). The selected functional unit defines the reference flow. Comparison between systems should be based on similar functions, and the same functional units can be quantified in the form of reference flows. As an alternative, systems related to performing this function may be added to the boundary of other systems to make the systems more comparable. The chosen processes should be described and documented ( de Vries and de Boer, 2010 ). One ton of product is considered as the functional unit. The second stage is the life cycle inventory. Fig. 1 describes the boundary of the study system. The inventory analysis stage includes the inventory of input/output data related to the studied system; it involves collecting the data necessary to meet the defined objectives of the study ( Wowra et al., 2021 ). The third stage of the LCA process is known as Life Cycle Impact Assessment (LCIA). 
The LCIA stage is designed to provide additional information that can help evaluate the results of the life cycle inventory of a product system. This information is necessary to better understand the environmental significance of the product system and its potential impacts. The LCIA stage involves a range of impact categories, such as climate change, water use, and human toxicity, which provide a comprehensive overview of the potential environmental impacts of a product system. The results of the LCIA stage enable decision-makers to identify areas where improvements can be made to reduce environmental impact and promote sustainability ( Renouf et al., 2010 ). The final stage of the LCA process is life cycle interpretation. In this stage, the results of the life cycle inventory (LCI) and life cycle impact assessment (LCIA) are discussed, and conclusions, recommendations, and decisions are made based on the defined goal and scope of the study. The life cycle interpretation stage involves evaluating the results of the LCI and LCIA and identifying areas where improvements can be made to reduce environmental impact and increase sustainability. The outcomes of this stage provide valuable information for decision-makers and stakeholders, enabling them to make informed decisions regarding product design, production, and management, with the goal of minimizing the environmental impact of products and promoting sustainable practices ( Mostashari-Rad et al., 2021 ). 2.4 Modeling the performance of walnut and almond 2.4.1 ANN model Optimization problems involve two stages: modeling and planning. The modeling stage involves forming the objective function, constraints, and limitations, while the planning stage involves determining the optimal conditions to reach the ideal solution ( Yang et al., 2022 ). ANN are composed of interconnected neurons that receive input data and information, and process it to generate an output. 
ANNs are typically structured in layers, with the input layer receiving the data, the middle layers acting as hidden layers, and the output layer providing the final output. The neurons in each layer are connected to those in the adjacent layers, and the strength of these connections is determined by a set of weights. By adjusting these weights, ANNs can learn from input data and improve the accuracy of their predictions, making them a useful tool in a variety of applications, including agriculture ( Nabavi-Pelesaraei et al., 2014 ). The learning process in the human brain also involves strengthening or weakening connections between nerve cells, which is modeled in ANNs by setting a parameter called a weight ( Mohammadi and Omid, 2010 ). Different models of ANNs target specific learning and adaptation capabilities of the human brain ( Pahlavan et al., 2012 ). The Multi-Layer Perceptron (MLP) model is a basic neural model that simulates the transmission function of the human brain and is sometimes called a feedforward network ( Kaul et al., 2005 ). In the human brain, neurons process input and transfer the result to other cells until a specific outcome is achieved, which can lead to decision-making, processing, thinking, or action ( Rahman and Bala, 2010 ). ANNs are a powerful tool in nonlinear modeling, allowing for the establishment of connections between input and output parameters through appropriate weights and activation functions ( Kalam et al., 2021a ). In this particular study, the researchers utilized Matlab software to implement and train a back-propagation feed-forward neural network. They explored various activation functions, numbers of neurons, and numbers of hidden layers. 
The ANNs were constructed using 11 input variables (including energy equivalents of human labor, machinery, diesel fuel, gasoline fuel, chemical fertilizers, farmyard manure, biocides, and electricity) and 4 outputs (specifically output energy and three environmental impact categories) for almond production. Similarly, for walnut production, the researchers built networks with ten input variables (including energy equivalents of human labor, machinery, diesel fuel, chemical fertilizers, farmyard manure, biocides, and electricity) and the same 4 outputs. To train the ANN in a supervised manner, the datasets were randomly divided into three subsets: training (70%), testing (15%), and validation (15%). This division allowed for the evaluation of the network's performance on unseen data during testing and validation phases. 2.4.2 ANFIS model The ANFIS model enables fuzzy systems to employ the adaptive backpropagation error training algorithm for parameter training. This algorithm fine-tunes the parameters of the fuzzy system by minimizing the difference between the actual output and the desired output. The ANFIS structure is comprised of a set of if-then rules that can be utilized to model input-output data. By combining the strengths of fuzzy systems and ANN, ANFIS can provide precise and accurate predictions, making it a valuable tool in various fields, including agriculture ( Mohaddes and Fahimifard, 2018 ). The combination of fuzzy systems, which are based on logical rules, and ANN, which have the ability to extract knowledge from numerical data, allows for the utilization of both existing information and human knowledge in constructing models. The neural-fuzzy network approach divides the available data into three equal parts: one-third for training the network, one-third for testing the network, and one-third for validation. This approach enables the network to learn from the training data and validate its accuracy using the testing and validation data sets. 
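The setup described above, a back-propagation feed-forward network trained on a random 70/15/15 train/test/validation split, can be sketched in a few dozen lines. This is a minimal pure-Python illustration, not the authors' Matlab implementation: the network sizes, synthetic data, and learning rate are all invented for the example (3 inputs and 1 output stand in for the 11 energy inputs and 4 outputs used in the study).

```python
# Minimal back-propagation feed-forward network: one tanh hidden layer,
# linear output, stochastic gradient descent, 70/15/15 data split.
import math
import random

random.seed(0)

# Synthetic "farm" records standing in for the study's energy inputs/outputs.
data = []
for _ in range(200):
    x = [random.uniform(0, 1) for _ in range(3)]
    y = 0.5 * x[0] + 0.3 * x[1] - 0.2 * x[2]
    data.append((x, y))

random.shuffle(data)
n = len(data)
train = data[: int(0.70 * n)]          # 70% training
test = data[int(0.70 * n): int(0.85 * n)]   # 15% testing
valid = data[int(0.85 * n):]           # 15% validation

H = 4  # hidden neurons (illustrative)
w1 = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w1[j][i] * x[i] for i in range(3)) + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse(dataset):
    return sum((forward(x)[1] - y) ** 2 for x, y in dataset) / len(dataset)

lr = 0.1
before = mse(valid)
for epoch in range(200):
    for x, y in train:
        h, p = forward(x)
        err = p - y
        for j in range(H):
            # Back-propagate the output error through each hidden neuron.
            grad_h = err * w2[j] * (1 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            for i in range(3):
                w1[j][i] -= lr * grad_h * x[i]
            b1[j] -= lr * grad_h
        b2 -= lr * err

after = mse(valid)
print(after < before)  # validation error should drop after training
```

The held-out test and validation subsets mirror the paper's evaluation of the network on unseen data; in practice the exploration of activation functions and layer counts mentioned above would be wrapped around this training loop.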
By combining the strengths of fuzzy systems and ANN, this approach can provide more precise and accurate predictions, making it a valuable tool in various fields, including agriculture ( Buj-Corral et al., 2023 ). A fuzzy controller, like a non-fuzzy controller, performs control based on the feedback it receives from the system; the difference lies in the method of inference. The fuzzy controller operates on a fuzzy inference system grounded in fuzzy logic, whereas a non-fuzzy controller's inference is based on binary (Aristotelian) logic ( Amid and Mesri Gundoshmian, 2017 ). In fuzzy logic, a proposition may be "completely true", "completely false", or, in most cases, "partly true and partly false"; in other words, the truth value of a proposition can be any number between zero and one. For a specific process, the purpose of building a fuzzy inference system is to determine the fuzzy rules governing that process, whether the aim is to design a controller or to estimate a variable ( Yilmaz and Mert, 2023 ). In fuzzy rules, two components are important: 1) the general form of the rule, and 2) the parameters of the rule (including the shapes and parameters of the membership functions of the fuzzy terms involved). There are two general methods for determining these components: 1) expert opinion on the process, in which both the general form and the parameters of the rules are determined from the experience and judgment of an expert; and 2) available data from the process, in which the data are treated as training data and the general form and parameters of the rules are determined through optimization models so that the output of the fuzzy rules best matches the observed output values for the given inputs ( Mohaddes and Fahimifard, 2018 ). 
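The kind of rule base that ANFIS tunes can be illustrated with a forward pass of a first-order Sugeno fuzzy system. The sketch below is purely illustrative: the two rules, the Gaussian membership-function parameters, and the linear consequents are all made up, whereas in ANFIS these parameters would be fitted to training data by the adaptive back-propagation procedure described above.

```python
# Forward pass of a tiny first-order Sugeno fuzzy inference system
# (two hypothetical rules on one input; all parameters invented).
import math

def gaussian(x, c, sigma):
    """Gaussian membership: degree to which x belongs to a fuzzy set."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def sugeno(x):
    # Layer 1: fuzzification, e.g. "low" vs "high" input level
    w_low = gaussian(x, c=0.0, sigma=1.0)
    w_high = gaussian(x, c=2.0, sigma=1.0)
    # Layers 2-3: firing strengths, normalized
    total = w_low + w_high
    nw_low, nw_high = w_low / total, w_high / total
    # Layer 4: first-order (linear) rule consequents f_i = p_i * x + r_i
    f_low = 0.5 * x + 1.0
    f_high = 2.0 * x - 0.5
    # Layer 5: weighted sum gives the crisp output
    return nw_low * f_low + nw_high * f_high

print(round(sugeno(0.0), 3))  # → 0.821 (dominated by the "low" rule)
print(round(sugeno(2.0), 3))  # → 3.321 (dominated by the "high" rule)
```

Each output is a blend of the rule consequents weighted by how strongly the input matches each fuzzy set, which is the mechanism the clustering-derived ANFIS sub-networks in this study exploit.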
In this research, the clustering method was employed to analyze almond production by classifying 11 input variables into four clusters initially, and subsequently into two clusters. Fig. 2 illustrates the development of a three-level ANFIS with a total of eight ANFIS sub-networks. Similarly, for walnut production, ten input variables were classified first into five clusters and then into two clusters using the clustering method. Fig. 3 displays the construction of a three-level ANFIS with a total of eight ANFIS sub-networks. 2.5 Performance assessment of models Equations (7)–(9) define the statistical metrics used to assess model performance: the coefficient of determination (R 2 ), MAPE, and root mean square error (RMSE) ( Kaab et al., 2019 ). (7) $R^2 = 1 - \frac{\sum_{i=1}^{n} (P_i - A_i)^2}{\sum_{i=1}^{n} A_i^2}$ (8) $MAPE = \frac{1}{n} \sum_{i=1}^{n} \frac{|P_i - A_i|}{A_i} \times 100$ (9) $RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (P_i - A_i)^2}$ where $P_i$ denotes the predicted value, $A_i$ the actual value, and $n$ the number of observations. 3 Results and discussion 3.1 Energy analysis The comparison of energy consumption and production for almond and walnut products is presented in Table 2 . The input and output energies were estimated from the consumed inputs. Almond production consumes a total energy of 29430.56 MJ ha −1 , while walnut production consumes 15309.28 MJ ha −1 . It is worth noting that walnut production generates a large energy output (54901.05 MJ ha −1 ) relative to its input. This indicates that walnut production can be significant depending on the availability of input supplies in the region. Furthermore, it is important to analyze input consumption separately, as illustrated in Fig. 4 . In almond production, nitrogen accounts for over 30% of the energy share, whereas diesel fuel contributes 21.19% to the energy share in walnut production. To enhance the efficiency of nitrogen use, it is crucial to apply fertilizer at the appropriate time and in the correct amount based on the plant's needs during the growing season.
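The three metrics in Eqs. (7)–(9) translate directly into code. A minimal sketch with predicted values P and actual values A; note that Eq. (7) as given normalizes by the sum of squared actual values rather than the variance-based form of R².

```python
import math

def r_squared(pred, actual):
    # Eq. (7): 1 - sum((P_i - A_i)^2) / sum(A_i^2)
    ss_err = sum((p - a) ** 2 for p, a in zip(pred, actual))
    return 1.0 - ss_err / sum(a ** 2 for a in actual)

def mape(pred, actual):
    # Eq. (8): mean absolute percentage error
    n = len(actual)
    return 100.0 / n * sum(abs(p - a) / a for p, a in zip(pred, actual))

def rmse(pred, actual):
    # Eq. (9): root mean square error
    n = len(actual)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / n)

P, A = [2.0, 4.0, 6.0], [2.0, 5.0, 6.0]
print(round(mape(P, A), 2), round(rmse(P, A), 4))  # 6.67 0.5774
```

These are the same quantities reported for the ANN and ANFIS models later in the text, so the sketch also documents exactly how those tables can be reproduced from predictions and observations.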
This practice can help reduce the energy required for machinery and ensure accurate and effective use of nitrogen fertilizer. Gündoǧmuş (2006) has reported human labour and machinery consumption to be 1305.19 and 37.26 h ha −1 in walnut orchards. Also, diesel fuel consumption in pistachio production has been reported to be in the range of 41.82–48.54 L ha −1 ( Külekçi and Aksoy, 2013 ). In Table 3 , a comparison of energy indices for almond and walnut production is presented. The energy ratio for walnut production is considered to be very acceptable. On the other hand, almond cultivation has a higher energy ratio, indicating that more energy is available to consumers from growing almonds. However, when it comes to energy productivity, walnut production outperforms almond cultivation. This means that less energy is required per kilogram of crop in walnut production compared to almonds. Furthermore, the energy intensity results show the opposite trend compared to energy productivity. Walnut production has the highest net energy level, with a positive result of 39591.76 MJ ha −1 . This indicates that walnut production generates a surplus of energy, suggesting a more efficient use of resources compared to almond cultivation. A study conducted by Khanali et al. (2021) explored the energy consumption and environmental emissions associated with walnut production using the imperialist competitive algorithm. The findings revealed that the total input and output energy in walnut production were calculated to be 31015 and 27200 MJ ha −1 , respectively. Furthermore, it was observed that gasoline, accounting for 40% of the energy consumed, was the primary contributor. Additionally, the study determined that the energy use efficiency in walnut production was 0.88, indicating inefficiency in energy utilization. Fig. 5 illustrates a comparison of the energy inputs in almond and walnut production.
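The indices discussed above follow from simple ratios of the quoted figures. Using the walnut numbers stated in the text (input 15309.28 and output 54901.05 MJ ha −1 ) reproduces the reported net energy to within rounding; the helper function is an illustration, not the authors' code.

```python
def energy_indices(energy_in, energy_out):
    """Energy ratio (output/input, dimensionless) and net energy
    (output - input, MJ/ha), as commonly defined in farm energy studies."""
    return {"energy_ratio": energy_out / energy_in,
            "net_energy": energy_out - energy_in}

# Walnut figures quoted in the text (MJ/ha)
walnut = energy_indices(15309.28, 54901.05)
print(round(walnut["net_energy"], 2))    # 39591.77 (text rounds to 39591.76)
print(round(walnut["energy_ratio"], 2))  # 3.59
```

An energy ratio well above 1 is what the text calls "very acceptable": the orchard returns several megajoules of crop energy for every megajoule invested.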
The figure clearly shows that almond production requires more energy input compared to walnut production. Specifically, the highest energy consumption is attributed to the use of nitrogen fertilizer. Based on the findings, it was observed that walnut production exhibits better energy productivity in terms of final crop output compared to almond production, indicating that it is a more efficient use of energy. 3.2 LCA analysis The data collection and analysis step is an essential part of the LCA process, and it involves collecting accurate data that is relevant to the defined functional unit. Table 4 presents the on-farm emissions of almond and walnut production. Data may come from public sources or other reference sources, and it should account for all activities within the system boundary, including upstream and downstream processes. The inventory flow can be divided into various categories based on the scope of the analysis. Data quality should be assessed, and uncertainties should be identified and addressed in the analysis. In the case of walnut and almond production, on-farm emissions are primarily due to the use of diesel fuel and chemical fertilizers. However, the level of diesel fuel pollutants is lower in walnut production due to reduced diesel fuel usage, while almond production tends to have a higher prevalence of contaminants associated with diesel fuel. It is crucial to collect accurate data on the usage of these inputs to evaluate their environmental impact accurately and identify ways to reduce their impact in the future. The results of the damage assessment for various almond and walnut production scenarios using the ReCiPe 2016 method are presented in Table 5 . The data indicates that the resources category has the highest environmental impact, while human health has the lowest. 
Almond production has been more extensively studied than walnut production, and research has shown that almond production has higher greenhouse gas emissions compared to walnut production. Furthermore, the resources category has a more significant impact on almond production than on walnut production in terms of pollutants. These findings underscore the importance of carefully evaluating the environmental impact of different agricultural practices and identifying ways to reduce their impact to promote sustainable agriculture. Another study by Mostashari-Rad et al. (2021) using the ReCiPe 2016 method found that citrus, hazelnut, kiwifruit, tea, and watermelon had a higher impact in the resources category than in the ecosystem and human health categories. In terms of greenhouse gas emissions, Litskas et al. (2017) , Bosco et al. (2011) , and Point et al. (2012) reported values of 0.155 kg CO2eq, 0.15–0.3 kg CO2eq, and 0.8 kg CO2eq, respectively. Nutrient management was found to be a significant contributor to ozone layer depletion, global warming, freshwater aquatic ecotoxicity, and acidification, accounting for 49%, 65%, 79%, and 92% of the impacts, respectively. Fig. 6 depicts the contribution of each input to emissions in almond and walnut production. Both production methods have significant direct emissions that have a considerable impact on human health and the ecosystem, accounting for over 60% of emissions in both crops. Among the inputs, nitrogen fertilizer has the most significant impact on resources, accounting for over 40% of the total impact. Proper nitrogen fertilizer management is crucial for the growth and yield of crops, and researchers and farmers should prioritize its appropriate use. In many parts of the world, regulations are in place to control the use of chemical fertilizers in agriculture to prevent excessive amounts of elements from entering the environment.
This not only serves to protect the environment and human health but also has economic benefits such as cost reduction, improved efficiency, and resource conservation. Steenwerth et al. (2015) have proposed two fertilizer management methods: mineral fertilizer and compost fertilizer. To minimize environmental impacts and ensure sustainable agricultural production, it is essential to consider the appropriate use of fertilizers and adopt efficient farming practices. 3.3 Modeling analysis In this study, various statistical measures and neural network models were employed to predict output energy generation and environmental impact categories associated with almond and walnut production. The models were developed using feed-forward back-propagation neural networks, with the Levenberg-Marquardt training algorithm utilized for model training. Sigmoid and linear functions were used as activation functions in the hidden and output layers, respectively. By utilizing these models, decision-makers can make informed decisions to optimize resource utilization, reduce environmental impact, and promote sustainable agricultural practices. The ANN models developed for almond and walnut production had different structures. The outcomes of various configurations of models in distinct almond and walnut productions are displayed in Table 6 . The predictive ANN model for almond production had an 11-8-9-4 structure, while the best ANN structure for walnut production was 10-8-3-4. The performance of the models was evaluated using various statistical measures, including MAPE, RMSE, and R 2 . These measures enabled the assessment of the accuracy of the models and the identification of areas where improvements can be made.
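As a rough check on model complexity, the number of trainable parameters implied by the reported topologies can be counted (weights plus one bias per neuron in each fully connected layer). This calculation is an illustration, not something reported in the paper.

```python
def n_parameters(layer_sizes):
    """Count weights and biases in a fully connected feed-forward network,
    e.g. [11, 8, 9, 4] = 11 inputs, two hidden layers, 4 outputs."""
    return sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))

print(n_parameters([11, 8, 9, 4]))  # 217 (almond 11-8-9-4 topology)
print(n_parameters([10, 8, 3, 4]))  # 131 (walnut 10-8-3-4 topology)
```

Such counts give a quick sense of how much data the Levenberg-Marquardt training step must constrain: the almond network has noticeably more free parameters than the walnut one.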
By utilizing different ANN structures and statistical measures, decision-makers can make informed decisions to optimize resource utilization, reduce environmental impact, and promote sustainable agricultural practices in both almond and walnut production. The R 2 values ranged from 0.948 to 0.988 for overall performance in almond production and 0.938 to 0.987 for overall performance in walnut production. Elhami et al. (2016) utilized a two-hidden-layer ANN model to forecast the environmental impact categories and yield of lentil cultivation, while Chen and Jing (2017) predicted yield using ANN and calculated the MAPE, RMSE, and R 2 to be 10.38%, 979 kg/ha, and 0.61, respectively, during the testing phase. Table 7 presents the results of a three-level ANFIS topology utilized to predict environmental impacts and output energy generation in almond production. The ANFIS models employed different membership functions and linear membership functions to optimize the distribution of input and output layers, respectively. The results demonstrate the effectiveness of ANFIS in accurately predicting environmental impacts and output energy generation in almond production. By utilizing ANFIS, decision-makers can make informed decisions to optimize resource utilization, reduce environmental impact, and promote sustainable agricultural practices. The third-level ANFIS model (ANFIS 7) had an R 2 value of 0.969 for output energy and R 2 values of 0.996 and 0.994 for ecosystem and human health impacts, respectively. The final ANFIS model had an RMSE of 0.241, indicating its good performance in predicting output energy generation in almond production. This finding is consistent with previous studies by Mousavi-Avval et al. (2017) and Nabavi-Pelesaraei et al. (2018) , who used a multilayer ANFIS to model canola production and output energy in paddy production, respectively.
The hybrid learning method used in the ANFIS model proved to be effective in mimicking input/output relationships and achieving high accuracy. The results of a two-level ANFIS topology model used to predict output energy and environmental impacts from input energies in walnut production are presented in Table 8 . The ANFIS model achieved an R 2 value of 0.975 for output energy in the two-level ANFIS model (ANFIS 8), and the final ANFIS model had an RMSE of 0.128. For environmental impacts, the two-level ANFIS model (ANFIS 8) had an R 2 value of 0.997 for human health impacts, and the final ANFIS model had an RMSE of 0.154, indicating its good performance in predicting output energy generation in walnut production. The results demonstrate that accurate prediction of output energy and environmental impact modeling can be achieved for almond and walnut production, which can be useful for future planning. Zhang et al. (2019) conducted a study on predicting almond yield at the orchard level using a machine learning approach in California. The results showed a strong agreement between the predicted yield and independent yield records. The predictions for both the early season (March) and mid-season (June) had an average R 2 of 0.71. The study also identified key factors that influenced yield based on the modeling results. It was found that almond yield generally increased with orchard age until about 7 years old. Additionally, higher long-term mean maximum temperatures during April–June were found to enhance yield in southern orchards, while a larger amount of precipitation in March reduced yield, particularly in northern orchards. Remote sensing metrics, such as annual maximum vegetation indices, were also found to be important variables for predicting yield potential. Although these results are promising, further refinement of the model is necessary.
This includes the need for larger data sets, incorporation of additional variables, and implementation of different methodologies to enable the model to serve as a fertilization decision support tool for growers. The study demonstrates the potential of automatic almond yield prediction in assisting growers with adaptive nitrogen management, compliance with regulatory requirements, and ensuring the sustainability of the industry. 3.4 Sensitivity analysis Sensitivity analysis is a method used to assess the impact of changes in input variables on the output or outcome of a model, system, or process. It helps to understand how sensitive the results are to variations in the input parameters. By systematically varying the input values within a certain range, sensitivity analysis allows for the identification of key factors that significantly influence the output. This analysis provides valuable insights into the relationships between inputs and outputs, enabling decision-makers to make informed decisions and optimize the performance of the system or model under study ( Saltelli et al., 2019 ). In order to assess the influence of different inputs on our model's predicted yield, we calculated the correlation between the yield and each input factor. Fig. 7 illustrates the proportion of each input factor's contribution to the output factor in our developed model. The results revealed that machinery had the most significant impact on almond and walnut yield, followed by diesel fuel, nitrogen, and human labor. Conversely, the results indicated a negative relationship between gasoline fuel and manure with almond yield, as well as sulfur and manure with walnut yield. This suggests that the excessive utilization of these energy resources in the studied region has a detrimental effect on yield. In a study conducted by Royan et al. (2012) , the energy consumption in peach production was analyzed.
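The input-yield correlations used in a sensitivity screen of this kind can be computed with a plain Pearson coefficient. The toy data below are invented, chosen only to show the sign convention (positive for machinery-like inputs, negative for manure-like ones).

```python
import math

def pearson_r(x, y):
    """Pearson correlation between an input factor and the yield."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented example data (not from the study)
machinery = [10, 12, 14, 16, 18]        # MJ/ha, rising across farms
manure = [9, 8, 7, 6, 5]                # MJ/ha, falling across farms
crop_yield = [2.0, 2.4, 2.7, 3.1, 3.5]  # t/ha

print(pearson_r(machinery, crop_yield) > 0)  # True (positive driver)
print(pearson_r(manure, crop_yield) < 0)     # True (negative driver)
```

Ranking inputs by the magnitude and sign of this coefficient is one simple way to arrive at statements like "machinery had the most significant impact, while manure was negatively related to yield."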
Through a sensitivity analysis, it was determined that a one MJ increase in energy inputs such as human labor, machinery, diesel fuel, chemical fertilizers, biocides, farmyard manure, irrigation water, and electricity resulted in yield changes of 11.31 kg, −2.8 kg, 1.33 kg, 0.29 kg, −0.003 kg, 0.54 kg, and 0.14 kg, respectively. According to Rafiee et al. (2010) , the research on apple production revealed the following values for the amounts of MPP (marginal physical productivity) associated with various inputs: 1.30 for human labor, 2.43 for water used for irrigation, 0.71 for biocides, 0.32 for electricity, 0.06 for chemical fertilizers, 0.62 for farmyard manure, 0.40 for diesel fuel, and 0.53 for machinery. 3.5 Comparison of ANN and ANFIS models Fig. 8 illustrates that the ANFIS model outperformed the ANN model in predicting output energy and environmental impacts in almond and walnut production, as indicated by the higher R 2 obtained by the ANFIS model. Both models were able to predict environmental impacts and output energy generation with high accuracy, but the ANFIS model required only two computations, while the ANN model required four networks, which increased the computational time. Therefore, the ANFIS model is considered to be better than the ANN model for predicting environmental impacts and output energy generation in almond and walnut production. Overall, the results of both ANN and ANFIS models in almond and walnut production showed that ANFIS models achieved higher accuracy than ANN models in forecasting and modeling. This highlights the potential of ANFIS as a valuable tool in promoting sustainable agricultural practices. In a recent study by Kalam et al. (2021a) , a novel empirical correlation for predicting waterflooding performance in stratified reservoirs was developed using artificial intelligence.
The researchers utilized an ANN model to forecast the recovery performance of a layered reservoir undergoing a five-spot-pattern waterflood. Additionally, they introduced a mathematical equation based on the ANN model to predict oil recovery, taking into account crossflow between layers and variations in rock wettabilities. A new parameter called the wettability indicator (WI) was also introduced, which quantifies rock wettability using relative permeability curves. The results demonstrated that incorporating the WI term significantly reduced the number of simulation runs compared to existing models. Furthermore, the ANN model exhibited superior accuracy compared to non-linear regression and ANFIS approaches. The developed correlation was validated using various data sets and showed high accuracy. Overall, this novel empirical correlation serves as a valuable tool for estimating waterflood oil recovery prior to conducting large simulation models. In their study, Kalam et al. (2020) aimed to improve the estimation of relative permeability through data-driven modeling. To ensure accurate input for the AI models, they implemented a customized workflow that included a comprehensive sensitivity analysis. This analysis involved running multiple simulations with different numbers of neurons, resulting in diverse weights and biases for the ANN model. The ANFIS model was also fine-tuned using various cluster sizes to find the optimal value. The optimized ANN and ANFIS models were then compared using the Root Mean Squared Error (RMSE) and correlation coefficient (R 2 ) analysis. This evaluation was performed on a blind dataset consisting of over 300 data points. The results showed that the ANN model performed better than the ANFIS model in predicting relative permeability values for both oil and water. In contrast, the ANFIS model exhibited higher error values when tested on an unseen dataset. 
Additionally, unlike the ANN model, the ANFIS model did not provide a mathematical correlation. Overall, this study introduces alternative data-driven artificial intelligence models that enable faster and more cost-effective estimation of relative permeability. 4 Conclusions In the production of walnut and almond crops, energy use efficiency and damage assessment play important roles. Energy use efficiency measures the efficiency of the production process by comparing the amount of energy used to the output energy generated. Damage assessment evaluates the environmental impacts of the production process on ecosystems and human health. To improve energy use efficiency and minimize environmental damage, accurate prediction and assessment of these factors in almond and walnut production is crucial. Machine learning models such as ANN and ANFIS can be effective tools in forecasting output energy generation and evaluating environmental impacts with high accuracy, leading to more sustainable and efficient production practices. Studies have shown that almond production has higher greenhouse gas emissions compared to walnut production. The total energy consumption for almonds and walnuts was found to be 29430.56 MJ ha −1 and 15309.28 MJ ha −1 , respectively. Additionally, LCA results indicated that the resources category had the highest environmental impact, while human health had the lowest. The modelling analysis showed that ANFIS models achieved higher accuracy than ANN models in forecasting and modeling for both almond and walnut production. Based on the results, it was found that walnut cultivation is more preferable than almond cultivation due to its lower energy consumption and environmental pollutants. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
[
"AMID",
"BAER",
"BEIGI",
"BOSCO",
"BUJCORRAL",
"COCHRAN",
"DEVRIES",
"ELALAMI",
"ELHAMI",
"ELYASI",
"FEYZBAKHSH",
"GHASEMIMOBTAKER",
"GHORBANI",
"GHRITLAHRE",
"GUNDOGMUS",
"HOUSHYAR",
"KAAB",
"KAAB",
"KALAM",
"KALAM",
"KALAM",
"KALAM",
"KAUL",
"KHANALI",
"KITANI",
"KULEKCI",
"LI",
"LITSKAS",
"MOGHIMI",
"MOHADDES",
"MOHAMMADI",
"MOSTASHARIRAD",
"MOUSAVIAVVAL",
"NABAVIPELESARAEI",
"NABAVIPELESARAEI",
"NABAVIPELESARAEI",
"NABAVIPELESARAEI",
"NASERI",
"PAHLAVAN",
"POINT",
"RAFIEE",
"RAHMAN",
"RENNO",
"RENOUF",
"ROYAN",
"SALTELLI",
"SEFEEDPARI",
"STEENWERTH",
"TAGHINEZHAD",
"TAHERIRAD",
"TAHERZADEHSHALMAEI",
"TAHERZADEHSHALMAEI",
"TORKIHARCHEGANI",
"TRICASE",
"UNAKITAN",
"WANG",
"WOWRA",
"YANG",
"YILMAZ",
"YOUNIS",
"ZAREISHAHAMAT",
"ZENG",
"ZHANG"
] |
25e00fcade7541109c3c1e44c0d420a6_Diagnosis of Malignant Potential in Mucinous Peritoneal Neoplasms by Characterization of Mucin Carbo_10.1016_j.jcmgh.2018.02.012.xml
|
Diagnosis of Malignant Potential in Mucinous Peritoneal Neoplasms by Characterization of Mucin Carbohydrate Structure
|
[
"Clark, L. August",
"Ghazi, Alexia",
"Gaffney, Kristin",
"Soto, Rodrigo",
"Agarwal, Atin",
"Carmack, Susanne",
"Boland, C. Richard"
] | null |
Pseudomyxoma peritonei (PMP) is characterized by the growth of intestinal epithelial cells with extensive mucin secretion in the abdomen and pelvis. The cells grow freely in the peritoneal cavity and may be a low-grade mucinous neoplasm or a peritoneal mucinous carcinoma. However, the mucinous nature and the peritoneal location make it difficult to determine whether the neoplastic epithelial cells are benign or malignant. Consequently, it is difficult to determine how the tumor will behave, predict the long-term outcome, or plan rational therapy. It has long been appreciated that mucins in the normal colon have a homogeneous pattern of terminal glycosylation that is altered in the mucins associated with colorectal cancer (CRC), which can be recognized by lectin binding patterns. The goal of this study was to use lectin binding patterns in the setting of PMP to characterize mucins secreted by low-grade neoplasms and those produced by adenocarcinomas using fluorescein isothiocyanate (FITC)-labeled lectins: FITC– 1 Dolichos biflorus agglutinin (DBA), FITC–soybean agglutinin, FITC– Ricinus communis agglutinin-1, FITC– Ulex europeus agglutinin-1, and FITC–peanut agglutinin (PNA). We hypothesized that there would be differences in glycoconjugate structure that would differentiate mucins in adenocarcinomas from those in lower grade neoplasms. The PMP cases were divided into 2 pathologic groups (low-grade benign mucinous neoplasms and mucinous adenocarcinomas), and the lectin binding was independently determined to be positive or negative for each case. Forty-five patients were studied; 25 had adenocarcinoma, and 20 a low-grade neoplasm ( Table 1 ). All of the adenocarcinomas were labeled by FITC-PNA, versus 50% of the low-grade neoplasms ( P < .01) ( Table 2 , Supplementary Figures 1–4 ). The other lectin-binding results are in Table 2 . Mucus is the term for the viscoelastic substance coating the intestinal epithelium, and its principal nonaqueous component is mucin . 
Mucin apoproteins have a linear polypeptide chain along their central axis. The oligosaccharide side chains of mucins often terminate with sialic acids or sulfate groups, making them negatively charged and hydrophilic. It has been demonstrated that virtually all normal human colonic mucins have the same terminal sugars on their oligosaccharide side chains. 2 Goblet cells in the upper portion of the colonic crypt synthesize mucin with a terminal sugar that is bound by the lectins DBA and soybean agglutinin (α- 1 N -acetylgalactosamine). There is a gradient of labeling by DBA and soybean agglutinin as goblet cells differentiate and migrate up the colonic crypt. The goblet cell mucins at the bottom of the crypt terminate with a sugar recognized by Ricinus communis agglutinin-1 (β-galactose). The disappearance of the terminal β-linked Gal residues at the base of the crypt is perfectly matched by the appearance of terminal α-linked GalNAc residues at the top. However, CRCs lose the organized crypt structure and there is a change in the pattern of lectin binding. PNA binds to the mucins made by almost all CRCs but essentially never binds to the mucins made in normal colons. PNA binds to the Thomsen-Friedenreich antigen (T-Ag), a cancer-associated sequence that consists of a short disaccharide (Galβ1-3GalNAc). 3 In some instances, the Gal, GalNAc of T-Ag, or both may be extended by terminal α-linked sialic acid residues; PNA also binds sialylated T antigens. 4 Importantly, the presence of the PNA-binding glycoproteins can also be seen within large benign adenomatous polyps in foci of high-grade dysplasia. 4 3 In this study, all of the mucinous adenocarcinomas were PNA-positive, indicating a cancer-associated T-Ag, whereas only half of low-grade neoplasms were PNA-positive. The other lectins were not as helpful diagnostically but shed light on the likely oligosaccharide structures in PMP ( Table 2 ).
One possibility is that terminal sialic acid residues are permissive of PNA-binding but inhibit Ricinus communis agglutinin-1 binding. DBA binds the terminal α-GalNAc of blood group A and labeling was present in about half of the adenocarcinomas but most of the low-grade neoplasms. It is likely that DBA recognizes different GalNAc residues in the cancers than in the normal colon, because in addition to recognizing the blood group A structure, DBA also binds to solitary α-linked GalNAc, such as that in the cancer-associated structure, Tn. The presence of Tn and sialylated-Tn in mucins is associated with metastasis and a poorer prognosis. 5 6 It has been previously shown that CRC-associated mucins are less densely glycosylated, have shorter oligosaccharide side chains than in the normal colon, and have other biochemical differences in molecular weight and charge. 7 Future directions include purification and biochemical characterization of these cancer-associated mucins, and exploration of the molecular and enzymatic basis of this phenomenon. 8 Acknowledgments L. August Clark conducted the fluorescence microscopy, assembled and interpreted data, and wrote the manuscript. Alexia Ghazi performed immunofluorescence interpretation and photographs, patient data collection, and edited the manuscript. Kristin Gaffney initiated the experiments, collected the samples, and conducted initial fluorescence microscopy. Rodrigo Soto performed initial immunofluorescence interpretation and photographs. Atin Agarwal edited the manuscript. Susanne Carmack read all of the PMP tissues blinded to the lectin-binding data. C. Richard Boland conceived of the experiments, assisted with fluorescence microscopy, and edited the manuscript. Supplementary Material
|
[
"BOLAND",
"BYRD",
"BOLAND",
"BIAN",
"BARROW",
"MUNKLEY",
"BOLAND",
"SHIMAMOTO"
] |
dd171efef64c4848963a8750a827eb13_Factors associated with oral fingolimod use over injectable disease- modifying agent use in multiple_10.1016_j.rcsop.2021.100021.xml
|
Factors associated with oral fingolimod use over injectable disease- modifying agent use in multiple sclerosis
|
[
"Earla, Jagadeswara Rao",
"Hutton, George J.",
"Thornton, J. Douglas",
"Chen, Hua",
"Johnson, Michael L.",
"Aparasu, Rajender R."
] |
Background
Fingolimod is the first approved oral disease-modifying agent (DMA) in 2010 to treat Multiple Sclerosis (MS). There is limited real-world evidence regarding the determinants associated with fingolimod use in the early years.
Objective
The objective of this study was to examine the factors associated with fingolimod prescribing in the initial years after the market approval.
Methods
A retrospective, longitudinal study was conducted involving adults (≥18 years) with MS from the 2010–2012 IBM MarketScan. Individuals with MS were selected based on ICD-9-CM: 340 and a newly initiated DMA prescription. Based on the index/first DMA prescription, patients were classified as fingolimod or injectable users. All covariates were measured during the six months baseline period prior to the index date. Multivariable logistic regression was performed to determine the predisposing, enabling, and need factors, conceptualized as per the Andersen Behavioral Model (ABM), associated with prescribing of fingolimod versus injectable DMA for MS.
Results
The study cohort consisted of 3118 MS patients receiving DMA treatment. Of which, 14.4% of patients with MS initiated treatment with fingolimod within two years after the market entry, while the remaining 85.6% initiated with injectable DMAs. Multivariable regression revealed that the likelihood of prescribing oral DMA increased by 2–3 fold during 2011 and 2012 compared to 2010. Patients with ophthalmic (adjusted odds ratio [aOR]-2.60), heart (aOR-2.21) and urinary diseases (aOR-1.37) were more likely to receive fingolimod. Patients with other neurological disorders (aOR-0.50) were less likely to receive fingolimod than those without neurological disorders. Use of symptomatic medication (for impaired walking (aOR-2.60), bladder dysfunction (aOR-1.54), antispasmodics (aOR-1.48), and neurologist consultation (aOR-1.81) were associated with higher odds of receiving fingolimod. However, patients with non-MS associated emergency visits (aOR-0.64) had lower odds of receiving fingolimod than those without emergency visits.
Conclusions
During the initial years after market approval, patients with highly active MS were more likely to receive oral fingolimod than injectable DMAs. More research is needed to understand the determinants of newer oral DMAs.
|
1 Introduction Fingolimod was the first oral Disease-Modifying Agent (DMA) approved by the Food and Drug Administration (FDA) in September 2010 to treat the relapsing-remitting form of Multiple Sclerosis (MS). Prior to fingolimod approval, for almost two decades, only injectable DMAs – Interferon beta (1993) and glatiramer (1996) – were available to treat MS. Although a few intravenous DMAs – mitoxantrone (2000) and natalizumab (2006) – were available to treat MS, they were not first-line agents. Evidence indicates that fingolimod is comparable or superior to injectable DMAs in reducing relapses, delaying disability progression, and decreasing accumulation of magnetic resonance imaging (MRI) lesions. However, the side effect profile of fingolimod is extensive, and it requires more monitoring than injectable DMAs. 1 In addition to being effective, fingolimod's once-daily dosing offers a convenient administration schedule and facilitates better adherence than injectable DMAs. 2–7 2 , 3 Fingolimod's approval provided clinicians with an additional option of DMA to treat patients with MS. After fingolimod, several other oral DMAs were approved between 2012 and 2020, including teriflunomide, dimethyl fumarate, cladribine, siponimod, diroximel fumarate, ozanimod, and monomethyl fumarate. From 2010 until today, there have been no criteria or clinical recommendations regarding the selection of an appropriate DMA for patients with MS. 8 9 , It is suggested that selection of DMA should be individualized considering the patient's disease activity, comorbidities, symptoms, risk factors, values and preferences. 10 11 , In the absence of established clinical guidelines by national or international neurology societies regarding the selection of DMAs, the decision to choose oral fingolimod versus injectable DMA is complex considering their varied safety and efficacy profiles.
DMA selection is generally assumed to be a collaborative decision based on both patient and provider preferences. 2,12–14 A recent real-world study by Desai et al. evaluated factors associated with the prescription of oral DMAs versus injectable/infusion DMAs using commercial health insurance claims data from Aetna (2009 to 2014) and reported that patients' age and certain clinical factors were associated with the selection of an oral DMA. However, Desai et al. assessed factors associated with the prescription of any oral DMA (including the newly approved teriflunomide [2012] and dimethyl fumarate [2013]) versus either first-line injectable or second-line infusion DMAs. Previous evidence indicates that patient factors, primarily age and comorbidities, could play a role in the severity of MS 14 and further affect DMA selection. 15 However, there is limited real-world evidence regarding the factors associated with prescribing first-line oral fingolimod versus first-line injectable DMAs, especially during the initial years after approval. Therefore, this study examined the factors associated with oral fingolimod prescribing over conventional injectable DMAs during the initial years after its approval. This retrospective study could help us understand the drivers of providers' acceptance of the first oral DMA over injectables during the initial years after fingolimod's approval. 16–18 2 Material and methods 2.1 Study design and data source A retrospective longitudinal study was conducted using the IBM MarketScan Commercial Claims and Encounters data from 2010 to 2012. The 2010–2012 data set was selected to understand the drivers of providers' initiation of the first oral DMA during the initial years after fingolimod approval. The IBM MarketScan consists of more than 43.6 million commercially insured enrollees and provides a nationally representative sample of Americans with employer-provided health insurance. 
Beneficiaries are from large employers, health plans, government, and public organizations. It is a limited dataset that includes de-identified inpatient, outpatient, and pharmacy claims, allowing for longitudinal analysis of health care utilization. This study was approved by the Institutional Review Board at the University of Houston under the ‘exempt’ category. 19 2.2 Study population The study population included adults (≥18 years) diagnosed with MS who newly initiated oral fingolimod or conventional injectable DMAs between September 21, 2010 (after fingolimod's FDA approval) and December 31, 2012. DMA initiation was evaluated based on the first prescription of a DMA with a six-month baseline period without DMA use. Patients with MS were identified using the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code ‘340’ in diagnosis claims, and patients with a DMA prescription were identified using National Drug Codes (NDC) in pharmacy claims or Healthcare Common Procedure Coding System (HCPCS) codes in outpatient or inpatient encounter files. The NDC codes of medications were obtained from the Redbook. Based on the index/first DMA prescription, patients were classified as oral fingolimod or injectable users. Injectable DMA users consisted of patients who used interferon beta and glatiramer acetate. In this study, users of second-line infusion DMAs or other newer oral DMAs introduced in late 2012 were excluded, as their utilization was minimal during the study period. The date of the first DMA prescription (oral fingolimod or injectable) was regarded as the index date. Patients were required to have continuous enrollment with the health insurance plan during the six months prior to the index date (baseline/lookback period). A detailed study design is presented in Appendix A . 2.3 Conceptual framework This study was conceptualized based on the Andersen Behavioral Model (ABM) of health care utilization. 
According to the ABM, healthcare utilization is a function of characteristics that explain (i) the predisposition of an individual to use healthcare services (predisposing factors), (ii) factors that enable or impede use (enabling factors), and (iii) the need for healthcare services (need factors). Predisposing factors included age group, gender, and region. Enabling factors included employment status, type of health insurance plan, physician specialty coding flag, and prescription time period (prescription year). The physician specialty coding flag identifies patients who had highly differentiated (≥70%) claims coded by specialty physicians. Need factors included prevalent comorbidities, Elixhauser score, 20 MS-related symptoms/MS severity score 21 ( Appendix B ), MS symptomatic medication, and health care utilization indicators. Comorbidities that are prevalent in patients with MS were collated from the existing literature 22,23 and identified using ICD-9-CM codes from diagnosis files. Further, a few additional comorbidities that were prevalent (>15%) in the study cohort were also identified. 24 All the selected comorbidities were identified using the Clinical Classification System (CCS) codes proposed by the Agency for Healthcare Research and Quality (AHRQ). The Elixhauser index score is a weighted score of selected comorbidities identified based on diagnoses in healthcare records. 25 It is widely used as a surrogate measure of comorbidity burden in observational healthcare research involving administrative data. 21 MS-related symptoms were identified using ICD-9-CM and HCPCS codes from diagnosis or procedure claims. The MS severity measure is a weighted score of selected MS-related symptoms or comorbidities 26 ( Appendix B ). The MS severity measure acts as a proxy for the symptomatic burden or severity of MS; a higher score indicates a higher symptomatic burden. 
22 Additionally, MS symptomatic medications are drugs prescribed to alleviate MS-related symptoms, and the use of these medications indicates neurological impairment. 27 Healthcare utilization measures included baseline relapse, neurologist consultation, magnetic resonance imaging (MRI) test (procedure group code: 216), and Emergency Department (ED) visit (procedure group code: 111) – MS-associated and non-MS-associated. The claims-based relapse measure was operationally defined as (i) an inpatient hospitalization or (ii) an outpatient encounter followed by a steroid prescription within 30 days of the encounter. Successive relapses within the next 30 days after the initial relapse were considered a single relapse episode. 28 All the covariates were measured during the six-month baseline period prior to the index date. 29 2.4 Statistical analyses Characteristics of oral fingolimod and injectable DMA users were compared using descriptive statistical tests: the chi-square test for categorical variables and the t -test for continuous variables. Multicollinearity among the independent covariates was ruled out using the criterion of a variance inflation factor (VIF) less than 10. Multivariable logistic regression was performed to determine the factors associated with the selection of fingolimod. The outcome variable was a binary indicator of oral fingolimod versus injectable DMA as the first DMA prescription; injectable DMA was the reference category. As explained earlier, the independent variables (predisposing, enabling, and need factors) in the multivariable logistic regression model were chosen based on the ABM. The sample size needed for the logistic model was 765 based on the independent variables selected for the study. All statistical analyses were conducted using SAS 9.4 (SAS Institute, Cary, North Carolina) at a significance level of 0.050. 
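The episode-collapsing rule in the claims-based relapse measure above (successive relapse events within 30 days of the initial relapse count as one episode) can be sketched as a short function. This is a minimal illustration assuming relapse events arrive as a plain list of dates rather than raw claims records; the function name and data shape are hypothetical, not the authors' code:

```python
from datetime import date


def collapse_relapses(event_dates, window_days=30):
    """Collapse qualifying relapse events (hospitalizations or outpatient
    encounters followed by a steroid prescription) into episodes.

    An event starts a new episode only if it occurs more than `window_days`
    days after the start of the current episode; otherwise it is folded into
    that episode, mirroring the 30-day rule described in the text.
    """
    episode_starts = []
    for d in sorted(event_dates):
        # episode_starts[-1] is always the *initial* relapse of the
        # current episode, so the window is anchored there.
        if not episode_starts or (d - episode_starts[-1]).days > window_days:
            episode_starts.append(d)
    return episode_starts
```

For example, events on January 1, January 20, and March 1 of the same year would collapse into two episodes, since the January 20 event falls inside the 30-day window opened on January 1.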
3 Results The study cohort consisted of 3118 MS patients receiving DMA treatment, of whom 14.4% ( n = 450) initiated oral fingolimod, while the remaining 85.6% ( n = 2668) initiated injectable DMAs (see Fig. 1 for cohort derivation). Among injectable DMA users, 51.0% ( n = 1360) initiated interferon beta while 49.0% ( n = 1308) initiated glatiramer acetate. The characteristics of the total study cohort, along with the route of administration of DMA, are given in Table 1 . The cohort mainly consisted of females (77.7%) and middle-aged patients (35–54 years; 59.9%); most belonged to the South region of the US (37.6%) and were active full-time employees (80.4%) with a Preferred Provider Organization (PPO) health insurance plan (60.7%). Among MS patients treated with DMAs, oral fingolimod and injectable DMA users differed significantly in the distribution of a few predisposing (age group), enabling (employment status and prescription time period), and need factors (comorbidities, symptoms, symptomatic medication, and healthcare utilization), as shown in Table 1 . Multivariable logistic regression findings revealed that an enabling factor (time period) and several need factors were associated with the initiation of oral fingolimod over injectable DMAs. The findings of the multivariable logistic regression are shown in Table 2 . Compared to 2010, the odds of prescribing the oral DMA were 2- to 3-fold higher during 2011 (adjusted odds ratio [aOR] 3.34; 95% CI: 2.13–5.24) and 2012 (aOR 2.34; 95% CI: 1.46–3.75). Patients with eye disorders (aOR 2.63; 95% CI: 2.08–3.31), heart diseases (aOR 2.21; 95% CI: 1.65–2.97), and urinary diseases (aOR 1.37; 95% CI: 1.03–1.82) were more likely to receive oral fingolimod than those who did not have those disorders. 
In contrast, patients with other neurological disorders (aOR 0.50; 95% CI: 0.38–0.65) and nutritional deficiencies (aOR 0.64; 95% CI: 0.41–0.98) were less likely to receive oral fingolimod than those without those disorders/deficiencies. Further, use of symptomatic medication for impaired walking (aOR 2.60; 95% CI: 1.90–3.58), bladder dysfunction (aOR 1.54; 95% CI: 1.17–2.02), and spasticity (aOR 1.48; 95% CI: 1.15–1.91) was associated with higher odds of receiving oral fingolimod compared to those without symptomatic medication for MS. In addition, patients who had a neurologist consultation (aOR 1.81; 95% CI: 1.39–2.34) had higher odds of receiving oral fingolimod than those without a neurologist consultation, while patients who had non-MS-associated ED visits (aOR 0.64; 95% CI: 0.46–0.88) had lower odds of receiving oral fingolimod compared to those without ED visits. 4 Discussion This study examined the factors associated with the selection of the first oral DMA, fingolimod, over conventional injectable DMAs during the initial years after fingolimod approval (2010–2012). Approximately 15% of the patients initiated oral fingolimod during 2010–2012. This study revealed that time period (an enabling factor) and several clinical (need) factors such as comorbidities, MS symptomatic medication, and healthcare utilization were associated with the selection of oral fingolimod over injectable DMAs. As expected, the likelihood of prescribing oral fingolimod increased 2- to 3-fold after 2010. With time, fingolimod's availability and clinicians' and patients' experience in using fingolimod might have increased, thereby improving the chances of adopting the newer oral fingolimod into clinical practice. In addition, other physician-related factors such as scientific commitment, high prescribing volume, high exposure to marketing, and communication with colleagues could have played a role in the successful adoption of fingolimod. 
18 , 30 Patients with heart diseases (e.g., acute coronary syndrome, heart failure, arrhythmias, conduction disorders, and valve disorders) were more than twice as likely to receive fingolimod as those without heart diseases during 2010–2012. However, given the evolving cardiac risk profile of fingolimod over time, a reverse association would be expected in more recent years (post-2012), as fingolimod is contraindicated in many cardiac conditions. The initial product monograph of fingolimod, released in September 2010, included cardiac warnings such as transient bradycardia upon first administration and atrioventricular (AV) conduction block. However, based on long-term safety studies, in April 2012 the manufacturer added several cardiac contraindications to fingolimod, including second-degree or higher AV block, sick sinus syndrome or sinoatrial block, and prolonged QT interval. In addition, fingolimod is not recommended for patients taking antiarrhythmic medication or bradycardia-inducing antihypertensive medications. 6,7,15 This is likely to reduce the prescribing of fingolimod after 2012 in MS patients with cardiac conditions. 16 Patients with other neurological disorders (Parkinson's disease, cerebral degeneration, Huntington's chorea, neuroleptic malignant syndrome, trigeminal nerve disorders, and other demyelinating diseases) had 50% lower odds of receiving oral fingolimod. The presence of other neurological disorders might have prompted neurologists to choose the safer and more established injectable DMAs over oral fingolimod. Another important finding from this study is that patients using MS symptomatic medication were more likely to receive oral fingolimod. As observed previously, the use of medication for impaired walking, bladder dysfunction, and spasticity increased the odds of receiving oral fingolimod by 1.5–2.0 times in patients with MS. 
Further, patients with eye diseases and urinary diseases also had higher odds of being prescribed oral fingolimod. Ophthalmic diseases, other vision symptoms, and urinary diseases can be part of the clinical manifestation of MS. Research also indicates that newly diagnosed symptomatic MS individuals may present with vision symptoms, urinary tract infections, or bladder/bowel dysfunction requiring symptomatic treatment. 14,31 Hence, patients with a severe symptomatic burden who are at high risk of disease progression may have been more likely to receive the newer and more effective oral fingolimod instead of injectable DMAs. Current evidence indicates that fingolimod is more effective than injectable DMAs but requires closer laboratory monitoring 7,32 and is suggested for patients who can be closely monitored. 33 Therefore, prescribing a DMA for MS patients is a complex decision that requires assessing comorbidities and MS-related symptoms along with laboratory parameters. 34 Consistent with previous literature, patients who had at least one neurologist consultation during baseline were nearly two times more likely to receive oral fingolimod. Patients who had non-MS-associated ED visits were 36% less likely to receive oral fingolimod. Desai et al. reported that patients who had ED visits were nearly 1.5 times more likely to receive oral DMAs. 14 However, in Desai et al.'s study, 14 ED visits were not classified based on MS diagnosis, which could inform the severity of MS and further treatment selection. Overall, several factors influenced the selection of oral fingolimod over the existing injectable DMAs. In the current study, other comorbidities, MS-related symptoms, and symptomatic medication suggest a more severe form of MS, and these more disabled patients were more likely to be prescribed oral fingolimod over conventional injectable DMAs. 
Also, patients with cardiac diseases were more likely to be prescribed fingolimod during the early years after its approval. However, with the evolving cardiac risk profile of fingolimod in later years, clinicians might not favor prescribing fingolimod to patients with cardiac conditions. With its monitoring requirements and evolving risk profile, the drivers of prescribing fingolimod might have varied in recent years. Most importantly, the prescribing of oral fingolimod increased during the study period. This is expected, as both clinicians' and patients' experience with fingolimod increased over time. Other market factors, including promotional activities and market access issues, might also have influenced the adoption of the newer oral fingolimod into clinical practice. Early practices may not reflect current use, owing to increasing evidence and experience involving fingolimod and the introduction of more oral DMAs. Therefore, more research is needed to understand the determinants of each oral DMA selection in recent times. 35 4.1 Strengths & limitations This is the first study to assess the factors associated with oral fingolimod versus injectable DMA prescriptions during the early years after its approval. As this study used data sources that are primarily administrative in nature, there is a possibility of unmeasured confounding. Information about race/ethnicity, MS phenotype, Expanded Disability Status Scale (EDSS) score, and laboratory findings/MRI lesions was not available. However, the primary strength of this study is that it accounted for many MS-related clinical variables, such as prevalent comorbidities, the MS severity measure, and MS-related symptomatic medications. This rich set of clinical variables can be considered a proxy for the EDSS (a frequently used MS severity indicator in clinical trials) 36 in claims data. 
Further, physician-related variables, laboratory test information, and other market-related factors were also not available; these could have provided more understanding of the factors related to fingolimod selection. Infusion DMAs were not studied, as they were infrequently used as a primary treatment option for MS. It should also be acknowledged that, due to small sample sizes (<30), other relevant comorbidities/medication use that could have affected treatment selection, such as autoimmune disorders, dementia/cognitive dysfunction, and diabetes treated with subcutaneous insulin, could not be adjusted for in the model. Further, newer oral agents were not included, as this study specifically aimed to assess the factors associated with the selection of the first oral DMA, fingolimod, during the initial years after its approval. Considering the above limitations and the study population, results should be interpreted and generalized with caution. 36 5 Conclusion During the initial years after market approval (2010–2012), nearly one in seven MS patients initiated treatment with the first oral DMA, fingolimod. Patients' enabling and need factors were the main drivers of oral fingolimod use over the injectable DMA formulation. As time from market entry increased, the likelihood of prescribing fingolimod increased. During the early years after its approval, patients with a highly active form of MS were more likely to receive oral fingolimod than injectable DMAs. These study findings could help clinicians in treatment decision-making and inform policy modifications to improve DMA access. However, more research is needed to understand the determinants of oral DMA formulation selection given the introduction of several oral DMAs in recent times. Statement of funding source and role of sponsor This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Declaration of interest None. 
Appendix A Study design to assess the factors associated with prescription of oral fingolimod in patients with MS: IBM MarketScan 2010–2012 [study design figure] Appendix B Claims-based algorithm for calculating the Multiple Sclerosis Severity Measure (MS-related symptoms and weights):
1. Bladder/bowel symptoms (incontinence/constipation) or sexual dysfunction: weight 2
2. Brainstem symptoms (facial neuralgia, vertigo, dizziness): weight 1
3. Cerebellar symptoms (movement disorders, ataxia, tremor): weight 2
4. Cerebral symptoms/cognitive impairment (e.g., altered mental status, aphasia): weight 1
5. Difficulty walking/gait problems: weight 2
6. General symptoms (fatigue): weight 1
7. Pyramidal symptoms (e.g., weakness, paralysis, spasticity/muscle symptoms): weight 2
8. Sensory symptoms (e.g., disturbances of skin sensation): weight 1
9. Speech symptoms: weight 1
10. Visual symptoms (e.g., visual loss, visual disturbances): weight 1
11. Mobility impairment or use of Durable Medical Equipment (DME): weight 1
|
[
"NATIONALMULTIPLESCLEROSISSOCIETY",
"ENGLISH",
"SCOLDING",
"BOSTER",
"NOYES",
"WILLIS",
"PELLETIER",
"NATIONALMULTIPLESCLEROSISSOCIETY",
"RAEGRANT",
"NATIONALMULTIPLESCLEROSISSOCIETY",
"OLEK",
"SAYLOR",
"ONTANEDA",
"DESAI",
"MARRIE",
"MEISSNER",
"KLAUER",
"LUBLOY",
"HANSEN",
"ANDERSEN",
"QUAN",
"NICHOLAS",
"MARRIE",
"MARRIE",
"HCUPUSTOOLSSOFTWAREPAGE",
"ELIXHAUSER",
"ONTANEDA",
"PYENSON",
"NICHOLAS",
"LUBLOY",
"ABBOUD",
"THOMPSON",
"MANDAL",
"LAMANTIA",
"JACOB",
"WINGERCHUK"
] |
728b0c9f5510432eb4db3d867b0f3338_Efficacy and accuracy of qSOFA and SOFA scores as prognostic tools for community-acquired and health_10.1016_j.ijid.2019.04.020.xml
|
Efficacy and accuracy of qSOFA and SOFA scores as prognostic tools for community-acquired and healthcare-associated pneumonia
|
[
"Asai, Nobuhiro",
"Watanabe, Hiroki",
"Shiota, Arufumi",
"Kato, Hideo",
"Sakanashi, Daisuke",
"Hagihara, Mao",
"Koizumi, Yusuke",
"Yamagishi, Yuka",
"Suematsu, Hiroyuki",
"Mikamo, Hiroshige"
] |
Background
The Japanese Respiratory Society recently updated its prognostic guidelines for pneumonia, recommending that pneumonia severity be evaluated using the sequential organ failure assessment (SOFA) and quick SOFA (qSOFA) scoring systems in a therapeutic strategy flowchart. However, the efficacy and accuracy of these tools are still unknown.
Methods
All patients with community-acquired pneumonia (CAP) and healthcare-associated pneumonia (HCAP) who were admitted to the study institution between 2014 and 2017 were enrolled in this study. Pneumonia severity on admission was evaluated by A-DROP, CURB-65, PSI, I-ROAD, qSOFA, and SOFA scoring systems. Prognostic factors for 30-day mortality were also analyzed.
Results
This study included 406 patients, 257 male (63%) and 149 female (37%). The median age was 79 years (range 19–103 years). The 30-day and in-hospital mortality rates were both 5%. With respect to the diagnostic value of the predictive assessments for 30-day mortality, the area under the receiver operating characteristic curve (AUROC) value for the SOFA score was 0.769 for CAP patients and 0.774 for HCAP patients. Further, the AUROC values for the SOFA score in CAP and HCAP patients with a qSOFA score ≥2 were 0.829 and 0.784, respectively, for 30-day mortality.
Conclusions
qSOFA and SOFA scores were able to correctly evaluate the severity of CAP and HCAP.
|
Introduction Pneumonia is one of the most common reasons for hospital admission in Japan and is a leading cause of death worldwide ( WHO Global Health Observatory (GHO), 2018 ). The mortality rate of pneumonia has not changed over the past several decades, despite advancements in technical methods such as multiplex PCR and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS), as well as the development of many effective antibiotics ( Venditti et al., 2009 ). In Japan’s aging society, the number of elderly patients with pneumonia is increasing every year. Therefore, not only specialists, but every hospital physician should be well-practiced in the treatment of pneumonia in the hospital setting. To aid physicians who do not specialize in respiratory infections, several prognostic tools have been developed since 2000, including A-DROP ( Miyashita et al., 2006 ), CURB-65 ( British Thoracic Society Standards of Care Committee, 2001 ), PSI ( Niederman et al., 2001 ), I-ROAD ( Japanese Respiratory Society, 2009; Seki et al., 2008 ), and SMART-COP ( Charles et al., 2008 ). The Japanese Respiratory Society updated its prognostic guidelines for pneumonia in 2017. The most dramatic change in the newly published guidelines is the suggestion that pneumonia severity be evaluated using the sequential organ failure assessment (SOFA) and the quick SOFA (qSOFA) scoring systems as part of a published flowchart of the therapeutic strategy ( Figure 1 ). It has been shown that the SOFA and qSOFA assessments are useful diagnostic tools for predicting hospital mortality among adults with suspected infection in the intensive care unit ( Raith et al., 2017 ). Further, it has been shown that these assessments are useful prognostic tools for community-acquired pneumonia (CAP), urinary tract infections, and sepsis ( Ranzani et al., 2017 ). 
However, whether qSOFA and SOFA scores are able to correctly evaluate the severity and prognosis of healthcare-associated pneumonia (HCAP) has yet to be determined. Additionally, although Matsunuma et al. ( Matsunuma et al., 2014 ) reported that I-ROAD was useful in evaluating HCAP severity, its prognostic value for HCAP is still unknown. In a recent pilot study, we reported that the SOFA score was able to evaluate the severity and prognosis of HCAP more accurately than A-DROP, CURB-65, PSI, or I-ROAD ( Asai et al., 2018 ). We have since continued to examine the validity of qSOFA and SOFA scores in the management of CAP and HCAP. Patients and methods Study population and patient inclusion criteria This retrospective study was conducted between 2014 and 2017 at the Aichi Medical University Hospital, which is a 900-bed tertiary hospital in the Aichi Prefecture in Japan. Patients with CAP and HCAP who were admitted to the hospital were included in this study. Patients with hospital-associated pneumonia were excluded. Pneumonia was diagnosed according to previously published international guidelines ( American Thoracic Society and Infectious Diseases Society of America, 2005; Mandell et al., 2007 ). Patient characteristics (age, sex, coexisting illness, etc.), symptoms, laboratory data, radiological findings, initial antibiotic regimen, pneumonia severity, clinical outcome, and pathogens isolated by sputum culture and blood culture at the time of admission were evaluated. This study was approved by the Institutional Review Board of Aichi Medical University Hospital (IRB number 17-H106). Evaluation of comorbidities Comorbidities were evaluated using the Charlson comorbidity index (CCI). This index predicts the 10-year mortality for 22 different comorbid conditions, including heart disease, AIDS, and cancer. Each condition is assigned a score of 1, 2, 3, or 6 depending on the risk of dying. 
For each patient, the sum of these scores is used as the total score to predict mortality. As patients are often unaware of the severity of their conditions, each patient’s chart was reviewed to determine the appropriate comorbid conditions and the resulting CCI score ( Charlson et al., 1987; de Groot et al., 2003 ). Severity of pneumonia Pneumonia severity on admission was evaluated by A-DROP, CURB-65, PSI, I-ROAD, qSOFA, and SOFA scores. The use of vasopressor agents, use of mechanical ventilation, and the existence of do-not-resuscitate orders were also examined during admission. Microbiological evaluation A sputum sample and two sets of blood cultures were collected from each patient for microbiological examination. Serological tests were performed to detect antibodies against Mycoplasma pneumoniae and Chlamydophila pneumoniae ( Ishida et al., 1998; Miyashita et al., 2008 ). Additionally, Legionella pneumophila serogroup 1 antigen in urine was tested by immunochromatography. The antimicrobial susceptibility of isolated bacterial pathogens was assessed on the basis of the minimum inhibitory concentration according to the Clinical and Laboratory Standards Institute guidelines ( Clinical and Laboratory Standards Institute, 2011 ). Methicillin-resistant Staphylococcus aureus (MRSA), Pseudomonas aeruginosa , Acinetobacter baumannii , and extended-spectrum β-lactamase-producing organisms were defined as potentially drug-resistant (PDR) pathogens based on the American Thoracic Society/Infectious Diseases Society of America (ATS/IDSA) guidelines ( American Thoracic Society and Infectious Diseases Society of America, 2005 ). Definition of appropriate and inappropriate treatment Antibiotic treatment was classified as appropriate or inappropriate according to whether the pathogens identified were sensitive or resistant, respectively, to the initial prescribed antibiotics. 
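The Charlson comorbidity index computation described above is a weighted sum over a patient's distinct comorbid conditions. A minimal sketch, using an illustrative subset of the published condition weights (the full index covers 22 conditions, each weighted 1, 2, 3, or 6):

```python
# Illustrative subset of Charlson condition weights; the full published
# index assigns a weight of 1, 2, 3, or 6 to each of 22 conditions.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "heart_failure": 1,
    "copd": 1,
    "diabetes_with_complications": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_cancer": 6,
    "aids": 6,
}


def charlson_score(conditions):
    """Total CCI for one patient: the sum of the weights of each
    distinct comorbid condition documented in the chart review."""
    return sum(CHARLSON_WEIGHTS[c] for c in set(conditions))
```

For example, a patient with COPD and AIDS would score 1 + 6 = 7; listing a condition twice does not double-count it.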
Analysis of prognostic factors for 30-day mortality All of the patients were examined at the time of admission to the hospital. For CAP patients, clinical factors involving continuous variables were divided into two categories as follows: age (<76, ≥76 years); white blood cell count (<4 and >9, ≥4 and ≤9 × 10 9 cells/l); hemoglobin (<10, ≥10g/dl); hematocrit (<30, ≥30); platelets (<150, ≥150 × 10 9 cells/l); sodium (<130, ≥130 mEq/l); total bilirubin (<1.2, ≥1.2 mg/dl); C-reactive protein (CRP) (<11, ≥11 mg/dl); blood urea nitrogen (<20, ≥20 mg/dl); creatinine (<1.2, ≥1.2 mg/dl); albumin (Alb) (<3.3, ≥3.3 g/dl), pH (<7.3 or >7.4, 7.3–7.4). For HCAP patients, continuous variables were divided into two categories using the same cut-offs as were used for the CAP patients, but with the following changes: age (<80, ≥80 years); CRP (<8.7, ≥8.7 mg/dl); Alb (<3.0, ≥3.0 g/dl). The cut-off points for white blood cell count, hemoglobin, hematocrit, platelets, sodium, total bilirubin, blood urea nitrogen, creatinine, pH, systolic blood pressure, and PaO 2 /FiO 2 ratio were set at values to assess normal vs. abnormal ranges, whereas age, Alb, and CRP were based on the median values of the patient groups. Statistical analysis The data for categorical variables were expressed as percentages, while continuous variables were recorded as the mean ± standard deviation (SD). The Chi-square test or Fisher’s exact test (two-tailed) was used to compare categorical variables, and the unpaired Student t -test or Mann–Whitney U -test was used to compare continuous variables. Logistic regression analysis was used to identify independent risk factors associated with 30-day mortality of patients with CAP or HCAP. Variables with a p -value of less than 0.10 from the univariate analysis were entered into the multivariable model. All tests were calculated using IBM SPSS Statistics version 23 for Windows (IBM Corp., Armonk, NY, USA). 
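The dichotomization of continuous variables described above amounts to flagging values that fall outside a normal (or median-based) range. A minimal sketch, where the variable names and the small subset of CAP cut-offs shown are illustrative rather than the authors' SPSS recoding:

```python
INF = float("inf")


def flag_abnormal(value, normal_range):
    """Return 1 if `value` falls outside the closed interval
    `normal_range` (the 'abnormal' category), else 0 - mirroring the
    binary cut-offs used in the source."""
    lo, hi = normal_range
    return int(not (lo <= value <= hi))


# A few of the paper's CAP cut-offs expressed as normal ranges;
# one-sided cut-offs use an infinite bound.
CAP_CUTOFFS = {
    "wbc_10e9_per_l": (4, 9),        # abnormal: <4 or >9
    "sodium_meq_per_l": (130, INF),  # abnormal: <130
    "albumin_g_per_dl": (3.3, INF),  # abnormal: <3.3 (median-based)
    "ph": (7.3, 7.4),                # abnormal: <7.3 or >7.4
}
```

For example, a white blood cell count of 3.5 × 10⁹ cells/l or a pH of 7.25 would be flagged abnormal, while a pH of 7.35 would not.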
Variables that showed a p -value of less than 0.05 were considered statistically significant. Results Patient characteristics Patient characteristics are shown in Table 1 . A total of 406 patients were enrolled in this study: 257 male (63%) and 149 female (37%). The median age was 79 years (range 19–103 years). Two hundred and forty-one patients (59%) were current or ex-smokers, while the smoking status was unknown for 30 patients (7%). Outcomes The 30-day and in-hospital mortality rates were both 5%. Initial treatment failure was seen in 27/409 patients (7%) and inappropriate antibiotic treatment in 43/196 patients (22%) ( Table 1 ). PDR pathogens were detected in 59/409 patients (15%). Vasopressors and mechanical ventilation were used in four (2%) and seven (4%) of the CAP patients and in seven (3%) and 12 (5%) of the HCAP patients, respectively. Do-not-resuscitate orders were confirmed for 21 CAP patients (12%) and 54 HCAP patients (24%). Microorganisms identified Sputum cultures were performed in 337/406 patients (83%). Microorganisms were confirmed in 242 of these patients (72%; Table 2 ). Blood cultures were performed in 232/407 patients (107 CAP patients and 125 HCAP patients). Of these, 26 (11%) showed positive cultures: nine CAP patients (8%) and 17 HCAP patients (14%). Correlation between qSOFA score and other pneumonia severity scores The qSOFA scores were compared with other pneumonia severity scores for patients with qSOFA scores of 0 or 1 and for patients with qSOFA scores of ≥2. It was found that the other pneumonia severity scores were much higher in the group of patients with a qSOFA of ≥2 compared to those with scores of 0 or 1 in both the CAP group ( Table 3 a) and the HCAP group ( Table 3 b). 
Prognostic accuracy of the predictive values for 30-day mortality To evaluate the prognostic accuracy of the predictive values, the following cut-off points were set for the assessment of 30-day mortality among pneumonia patients, in accordance with previous studies: A-DROP ≥4, CURB-65 ≥3, PSI ≥ IV, and I-ROAD C ( Matsunuma et al., 2014; Shindo et al., 2009 ). Table 4 reports the prognostic accuracy of the SOFA and qSOFA scores for 30-day mortality. As a poor prognostic factor, the combination of a qSOFA score ≥2 and a SOFA score ≥4 or ≥6 showed higher sensitivity and specificity than A-DROP, CURB-65, PSI, or I-ROAD in both CAP and HCAP patients. Receiver operating characteristic curves for 30-day mortality in the CAP and HCAP groups The ability of the various prognostic assessments included in this study to predict 30-day mortality was also assessed. For all patients, the area under the receiver operating characteristic curve (AUROC) for the A-DROP, CURB-65, PSI, I-ROAD, and SOFA scores was 0.798, 0.714, 0.692, 0.651, and 0.803, respectively. Among CAP patients, the AUROC values for the A-DROP, CURB-65, PSI, I-ROAD, and SOFA scores were 0.800, 0.784, 0.812, 0.769, and 0.804, respectively ( Figure 2 A). For HCAP patients, the AUROC values for the A-DROP, CURB-65, PSI, I-ROAD, and SOFA scores were 0.773, 0.66, 0.614, 0.573, and 0.774, respectively ( Figure 2 C). Of note, the SOFA score had the highest diagnostic value for both CAP and HCAP patients (0.804 and 0.774, respectively). AUROC values for CAP and HCAP patients with qSOFA scores of ≥2 are shown in Figure 2 B and Figure 2 D, respectively. 
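The AUROC values compared above can be computed directly from a severity score and the 30-day outcomes. A self-contained sketch using the rank (Mann–Whitney) formulation, offered purely for illustration rather than as the authors' SPSS procedure:

```python
def auroc(scores, labels):
    """Empirical AUROC: the probability that a randomly chosen positive
    case (death) receives a higher severity score than a randomly chosen
    negative case (survivor), counting ties as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.5 corresponds to no discrimination, and 1.0 to a score that ranks every death above every survivor; on this scale the SOFA score's 0.804 (CAP) and 0.774 (HCAP) indicate good discrimination.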
With regard to the other scores, the AUROC values of the qSOFA score and SIRS criteria for 30-day mortality among all patients, CAP patients, and HCAP patients were 0.726 ( p = 0.001, 95% confidence interval (CI) 0.618–0.835) and 0.470 ( p = 0.66, 95% CI 0.357–0.583), 0.667 ( p = 0.323, 95% CI 0.281–1.000) and 0.197 ( p = 0.073, 95% CI 0.069–0.325), and 0.701 ( p = 0.007, 95% CI 0.584–0.818) and 0.525 ( p = 0.736, 95% CI 0.409–0.642), respectively ( Table 4 ). Prognostic factors of 30-day mortality among the CAP and HCAP groups Several potential prognostic factors for 30-day mortality in both CAP ( Supplementary Material , S1) and HCAP ( Supplementary Material , S2) patients were analyzed. Using univariate analysis, it was found that the combination of a qSOFA score ≥2 and a SOFA score ≥4, as well as the presence of pleural effusion, were both factors indicating a poor prognosis for 30-day mortality ( p = 0.038 and p = 0.006, respectively; Table 5 ). Further, logistic regression analysis showed that the combination of a qSOFA score ≥2 and a SOFA score ≥4 was an independent poor prognostic factor among CAP patients (odds ratio (OR) 18.0, 95% CI 1.2–262.7; p = 0.035). For HCAP patients, 11 prognostic factors were evaluated by univariate analysis ( Table 5 ). Logistic regression showed that a combination of qSOFA ≥2 and a SOFA score ≥6 (OR 21.5, 95% CI 1.8–254.1; p = 0.015), initial treatment failure (OR 10.3, 95% CI 2.0–53.2; p = 0.005), and Alb <3.0 mg/dl (OR 2.3, 95% CI 2.3–115.2; p = 0.005) were independent factors indicating a poor prognosis for 30-day mortality. Discussion This study showed that a combination of qSOFA and SOFA scores was the best indicator of both pneumonia severity and prognosis. The qSOFA is determined by three vital signs: respiratory rate, systolic blood pressure, and altered consciousness, and the qSOFA and SOFA scores are very easy to obtain compared to the other predictive assessments for pneumonia. 
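The three qSOFA criteria just described can be encoded in a few lines. This is a minimal sketch using the standard Sepsis-3 cut-offs (respiratory rate ≥ 22/min, systolic blood pressure ≤ 100 mmHg, and Glasgow Coma Scale < 15 for altered consciousness); the function name and example vitals are illustrative.

```python
def qsofa(resp_rate, systolic_bp, gcs):
    """qSOFA: one point per criterion met, total 0-3; a score >=2 flags high risk."""
    return int((resp_rate >= 22) + (systolic_bp <= 100) + (gcs < 15))

# illustrative patients: (respiratory rate /min, systolic BP mmHg, GCS)
for vitals in [(24, 95, 14), (18, 120, 15), (23, 130, 15)]:
    print(vitals, "-> qSOFA =", qsofa(*vitals))
```

The ≥2 threshold used throughout the study corresponds to at least two of the three criteria being met at presentation.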
These scores could thus help any physician to make the decision regarding where a patient should be admitted (to a general ward or an intensive care unit), as well as which antibiotic therapy should be employed. In this study, it was found that pneumonia severity was greater in CAP and HCAP patients with qSOFA scores of ≥2 than in those with scores of 0 or 1. A previous study demonstrated that the qSOFA score could evaluate the state of sepsis accurately, with equivalence to the SIRS criteria ( Raith et al., 2017 ). Thus, a qSOFA score ≥2 could be correlated with the severity of pneumonia as measured by any of the predictive assessments. Indeed, a comparison of AUROC values for 30-day mortality among CAP patients revealed that all of the predictive assessments for pneumonia appeared to be equally good prognostic tools. The present study results also suggest that tachypnea, hypotension, and altered consciousness might be factors indicating a poor prognosis for patients with pneumonia. It was observed that the predictive ability of the SOFA score was superior to that of the other predictive assessments for CAP patients with a qSOFA score ≥2. For HCAP patients, the A-DROP and SOFA scores both showed higher AUROC values compared with CURB-65, PSI, or I-ROAD. In HCAP patients with a qSOFA score ≥2, the AUROC value for the SOFA score was higher than that of all other predictive assessments. These results suggest that a combination of qSOFA and SOFA scores might be the best method to assess the prognosis in CAP and HCAP. Matsunuma et al. ( Matsunuma et al., 2014 ) reported that I-ROAD was the best predictive assessment for 30-day mortality in HCAP patients. However, the present study did not reproduce this result. This discrepancy may be due to differences in the variables assessed, the patient group, and the study design. In this study, the clinical profiles of the HCAP patients were quite different from those of the CAP patients. 
In particular, the HCAP patients showed a greater number of comorbidities than the CAP patients. Additionally, the HCAP patients in this study were more likely to be treated with anti-pseudomonal antibiotic therapy than the CAP patients. The HCAP group patients also showed higher 30-day and in-hospital mortality rates than the CAP group patients, consistent with the results of previous studies ( Matsunuma et al., 2014; Shindo et al., 2009; Ugajin et al., 2014 ). However, the 30-day and in-hospital mortality rates for the HCAP patients in the current study were lower than those reported previously (5% in the current study vs. 13.7–18.9% in previous studies). This study also found a small proportion of severe HCAP patients compared with previous studies, with 37% of patients showing an A-DROP score of 4–5, 73% of patients showing a CURB-65 score of 4–5, and 67% of patients showing a PSI score of IV–V. The differences in severity are likely related to the differences in mortality rates between the present study and previous studies. There were fewer severe/very severe pneumonia patients and more mild/moderate pneumonia patients due to the location of the study institution. In particular, patients living in nursing homes tended to present early to the hospital even if they were not very ill. This is because such patients cannot be cared for in a nursing home in Japan. With regard to the CAP patients in this study, the sample of patients was not large enough to identify differences between this study and previous studies. The mean age of patients in this study was 75.4 years, which seems to represent the elderly population in developed countries. The mean age in this study was higher than that reported in previous studies. The study institution is located in a rural area and there is no municipal hospital in the city. This institute works not only as a university hospital but also as a municipal hospital. 
These special factors could reflect the very high mean age of the cohort. Previously, Maruyama et al. ( Maruyama et al., 2013 ) reported that initial treatment failure and hypoalbuminemia were unfavorable prognostic factors for 30-day mortality in HCAP patients. In contrast, another study reported that inappropriate antibiotic therapy was not a poor prognostic factor for 30-day mortality ( Matsunuma et al., 2014 ). In support of these results, it was observed in the present study that neither inappropriate antibiotic therapy nor the detection of PDR pathogens was correlated with a poor outcome among either HCAP or CAP patients. Although the reasons for these results are unclear, it is possible that PDR pathogens are not always associated with pneumonia but rather may colonize the bronchial tracts or the lungs. Thus, CAP-related pathogens should be covered as an initial treatment. The overuse of broad-spectrum antibiotics does not contribute to improved outcomes among CAP and HCAP. Rather, it may lead to the occurrence of Clostridium difficile infections or the emergence of PDR pathogens, both of which could result in an increased risk of in-hospital mortality. There are several limitations to the present study. First, the study employed a retrospective design and only included a relatively small number of patients from one hospital. A large-scale multicenter study is thus necessary to assess the efficacy and accuracy of the qSOFA and SOFA scores as prognostic tools for 30-day mortality among CAP and HCAP patients. Second, this study had a lower proportion of severe HCAP patients compared with previous studies. This difference in severity may explain the lower mortality rate seen in the present study compared to previous studies ( Matsunuma et al., 2014; Shindo et al., 2009; Ugajin et al., 2014 ). In conclusion, qSOFA and SOFA scores were able to accurately evaluate the severity of CAP and HCAP. These tools could thus be useful in the treatment of this condition. 
The study results suggest that the combination of a qSOFA score ≥2 and a SOFA score ≥4 is an independent unfavorable prognostic factor for 30-day mortality among CAP patients, while the combination of a qSOFA score ≥2 and a SOFA score ≥6 is an independent unfavorable prognostic factor for 30-day mortality among HCAP patients. Funding source None to declare. Ethical approval This study was approved by the Institutional Review Board of Aichi Medical University Hospital. Conflict of interest No competing interest declared. Author contributions NA: study design, data collection, data analysis, writing; HW: data collection; AS: data analysis; DS: supervised microbiology; HK: supervised antibiotics; MH: data analysis; YK: data analysis; YY: data analysis; HS: supervised microbiology; HM: study design and final draft. Acknowledgements We are grateful for the diligent and thorough critical reading of our manuscript by Dr Yoshihiro Ohkuni, Chief Physician, Taiyo and Mr John Wocher, Executive Vice President and Director, International Affairs/International Patient Services, Kameda Medical Center (Japan). We acknowledge the 12 th Award in the category of Clinical Research conferred by the director of the West Japan Branch of the Japanese Society of Chemotherapy. Appendix A Supplementary data Supplementary material related to this article can be found, in the online version, at doi: https://doi.org/10.1016/j.ijid.2019.04.020 . Appendix A Supplementary data The following are Supplementary data to this article:
|
[
"AMERICANTHORACICSOCIETY",
"ASAI",
"BRITISHTHORACICSOCIETYSTANDARDSOFCARECOMMITTEE",
"CHARLES",
"CHARLSON",
"CLINICALANDLABORATORYSTANDARDSINSTITUTE",
"DEGROOT",
"ISHIDA",
"JAPANESERESPIRATORYSOCIETY",
"MANDELL",
"MARUYAMA",
"MATSUNUMA",
"MIYASHITA",
"MIYASHITA",
"NIEDERMAN",
"RAITH",
"RANZANI",
"SEKI",
"SHINDO",
"UGAJIN",
"VENDITTI",
"WHOGLOBALHEALTHOBSERVATORYGHO"
] |
cb90a7877b6d431ab01c9b3b7b5a9cdd_Multitask learning-based secure transmission for reconfigurable intelligent surface-aided wireless c_10.1016_j.icte.2022.05.003.xml
|
Multitask learning-based secure transmission for reconfigurable intelligent surface-aided wireless communications
|
[
"Moon, Sangmi",
"You, Young-Hwan",
"Kim, Cheol Hong",
"Hwang, Intae"
] |
Reconfigurable intelligent surfaces (RISs) represent a highly promising technology that enhances the capacity and coverage of wireless networks by intelligently reconfiguring the wireless propagation environment in highly advanced wireless communications. The objective of this study is to solve the problem of secrecy rate maximization for multiple RIS-aided millimeter-wave communications by jointly optimizing the active RISs and the RIS phase shifts of the considered system. For this nonconvex problem, we propose multitask learning in a deep neural network to predict the RIS phase shift and active RISs. Numerical results based on realistic, three-dimensional, ray-tracing simulations show that the proposed solution can predict the RIS phase and active RIS with an accuracy rate > 96%. These results confirm the viability of RIS-aided secure wireless communications.
|
1 Introduction Millimeter-wave (mmWave) communication has been considered a key technology for fifth-generation wireless communication systems because of its considerably high data rate and wide bandwidth [1] . However, a fundamental challenge is the increased sensitivity of the mmWave radio channel to blockages owing to reduced diffraction and higher path and penetration losses [2] . Reconfigurable intelligent surface (RIS) technology is being studied to solve this problem. This technology can increase the communication propagation distance by overcoming path loss and can secure a line-of-sight path by adjusting the phase shifts of the RIS using a large number of reconfigurable, passive reflecting elements [3] . In addition, it has the advantages of low cost and low power consumption compared with the existing repeater method. For integrated access and backhaul networks, Diamanti et al. [4] proposed the optimization of energy efficiency with respect to the phase shifts of the RIS elements. Achieving secure transmission of confidential information and avoiding eavesdropping remain challenging in the design of wireless communication systems [5] because the wireless signal is vulnerable to eavesdropping due to the openness of the wireless communication environment. 1.1 Prior studies Recently, RIS-aided secure wireless communications were investigated based on physical layer security (PLS) [6–11] . Shen et al. [6] maximized the power of received signals subject to the transmission power and unit modulus constraints to further improve the secrecy rate. Dong et al. [7] proposed an iterative optimization method to maximize the secrecy rate with respect to the phase shift coefficient and transmit covariance of the RIS. Zhou et al. [8] proposed a RIS-aided secure transmission scheme that considered hardware impairments. Tang et al. [9] utilized a jamming scheme to further improve the secrecy rate for RIS-aided networks. Trigui et al. 
[10] introduced the use of quantized phases in secure transmissions wherein the phase shift was optimized based on the quantized phase. Dong et al. [11] solved the nonconvex SR optimization problem based on this design and proposed an alternating optimization algorithm to jointly optimize the beamformer at the transmitter and the reflecting coefficient matrix at the RIS. Machine learning (ML) has been considered a powerful tool for classification and regression (prediction) problems. Recently, deep learning (DL) has emerged as a subcategory of ML and has led to several performance breakthroughs in areas such as speech processing and computational vision. These breakthroughs have motivated the application of DL in communications problems, particularly in the field of wireless communications. The use of ML/DL in RIS-enhanced wireless networks has been investigated in a number of previous studies [12–14] . Taha et al. [12] exploited the DL method for learning the RIS reflection matrices directly from the sampled channel knowledge without any RIS knowledge. Khan and Shin [13] investigated signal estimation and detection in RIS-enhanced wireless networks. A DL-based approach was proposed to estimate channels and phase angles from a reflected signal received by an RIS. Gao et al. [14] proposed a DL-based algorithm for the optimal design of the RIS phase shift by training the DL offline. 1.2 Contributions of study Most of the existing studies on RIS have focused on a single RIS that cannot satisfy the users’ high-quality service requirements owing to its limited coverage. Conversely, deploying multiple RISs in wireless communications can significantly enhance the quality of the service [15] . In this study, we propose a DL solution for multiple RIS-aided mmWave communications to maximize the secrecy rate. 
The main contributions of this study can be summarized as follows: • We propose a multiple RIS-aided secure transmission solution to configure the RIS phase shifts and select the active RISs. Through the proposed solution, we could establish communication via multiple RISs and thus enhance the coverage, propagation quality, and secrecy rate. • We adopt a multitask learning model to reduce the computational burden of the training process. In addition, we design a deep neural network (DNN) that learns the mapping from the input parameters, such as the user, eavesdropper (EVE), and RIS positions, to the output parameters, such as the RIS phase shift and active RISs, and then makes predictions. • We conduct a performance analysis of the proposed DNN for multiple RISs in mmWave communications. Specifically, we simulate the accuracy to assess the prediction of the RIS phase shift and active RISs. In addition, the secrecy rate results demonstrate the efficiency of the proposed solution with low complexity, thus rendering it a potential solution for multiple-RIS systems. 2 System model and problem formulation 2.1 System model We considered multiple RIS-aided downlink communications with a legitimate base station (BS), an EVE, and a legitimate user, as illustrated in Fig. 1 . In this system, multiple RISs, r = 1, …, R, are attached to buildings, and each RIS has N reflecting elements. The BS, user, and EVE are equipped with a single antenna. This assumption is adopted only for simplicity of exposition, and the proposed solutions and the results in this study can be readily extended to multiple antennas. In addition, owing to the high path loss or obstacle blockage, there is no direct link between the BS and the user [2] . The RIS is configured to facilitate communication between the BS and a user. Let h_{B,r} ∈ ℂ^{N×1}, h_{U,r} ∈ ℂ^{1×N}, and h_{E,r} ∈ ℂ^{1×N} represent the mmWave channels from the BS to the r-th RIS, from the r-th RIS to the user, and from the r-th RIS to the EVE, respectively. 
Let Θ_r = diag(θ_r) ∈ ℂ^{N×N} represent the r-th RIS phase shift matrix, where θ_r = [θ_{r,1}, …, θ_{r,N}]^T ∈ ℂ^{N×1} and θ_{r,n} = e^{jϕ_n}, with ϕ_n, n = 1, …, N, being the reflection phase shift. The received signal at the user is denoted as (1) y_u = ∑_{r=1}^{R} x_r h_{U,r} Θ_r h_{B,r} s + n_u. The received signal at the EVE is denoted as (2) y_e = ∑_{r=1}^{R} x_r h_{E,r} Θ_r h_{B,r} s + n_e. Here, s ∈ ℂ^{1×1} is the transmission data satisfying E[s s^H] = 1, and x_r ∈ {0, 1} is a binary variable, whereby x_r = 1 indicates that the r-th RIS is active; when x_r = 0, the r-th RIS is inactive. n_u ∼ CN(0, σ_u^2) and n_e ∼ CN(0, σ_e^2) are the additive white Gaussian noise signals at the user and EVE, respectively. To consider the characteristics of the mmWave channel, we adopted wideband geometric channels with L clusters [16] . In this model, each of the clusters contributes a ray with a time delay τ_l as well as azimuth and elevation angles of arrival (AoAs) given by θ_l and ϕ_l, respectively. p(τ) represents the pulse-shaping function for T_s-spaced signaling evaluated at τ seconds. Accordingly, the delay-d channel vector between the BS and the l-th RIS can be expressed as (3) h_{R,l} = √(N/ρ) ∑_{l=1}^{L} α_l p(d T_s − τ_l) a(θ_l, ϕ_l), where ρ represents the path loss between the BS and the l-th RIS, α_l is the complex gain of the l-th path, and a(θ_l, ϕ_l) is the array response vector of the RIS at the AoAs θ_l and ϕ_l. Similarly, the channels between the l-th RIS and the user/EVE, h_{U,l}/h_{E,l}, can be defined. 2.2 Problem formulation The objective was to maximize the secrecy rate by jointly optimizing the active RISs and RIS phase shifts. Based on (1) , the achievable rate for the user is given by (4) R_u = log_2(1 + (1/σ_u^2) ‖∑_{r=1}^{R} x_r h_{U,r} Θ_r h_{B,r}‖^2). Based on (2) , the achievable rate for the EVE is (5) R_e = log_2(1 + (1/σ_e^2) ‖∑_{r=1}^{R} x_r h_{E,r} Θ_r h_{B,r}‖^2). As a result, the secrecy rate is denoted as (6) R_s = [R_u − R_e]^+ [17] , where [x]^+ = max(0, x). Based on (4) – (6) , we formulate the secrecy rate maximization problem as (7a) max_{θ,x} R_s, (7b) s.t. θ_{l,n} ∈ P, ∀ l ∈ L, ∀ n ∈ N, (7c) x_l ∈ {0, 1}, ∀ l ∈ L, where θ is constrained to a predefined codebook P and x = [x_1, …, x_R]^T. 3 Multitask learning-based secure transmission Owing to the non-convexity of the objective function and the constraints, the secrecy rate maximization problem in (7) is highly non-convex. In this section, we propose a multitask learning solution to predict the phase shift and active RISs, as shown in Fig. 2 . 3.1 System operation Both the prediction of the phase shift and that of the active RISs are posed as classification problems, and both share the same input data. Therefore, to reduce the computational burden of the training process, we propose a multitask learning model in a DNN for the two tasks. The proposed model operates in two steps: offline training and online prediction. During the offline training, we first collected the dataset by an exhaustive search to solve the secrecy rate maximization problem. The dataset included the user, EVE, and RIS positions as the input parameters and the phase shift and active RISs as the output parameters. After a sufficient amount of data was collected in the dataset of the RIS controller, we used the DNN to train the multitask learning model using the collected dataset. During the online prediction, by feeding the input parameters (such as the user, EVE, and RIS positions) into the trained multitask learning model, the parameters for optimally secure transmission, including the optimal phase shift and active RISs, can be predicted at the output. By moving the complexity of online computation to offline training, the complexity of solving the secrecy rate maximization problem in (7) is determined by online prediction. 
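The exhaustive-search labeling described in Section 3.1 can be sketched directly from the achievable-rate and secrecy-rate definitions: enumerate every on/off pattern and every codebook phase assignment, and keep the configuration with the largest secrecy rate. The dimensions, the 2-phase codebook, and the random channel realizations below are illustrative assumptions, not the paper's settings.

```python
# Brute-force secrecy-rate maximization over active RISs and discrete phases.
import itertools
import numpy as np

rng = np.random.default_rng(0)
R, N = 2, 2                                # number of RISs, elements per RIS
P = np.exp(1j * np.array([0.0, np.pi]))    # toy 2-entry phase codebook
h_B = rng.standard_normal((R, N)) + 1j * rng.standard_normal((R, N))  # BS -> RIS r
h_U = rng.standard_normal((R, N)) + 1j * rng.standard_normal((R, N))  # RIS r -> user
h_E = rng.standard_normal((R, N)) + 1j * rng.standard_normal((R, N))  # RIS r -> EVE
sigma_u2 = sigma_e2 = 1.0

def rate(h_rx, x, thetas, sigma2):
    # achievable rate: log2(1 + |sum_r x_r h_rx,r diag(theta_r) h_B,r|^2 / sigma^2)
    s = sum(x[r] * (h_rx[r] * thetas[r]) @ h_B[r] for r in range(R))
    return np.log2(1.0 + abs(s) ** 2 / sigma2)

best = (-np.inf, None)
for x in itertools.product([0, 1], repeat=R):               # active-RIS patterns
    for idx in itertools.product(range(len(P)), repeat=R * N):  # phase assignments
        thetas = P[np.array(idx)].reshape(R, N)
        R_s = max(0.0, rate(h_U, x, thetas, sigma_u2) - rate(h_E, x, thetas, sigma_e2))
        if R_s > best[0]:
            best = (R_s, (x, thetas))

print(f"best secrecy rate: {best[0]:.3f} bits/s/Hz")
```

The labeled (position, optimal phase/activation) pairs produced this way form the training set; the combinatorial cost of this search is exactly what the offline-trained DNN avoids at prediction time.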
3.2 Multitask learning model in DNN We considered the predictions of the RIS phase shift and active RISs as two individual deep learning tasks owing to the interaction between the two independent outputs (i.e., the RIS phase shift and active RISs) of the secrecy rate maximization problem. The learning efficiency and prediction accuracy can be improved using a multitask learning structure compared with the case in which the models are trained separately [18] . Fig. 3 shows the framework of the trained multitask learning model. We adopted the DNN as the deep multitask learning model. The two learning tasks interact with each other. Each learning task i is performed with a training data set consisting of J training samples. Accordingly, we have (8) S_i = {(X_j^i, Y_j^i)}, where X_j^i represents the j-th training instance in the i-th task and Y_j^i represents its label. For the input, the two tasks shared the same inputs, i.e., the user, EVE, and RIS positions. For the output, the dimensions of the outputs in each prediction varied depending on the number of classes: there were |P| choices for the RIS phase prediction and 2^L − 1 choices corresponding to the active RISs (active or power-off for each RIS). Both tasks can be regarded as classification problems, where the probability of each class is predicted using the Softmax function, i.e., the predicted probability for the d-th class is (9) p_d = e^{z_d} / ∑_{i=1}^{D} e^{z_i}, where z_i, i = 1, …, D, represents the i-th element of the D-dimensional projection vector and D is the total number of classes [18] . The loss function is defined as the cross-entropy. Accordingly, we have (10) L = −∑_{d=1}^{D} t_d log t̂_d, where t_d and t̂_d represent the target vector and the actual output of the neurons, respectively. The loss function of the proposed multitask learning model is defined as the weighted sum of the two cross-entropies, which is expressed as (11) L_MTL = ξ_P L_P + ξ_A L_A, where ξ_P and ξ_A represent the weights of the phase shift and active RIS tasks, respectively, and L_P and L_A represent the loss functions of the phase shift and active RIS tasks, respectively. 3.3 Complexity analysis In this subsection, we analyze the computational complexity of the proposed learning-based phase shift and active RIS prediction method. In the offline training, the computational complexity of the DNN can be expressed as C_DNN ∼ O(∑_{l=1}^{L−1} n_{l−1} n_l), where n_{l−1} and n_l denote the numbers of neurons in the (l−1)-th and l-th layers, respectively. In addition, we only need to train our learning model once, and the complexity of solving the optimization problem is determined by online prediction, the computation having been moved to offline training. Therefore, the solution to the optimization problem can be obtained efficiently by performing a feedforward calculation without iterations. Thus, the complexity can be decreased considerably. 4 Simulation results 4.1 Simulation setup The simulation setup was based on the publicly available, generic DeepMIMO [19] dataset based on the outdoor ray-tracing scenario “O1”. The parameters are described in Table 1 . The positions of the BS and EVE were fixed, while the user could take any random position in a specified x–y grid, as illustrated in Fig. 4 . We selected BS 6 and 8 as the RISs. We constructed our DNN in Keras with a TensorFlow backend. The rest of the simulation was implemented in MATLAB. We adopted a DFT codebook for the RIS phase shift matrix [20] . Specifically, considering the N_H × N_V UPA structure, we defined the RIS phase shift codebook as (12) P_DFT = P_{N_H} ⊗ P_{N_V}, where P_{N_H} ∈ ℂ^{N_H×N_H} is a DFT codebook for the azimuthal (horizontal) dimension, whose n_H-th column, n_H = 1, 2, …, N_H, is defined as [1, e^{−j 2π n_H / N_H}, …, e^{−j (N_H−1) 2π n_H / N_H}]^T. The codebook P_{N_V} was defined analogously for the elevation (vertical) dimension. 4.2 Performance evaluation The proposed architecture performs two tasks: (i) prediction of the phase shift of each RIS, and (ii) prediction of the active RISs, i.e., the active or power-off status of each RIS. For the phase shift prediction, we evaluated the top-1, top-3, and top-5 accuracies with respect to training sets of varying sizes, as shown in Fig. 5 . The top-1 accuracy shows that the architecture will likely identify the correct beam with an accuracy rate of ∼ 77%. This accuracy could be improved further with additional beam training considering the top-3 and top-5 beams predicted by the architecture. For instance, upon beam-training the top-3 predictions of the architecture, the prediction accuracy increased by 16% (from ∼ 77% to ∼ 89%). Thus, the prediction performance of the communication link could be improved. Furthermore, this architecture achieved a near-perfect prediction for this task with a ∼ 96% accuracy rate. This confirmed that the proposed DNN was capable of predicting an optimal phase shift based on the user, EVE, and RIS positions with a high success probability. Given that the prediction is binary, we evaluated only the top-1 accuracy with respect to training sets of varying sizes for the active RIS prediction, as shown in Fig. 6 . The results show that the DNN model successfully predicted the active RISs with an accuracy rate greater than 98%, based on the user, EVE, and RIS positions. Fig. 7 shows the cumulative distribution function of the secrecy rate for different RIS-aided schemes. The proposed RIS-aided scheme can achieve a higher secrecy rate than the single RIS-aided scheme. This is because multiple RISs can provide more than one received signal path when deployed in space. 
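The per-task Softmax and weighted cross-entropy objective of Eqs. (9)–(11) reduce to a few lines of NumPy. This is a minimal sketch; the class counts, logits, and equal task weights are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

def softmax(z):
    # Eq. (9): class probabilities from a logit vector (shifted for stability)
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(t, p):
    # Eq. (10): cross-entropy between one-hot target t and predicted p
    return -np.sum(t * np.log(p + 1e-12))

def multitask_loss(z_phase, t_phase, z_active, t_active, xi_P=0.5, xi_A=0.5):
    # Eq. (11): weighted sum of the phase-shift and active-RIS task losses
    L_P = cross_entropy(t_phase, softmax(z_phase))
    L_A = cross_entropy(t_active, softmax(z_active))
    return xi_P * L_P + xi_A * L_A

# one sample: 4 phase-shift classes, 2 active/power-off classes (toy sizes)
z_phase = np.array([2.0, 0.1, -1.0, 0.3]); t_phase = np.array([1.0, 0.0, 0.0, 0.0])
z_active = np.array([0.2, 1.5]);           t_active = np.array([0.0, 1.0])
print(f"L_MTL = {multitask_loss(z_phase, t_phase, z_active, t_active):.4f}")
```

In a Keras/TensorFlow implementation like the one described in Section 4.1, the same weighting is typically expressed by giving the two output heads separate categorical cross-entropy losses with per-task loss weights.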
The proposed RIS-aided scheme has a higher secrecy rate than the all-active-RISs scheme because the RIS activation in the multiple-RIS scheme can be adapted to reduce the signal leakage toward the EVE while increasing the signal gain of the user; this leads to reduced EVE effects and an increased secrecy rate. Lastly, the performance of the proposed algorithm is similar to that of the exhaustive search-based algorithm, but it can obtain the near-optimal solution with low complexity. 5 Conclusion In this study, we proposed a multitask learning model to maximize the secrecy rate for multiple RIS-aided communications, using a DNN to jointly optimize the active RISs and corresponding phase shifts. We used accurate three-dimensional ray-tracing to analyze the performance of the proposed deep learning solution in RIS-aided secure wireless communications. The simulation results demonstrated that the proposed solution could predict the RIS phase shift and active RISs with an accuracy rate exceeding 96%. Thus, it can be used for RIS-aided secure wireless communications in the future. CRediT authorship contribution statement Sangmi Moon: Conception and design of study, Acquisition of data, Analysis and/or interpretation of data, Writing – original draft, Writing – review & editing, Approval of the version of the manuscript to be published. Young-Hwan You: Conception and design of study, Approval of the version of the manuscript to be published. Cheol Hong Kim: Writing – original draft, Approval of the version of the manuscript to be published. Intae Hwang: Conception and design of study, Analysis and/or interpretation of data, Writing – original draft, Writing – review & editing, Approval of the version of the manuscript to be published. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. 
Acknowledgments “This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT: Ministry of Science and ICT) ( 2020R1I1A1A01073948 and 2021R1A2C1005058 )”. “This research was supported by the BK21 FOUR Program (Fostering Outstanding Universities for Research, 5199991714138) funded by the Ministry of Education (MOE, Korea) and National Research Foundation of Korea (NRF) ”.
|
[
"MITRA",
"MARCUS",
"YEH",
"ZHOU",
"SHEN",
"DONG",
"ZHOU",
"TANG",
"TRIGUI",
"DONG",
"TAHA",
"KHAN",
"GAO",
"HUANG",
"ALKHATEEB",
"LU",
"RUDER",
"AMADORI"
] |
802cf2aaadda4cceb76acfc1c96d8805_Chimeric engulfment receptors A new cell therapy approach for SIV and HIV infection_10.1016_j.omtm.2022.12.010.xml
|
Chimeric engulfment receptors: A new cell therapy approach for SIV and HIV infection
|
[
"Orentas, Rimas J."
] | null |
In this issue, authors Daniel Corey, Francoise Haeseleer, Joe Hou, and Lawrence Corey from CERo Therapeutics and the Fred Hutchinson Cancer Center describe a new approach to creating chimeric antigen receptors (CARs) in “Chimeric engulfment receptors (CERS): a new cell therapy approach for SIV- and HIV-infection”. Effective disease control with engineered T cells against targets other than hematologic malignancies, whether the application is for solid tumors or viral-specific CARs, remains elusive. Viral-specific T cells for cytomegalovirus (CMV) and Epstein-Barr virus (EBV) have been shown to be effective in some settings. Activity against HIV, whether mediated by a T cell receptor (TCR)- or CAR-based approach has proven to be much more difficult. In an entirely orthogonal approach, Corey et al. targeted a characteristic of an infected cell population as opposed to an antigen encoded by the virus itself or an overexpressed tumor-associated protein. 1 The expression of phosphatidyl serine (PS) on the surface of effete red cells, apoptotic cells, and tumor cells is well established. Corey et al. observed that PS is also overexpressed on cells infected by lentiviruses early post-infection. The cell surface glycoprotein TIM-4 specifically binds PS. Expression of TIM-4 by macrophages facilitates the clearance of apoptotic cells and the engulfment of effete red cells, removing them from circulation. Corey et al. linked the extracellular aspect of TIM-4 that binds to PS with intracellular signaling domains normally associated with CAR-T activity, thus creating chimeric “engulfment” receptors (CERs). CERs were demonstrated to mediate enhanced killing of simian immunodeficiency virus (SIV)-infected CD4+ T cells, demonstrating that an innate immune receptor system can be redirected to clear virus-infected cells. 1 Expression of PS on the cell surface can be induced by cells whose biology has been altered by viral or parasitic infection. 
HIV specifically triggers the expression of PS early in infection not only through a cell death pathway but by a much earlier event whereby scramblase enzymes (which redistribute phospholipids on the membrane) are activated in order to promote membrane fusion events required for infection and viral budding. 2 Building on these observations, Corey et al. designed CERs to take advantage of the induced exposure of PS, leveraging the ability of TIM-4 to bind PS and thereby transmit a CAR-like transmembrane signaling cascade. 3 Another novel aspect of developing the reported CER constructs was the screening assays carried out to evaluate both standard T cell activation domains often included in CARs, such as CD3-zeta or CD28, and signaling domains from innate immune receptors involved in pathogen recognition. These included the signaling domains from Toll-like receptors (TLRs), TRAF6, DAP10, and DAP12. The greatest anti-SIV effects were seen with CER constructs expressing either TLR8-CD3zeta or TRAF6 intracellular signaling domains. The TLR8 signaling pathway includes MyD88 and TRAF6, both resulting in nuclear factor κB (NF-κB) expression and cellular activation. 4 While the CER approach is novel, the mechanism of cell killing has yet to be established. CERs are active in CD4 T cells but not CD8 cells, which has not been reported in either CAR or recombinant TCR approaches. Thus, the activated T cells may kill their PS-expressing targets by expression of NK-like receptors, i.e., CTAK activity, by nibbling of target cell membranes, or by a yet-to-be-described mechanism of membrane alteration. 5 Linking of TLR8-CD3zeta or TRAF6 signaling to a specific killing mechanism would allow for a broader exploration of the reported findings. 6 The significance of this new work lies in the demonstrated ability to engineer cells of the adaptive immune system to recognize cellular characteristics normally associated with innate immune cell activity. 
Corey et al. used PS as a signal to control retroviral infection by an engineered T cell. Similar receptors, if expressed by T regulatory cells, could be used to modify tissue inflammation. Future work could use a cascade of these signals to gate effector or regulatory functions that react to more broadly expressed target cell characteristics reflecting cellular physiology, as opposed to traditional microbial or cancer-associated antigens. Immediate next steps are to test the SIV model presented in vivo to determine if CER-T cells can control SIV infection in an animal model. Declaration of interests R.J.O. has received research support from Miltenyi Biotec.
|
[
"ZAITSEVA",
"KOBAYASHI",
"MIYANISHI",
"WANG",
"BOBBIN"
] |
35e3013d306d4dbdb0acd458152ec4ec_Microstructure of an additively manufactured Ti-Ta-Al alloy using novel pre-alloyed powder feedstock_10.1016_j.addlet.2023.100144.xml
|
Microstructure of an additively manufactured Ti-Ta-Al alloy using novel pre-alloyed powder feedstock material
|
[
"Lauhoff, C.",
"Arold, T.",
"Bolender, A.",
"Rackel, M.W.",
"Pyczak, F.",
"Weinmann, M.",
"Xu, W.",
"Molotnikov, A.",
"Niendorf, T."
] |
Binary Ti-Ta and ternary Ti-Ta-Al alloys have attracted considerable attention as potential new biomaterials and/or high-temperature shape memory alloys. However, conventional forming and manufacturing of refractory-based titanium alloys is difficult and cost-intensive, especially when complex shapes are required. Recently, additive manufacturing (AM) emerged as a suitable alternative, and several studies exploited elemental powder mixing approaches to obtain a desired alloy and subsequently use it for complex shape manufacture. However, this approach has one major limitation associated with material inhomogeneities after fabrication. In the present work, a novel pre-alloyed powder material of a Ti-Ta-Al alloy was additively manufactured. To this end, the electron beam powder bed fusion (PBF-EB/M) technique was used for the first time to process such a Ti-Ta based alloy system. Detailed microstructural analysis revealed that the additively manufactured structures had near full density and high chemical homogeneity. Thus, AM of pre-alloyed feedstock material offers great potential to overcome major roadblocks, even when significant differences in the melting points and densities of the constituents are present, as proven in the present case study. The homogeneous microstructure allows short-term thermal post-treatments to be applied. The highly efficient process chain detailed here will open up novel application fields for Ti-Ta based alloys.
|
1 Introduction Nowadays, additive manufacturing (AM) techniques are widely adopted in academia and industry to fabricate functional metal parts and components of unprecedented geometrical complexity. For the two major categories of metal AM technologies, i.e. powder bed fusion (PBF) and directed energy deposition (DED), structures are fabricated directly from a computer-aided design (CAD) file through layer-wise melting of either powder or wire feedstock material. The possibility of tool-free design makes it possible to overcome limitations of conventional manufacturing processes, especially when challenging materials have to be processed, which causes high production costs [1] . Furthermore, AM techniques have also been employed for direct microstructure design. By controlling the thermal gradient and the solidification velocity via an adequate choice of the processing parameters, local microstructural features and, thus, mechanical properties can be tailored [2–5] . Given these AM characteristics, topology-optimized geometries and locally tailored microstructures can lead to a new generation of lightweight designs that are highly attractive for applications in the automotive, aerospace and biomedical sectors. Over the last two decades, a variety of materials have been qualified for AM technologies to open up new industrial markets [ 6 , 7 ]. Among these materials, titanium and its alloys are particularly prominent due to their superior properties, i.e. high strength-to-weight ratio, good biocompatibility, and excellent corrosion resistance [8] . Ti-Ta alloys have attracted significant attention for multipurpose use as bone implants as well as high-temperature actuators due to enhanced osseointegration [9–11] and shape memory properties [12] , respectively. Binary Ti-Ta alloys compete with Ti-6Al-4V, the latter being the most common alloy used for orthopedic implants. 
Tremendous efforts have been devoted to the processing of Ti-6Al-4V via AM, and the relationships between process, microstructure and mechanical properties are well understood [13–15] . However, the suitability of Ti-6Al-4V as a bone implant alloy has recently been questioned due to potential toxicity concerns associated with the alloying elements [ 16 , 17 ] and its inherently high elastic modulus of 113 GPa, which is much higher than that of human bone (5 – 30 GPa). The latter promotes the stress-shielding effect, bone resorption and eventually implant failure [18] . Alloying titanium with tantalum, in turn, seems to be a highly promising approach for designing novel biomaterials. Tantalum is known for its nontoxic nature, and Ti-Ta alloys were reported to show reduced elastic moduli [19] , leading to better biocompatibility and superior implant integration with the human body (compared to Ti-6Al-4V [9–11] ), respectively. Furthermore, Ti-Ta alloys can feature shape memory behavior well above 100°C and, thus, have come into research focus as high-temperature shape memory alloy (HT-SMA) candidates [ 12 , 20 ]. Their unique functional properties are based on a thermoelastic, reversible phase transformation between the high-temperature austenitic β-phase (body-centered cubic, bcc) and the low-temperature martensitic α’’-phase (orthorhombic) [12] . Transformation strains of up to 3.6% were reported [21] . Unfortunately, binary Ti-Ta is prone to rapid functional degradation during thermal or thermo-mechanical cycling. Formation of the hexagonal ω-phase leads to the stabilization of the high-temperature β-phase, resulting in a deterioration or even loss of the functional properties [ 12 , 22 , 23 ]. However, alloying with ternary elements such as tin and aluminum can improve the functional stability by delaying the formation of the ω-phase [ 21 , 24 , 25 ]. In addition, compared with Ni-Ti-Hf, currently the most promising HT-SMA system [ 26 , 27 ], Ti-Ta- X (e.g. 
X = Al, Sn) HT-SMAs contain more reasonably priced constituents and provide good workability [ 12 , 25 ]. Despite these advantages, Ti-Ta(-Al) alloys are still not widely adopted in industrial applications. The challenging alloy formation caused by the substantial differences in the alloying elements’ melting points and densities is a major roadblock towards their widespread use. The differences can cause vaporization of the lower-melting elements and chemical inhomogeneities by segregation of the constituents during the solidification process, respectively [ 28 , 29 ]. In the case of conventional processing, alloy ingots must be remelted and annealed many times to obtain adequate homogeneity [28] . Recently, AM came into focus as a new processing route capable of producing refractory titanium alloys (e.g. Ti-Nb, Ti-Ta) via in situ alloying. Since pre-alloyed feedstock material has in most cases not been available so far, it was shown that mechanical mixing of elemental powders and subsequent processing can lead to fabrication of the desired alloy [30–35] . However, it was noted that in situ alloying results in local inhomogeneity of the microstructure due to the presence of unmolten niobium and tantalum particles, at least when the energy input is too low. On the other hand, an increase in energy leads to substantial keyhole formation [30–32] . Some success was reported by Brodie et al. [32–34] for Ti-Ta and Huang et al. [35] for Ti-Nb. To promote homogeneity, the authors utilized a remelting scanning strategy. However, this approach reduces the productivity of the AM process and limits its applicability for printing structures with fine features. Pre-alloyed feedstock material is an optimal candidate to overcome the aforementioned limitations. One example was reported by Schulze et al. [36] focusing on a Ti-Nb alloy. Ternary Ti-Nb-Ta pre-alloyed powders especially designed for application in AM processes have recently been patented [37] . 
However, to the best of the authors’ knowledge, no work has been published on AM of pre-alloyed Ti-Ta based material. Furthermore, all of the previous studies on additively manufactured Ti-Ta based alloy systems focused on laser beam powder bed fusion (PBF-LB/M, abbreviation according to ISO/ASTM 52900 standard terminology). Thus, in order to close this prevailing gap, the present study reports on a pre-alloyed Al-modified Ti-Ta alloy and its processing by electron beam powder bed fusion (PBF-EB/M). The aim of the investigations was to shed light on the microstructural evolution along the whole process chain from feedstock material to additively manufactured bulk material. To this end, detailed microstructure analysis was performed using optical and scanning electron microscopy as well as high-energy synchrotron diffraction. In particular, alloy formation characterized by a homogeneous element distribution is proven under these highly challenging processing conditions, i.e. processing under vacuum of an alloy system featuring extreme differences in melting point and density. 2 Material and methods 2.1 Material and processing In the present study, a Ti-Ta-Al alloy with a nominal chemical composition of Ti-25Ta-5Al (wt.-%) was additively manufactured. While a tantalum content of 25 wt.-% was chosen in light of orthopedic implant applications [ 10 , 19 ], small amounts of Al were added (regardless of the potential toxicity concerns for biomaterials) due to its well-known stabilizing effect on shape memory properties [ 21 , 24 ]. Thereby, an alloy system with constituents featuring highly different physical properties was evaluated. Pre-alloyed powder feedstock material was produced via electrode induction melting inert gas atomization (EIGA). For powder manufacturing, pure elemental powders were blended and subsequently cold isostatically pressed (CIP) to form rod-shaped electrode material with a diameter of 45 mm and a length of about 330 mm. 
Employing a Leco TC-436 analysing unit, the oxygen and nitrogen contents of the CIP rod were determined to be 2834 µg/g and 292 µg/g, respectively. The carbon content was found to be 120 µg/g using a Leco TC-444 analyzing system. The atomization process was performed using inert argon gas (99.999%) at a gas pressure of 25 bar. From the obtained spherical powder, a powder fraction featuring nominal particle sizes between 63 and 125 µm was extracted by sieving. The PBF-EB/M process was conducted on a GE Additive Arcam A2X machine. In order to limit the volume of required powder material, a build plate reduction with dimensions of 50 × 50 mm 2 was utilized. Cuboidal blocks with dimensions of 10 × 10 mm 2 base area and 50 mm in height were fabricated on a steel (AISI 304) build plate employing beam currents and beam speeds in a range of 7 – 13 mA and 2500 – 3500 mm/s, respectively. A bidirectional meander scanning strategy with 90° rotation between successive layers was employed. In each layer, the exposure was accomplished block by block. For the sake of brevity, only the material processed using the set of parameters leading to the highest density is detailed in the present paper. A summary of these processing parameters is given in Table 1 . Using those values for the acceleration voltage, beam current, beam speed, and hatch distance, the resulting energy per unit area is calculated to be 1.71 J/mm 2 . 2.2 Sample preparation and characterization Plates of 1.5 mm thickness were machined along the build direction (BD) from the PBF-EB/M manufactured cuboids by electro-discharge machining (EDM). In order to remove the EDM-affected surface layer, the plates were ground down to 5 µm grit size. The additively manufactured Ti-Ta-Al was investigated in two different material states: as-built and after a heat treatment at 1200 °C for 21 h followed by water quenching. All plates were encapsulated in quartz glass tubes under argon atmosphere to avoid oxidation. 
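The area energy quoted above follows from the standard relation E_A = U·I/(v·h). Below is a minimal sketch; since Table 1 is not reproduced in this excerpt, the beam current, beam speed and hatch distance are illustrative assumptions chosen from within the quoted parameter ranges (the Arcam A2X operates at an acceleration voltage of 60 kV), not the actual Table 1 values.

```python
def area_energy(voltage_v: float, current_a: float,
                speed_mm_s: float, hatch_mm: float) -> float:
    """Area energy E_A = U * I / (v * h) in J/mm^2, a common
    figure of merit for powder bed fusion processing windows."""
    return voltage_v * current_a / (speed_mm_s * hatch_mm)

# Illustrative values: 60 kV acceleration voltage, 10 mA beam current,
# 3500 mm/s beam speed and 0.1 mm hatch distance (assumed, not from Table 1).
e_a = area_energy(voltage_v=60e3, current_a=10e-3,
                  speed_mm_s=3500.0, hatch_mm=0.1)
print(f"{e_a:.2f} J/mm^2")  # -> 1.71 J/mm^2
```

With these assumed settings the relation yields 1.71 J/mm², consistent with the value stated in the text; other U·I/(v·h) combinations within the quoted ranges can give the same result.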
For microstructure characterization, plates in both conditions were vibration-polished for 1 h with a 0.04 µm colloidal silica suspension (OP-S NonDry, Struers). Etching was performed for 30 s using Kroll's reagent. Analysis of the process-induced defect distribution was conducted using optical microscopy (OM). For analysis of the chemical composition, local segregations and crystallographic texture, scanning electron microscopy (SEM) including energy-dispersive X-ray spectroscopy (EDS) and electron backscatter diffraction (EBSD) using a Zeiss Ultra Plus Gemini microscope was used. The SEM measurements were performed with an acceleration voltage of 20 – 30 kV. All microsections shown in the following were recorded in a plane parallel to the lateral surfaces, i.e. parallel to the build direction (BD), and depict representative areas from the center of the PBF-EB/M processed cuboids that are not affected by surface phenomena. Phase analysis was carried out at the Deutsches Elektronen-Synchrotron (DESY) at beamline P61A in Hamburg, Germany. High-energy synchrotron diffraction makes it possible to probe sample volumes of several mm³ and, thus, provides detailed high-resolution microstructure analysis. Polychromatic synchrotron radiation, covering an energy range of 20 – 200 keV, and a Mirion high-purity germanium point detector with a collimator-slit system were used. For further details on the synchrotron beamline P61A, the reader is referred to Reference [38] . 3 Results and discussion Fig. 1 shows SEM images of the pre-alloyed Ti-Ta-Al powder in as-atomized and sieved condition. Following the EIGA process, the particles are highly spherical and feature smooth surfaces free of satellites and defects such as cracks. The powder material is fully deagglomerated, as seen from the secondary electron (SE) image in Fig. 1 a. It should be noted that a slight residual amount of particles with diameters below 25 µm has remained in the sieved powder fraction. 
However, this fraction has no detrimental effect on the processability (powder flowability and printability) of the powder material. Structures with high relative density could be fabricated in this study (cf. Fig. 3 and see details below). A cross-section backscattered electron (BSE) micrograph of the powder particles is shown in Fig. 1 b, indicating a dendritic microstructure. EDS mappings of a polished particle cross-section in Fig. 2 reveal the chemistry of the powder material. Within the limits of EDS accuracy, the quantitative results of the EDS analysis ( Table 2 ) show that the overall chemical composition of the particles is in good agreement with the nominal composition of the Ti-25Ta-5Al alloy. As can be deduced from the mappings, however, the powder particles feature dendrite-type structures, i.e. chemical inhomogeneity associated with microsegregation during solidification. The inter-dendritic phase is enriched in titanium ( Fig. 2 a). The inherent cooling rates during powder synthesis by EIGA are not sufficiently high to fully prevent segregation processes and, thus, the findings are fully in line with results recently reported on gas-atomized Ti-42Nb [36] and tungsten-containing Ti-Al powders [39] . Nonetheless, it will be shown below that the PBF-EB/M processed Ti-Ta-Al does not comprise such dendritic microstructural features. Additively manufactured structures with a very homogeneous element distribution could be obtained using the pre-alloyed powder material for fabrication. For initial microstructure characterization after PBF-EB/M processing, OM was conducted. A representative micrograph is shown in Fig. 3 , revealing a crack-free microstructure for the Ti-Ta-Al blocks (10 × 10 × 50 mm³) in the as-built condition. In AM processes, high residual stresses may result from steep thermal gradients [ 40 , 41 ]. 
However, elevated base plate and build temperatures effectively reduce process-induced residual stresses [ 40 , 41 ], eventually hampering substantial crack formation as in the present case. At this point, size effects are known to have a significant influence on cracking in PBF-LB/M processes; however, PBF-EB/M, as a hot-bed AM process, does not suffer from such issues to the same extent. Thus, it is expected that the findings discussed and presented here will be transferable to real components. Such an assessment, however, is beyond the scope of the present work and will be the focus of follow-up studies. Only a small amount of porosity is visible after processing. Using ImageJ software, a relative density of 99.86% has been determined from a series of optical micrographs (not shown). Pore sizes of up to 157 µm (cf. Fig. 3 ) and an average pore sphericity of 0.96, being very close to a spherical shape, were found. The coincidence of high sphericity and small diameters of pores points to gas porosity induced by gas entrapment in the initial powder material, while the larger pores (cf. defect in the middle of Fig. 3 ) likely result from keyholing effects [42–44] . Lack of fusion porosity [42] , however, has been avoided with the set of processing parameters used in the present work. In order to shed light on the phase composition of the PBF-EB/M processed Ti-Ta-Al, structure identification was conducted employing high-energy synchrotron diffraction experiments. Fig. 4 shows the corresponding diffraction patterns obtained at room temperature from the additively manufactured material in both as-built and annealed condition. Please note that the latter state will be considered later. The microstructure of the Ti-Ta-Al in as-built condition, i.e. without conducting a post-process heat treatment, mainly consists of the hexagonal close-packed (hcp) α-phase. In addition, minor fractions of the bcc β-phase are also present (cf. 
the low intensity diffraction peak at around 87.5 keV). The lattice parameters are a α = 0.2930 nm / c α = 0.4694 nm and a β = 0.3286 nm for the α- and β-phase, respectively. All parameters are in accordance with data reported in the literature for binary Ti-Ta [ 45 , 46 ]. It should be mentioned that no detailed phase diagram is available for the alloy composition investigated. Tantalum and aluminum have opposite influences on the phase formation of titanium alloys; tantalum is a β-stabilizer whereas aluminum is known to stabilize the α-phase [ 8 , 45 ]. Based on the empirical molybdenum equivalency ([Mo]eq) and aluminum equivalency ([Al]eq) [47–49] , the Ti-25Ta-5Al alloy can be assessed in the context of β-phase stability, possible constituent phases and the resulting microstructure. With a [Mo]eq of 5.5 and an [Al]eq of 5.0, the current Ti-25Ta-5Al alloy, to a large extent, is similar to Ti-6Al-2Sn-4Zr-6Mo (Ti-6246) [ 47 , 48 , 50 ] as a heat-treatable α+β dual-phase alloy. Consequently, the α+β dual-phase microstructure observed in PBF-EB/M Ti-25Ta-5Al is supposed to be close to the equilibrium state. Due to the relatively high processing temperatures of around 850 °C (cf. Section 2 ) as well as the inherent slow cooling within the process chamber after melting the uppermost layer in PBF-EB/M, decomposition of the high-temperature β-phase into the low-temperature α-phase takes place. In addition, there is no evidence for the existence of the non-equilibrium martensitic phases α’ (hcp) and α’’ (orthorhombic) and the non-equilibrium ω-phase (hexagonal) in the as-built condition ( Fig. 4 ), again indicating a near-equilibrium state [45] . The prior-β grain structure as well as the morphology and crystallographic orientation of the volume-dominant α-phase are clarified by the SEM EBSD analysis shown in Fig. 5 . The α-phase was indexed with an hcp crystal structure (P63/mmc) and the lattice parameters determined from the synchrotron diffraction pattern in Fig. 4 , i.e. 
a α = 0.2930 nm / c α = 0.4694 nm. The inverse pole figure (IPF) mapping and the corresponding IPF ( Fig. 5 b and c, respectively) reveal a well-known Widmanstätten patterned microstructure without an obvious global texture of the α lamellae formed upon PBF-EB/M processing. In bulk metallic materials, microstructures formed during AM, e.g. PBF-EB/M, PBF-LB/M as well as directed energy deposition (DED), are often dominated by columnar grains oriented in BD due to epitaxial growth along the main direction of heat flow [1] . As can be deduced from the SE image ( Fig. 5 a), the aspect ratios (length/width) of the prior-β grains are > 1. Thus, in accordance with grain structures reported in other studies for additively manufactured Ti-Ta alloys [ 31 , 51 , 52 ], a clear tendency to columnar prior-β grain formation is also visible in the present study. However, it has to be noted that microstructures after PBF-EB/M processing can significantly differ from PBF-LB/M microstructures in terms of the phase composition. The inherent high cooling rates of the PBF-LB/M process effectively hamper diffusion-controlled processes and, thus, in situ quenched-in non-equilibrium martensitic phases are often reported in PBF-LB/M fabricated α and α+β Ti alloys [ 13 , 31 , 34 , 53 ]. In contrast, (near)-equilibrium phases are typically seen upon PBF-EB/M as rationalized before [ 13 , 54 ]. Upon formation of the α-phase within the parent β-phase by a solid state transformation, the α-phase enriches in α-stabilizer and depletes in β-stabilizer, and vice versa [8] . Beside decomposition, aluminum and tantalum are also known to form intermetallic compounds such as Al 3 Ta and Al 69 Ta 39 [ 55 , 56 ]. However, the synchrotron results ( Fig. 
4 ) provide clear evidence that these intermetallic phases have not formed in the alloy system under consideration, which is also perfectly in line with detailed microstructure analysis conducted on conventionally processed Ti-Ta-Al alloys in the past [57] . The alloying element partitioning observed in the present work is confirmed by EDS results for the Ti-Ta-Al material in the as-built condition ( Fig. 6 b-d). While in the present study the α-phase (dark areas in Fig. 6 a) is enriched in aluminum up to 6.5% (77.7Ti-15.8Ta-6.5Al), a chemical composition of 61.3Ti-34.1Ta-4.6Al with pronounced enrichment in tantalum up to 34.1% is found for the β-phase (bright areas in Fig. 6 a). To better assess the overall chemical homogeneity of the PBF-EB/M processed Ti-Ta-Al, a thermal treatment at 1200 °C for 21 h has been conducted. After annealing in the single β-phase region and subsequent water quenching (suppressing diffusion activities effectively), a non-equilibrium, fully martensitic microstructure consisting of the orthorhombic α’’-phase evolved, as can be seen from the diffractogram and the optical micrograph in Fig. 4 . This is akin to the presence of α’’ martensite in Ti-6Al-2Sn-4Zr-6Mo [48] additively manufactured by PBF-LB/M, where a much higher cooling rate is often achieved compared to PBF-EB/M. The lattice parameters are determined to be a α’’ = 0.3083 nm, b α’’ = 0.4948 nm, and c α’’ = 0.4587 nm. Following both PBF-EB/M processing (as-built condition) and annealing treatment, a slight decrease in the aluminum content can be observed (on the average, global scale) due to evaporation when compared with the initial powder feedstock material ( Table 2 ). Most importantly, however, the overall element distribution changed significantly upon annealing. As is evident from the EDS mappings in Fig. 6 f-h, the initial dendritic microstructure of the powder feedstock material (cf. Fig. 1 b and 2 ) completely vanishes. Accordingly, the constituents, i.e. 
titanium, tantalum as well as aluminum, are dispersed homogeneously on a sub-micron scale in the annealed and subsequently quenched condition. In summary, despite the obviously tremendous differences in the alloying element densities and melting points, the use of pre-alloyed Ti-Ta based feedstock material, including even low-melting and light elements such as aluminum, makes it possible to overcome the current limitations of using mixed elemental powders [ 31 , 33 , 34 , 51 , 52 ]. In this regard, the pre-alloyed Ti-Ta-Al powder allows direct fabrication of bulk metallic components featuring microstructures with excellent chemical homogeneity and concomitantly very high density. 4 Conclusions In the present study, an Al-modified Ti-Ta alloy was successfully additively manufactured by electron beam powder bed fusion (PBF-EB/M) using novel pre-alloyed feedstock material. In order to investigate defect populations, the chemistry, crystallographic texture, and phase compositions, detailed microstructure analysis was conducted employing optical microscopy (OM), scanning electron microscopy (SEM) including energy-dispersive X-ray spectroscopy (EDS) and electron backscatter diffraction (EBSD), as well as high-energy synchrotron diffraction. The main findings can be summarized as follows: - Following electrode induction melting inert gas atomization (EIGA), pre-alloyed feedstock material with highly spherical powder particles is obtained. Slight chemical inhomogeneities, i.e. Ta-rich dendritic segregations embedded in a Ti-rich inter-dendritic phase, are found within the particles. - Ti-Ta-Al bulk structures with relative densities of 99.86% are fabricated using the PBF-EB/M technique. Residual pores likely result from gas entrapments and keyholing effects. - An α+β dual-phase microstructure with columnar prior-β grain structure is observed in the as-built condition. The volume-dominant α-phase is enriched in Ti and Al, featuring a lamellar morphology with no preferred crystallographic orientation. 
- After a post-process thermal treatment at 1200 °C for 21 h in an argon atmosphere followed by water quenching, a non-equilibrium, fully martensitic microstructure (α’’-phase) is present. In this post-processed condition, the Ti-Ta-Al bulk material is characterized by a very homogeneous distribution of the alloying elements. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments The authors gratefully acknowledge DESY (Hamburg, Germany), a member of the Helmholtz Association HGF and Helmholtz-Zentrum Hereon, for the provision of experimental facilities. Parts of this research were carried out at PETRA III. Dr. Philipp Krooß and Dr. Guilherme Abreu Faria are thanked for assistance with the experiments and in using P61A – WINE, respectively. Beamtime was allocated for proposal ID: 11012003. C. Lauhoff acknowledges funding by the Alexander von Humboldt Foundation.
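The [Mo]eq and [Al]eq values quoted in Section 3 can be checked against the widely used empirical equivalency relations. The sketch below uses coefficient values taken from commonly cited forms of these relations; treat the exact coefficients as assumptions for illustration, not values extracted from the paper or its references.

```python
# Empirical beta- and alpha-stability equivalencies for Ti alloys (wt.-%).
# Coefficient values are assumptions from commonly cited forms of the
# relations, not taken from the paper itself.

def mo_equivalent(mo=0.0, nb=0.0, ta=0.0, v=0.0, w=0.0):
    """Molybdenum equivalency [Mo]eq of the beta-stabilizing additions."""
    return 1.0 * mo + 0.28 * nb + 0.22 * ta + 0.67 * v + 0.44 * w

def al_equivalent(al=0.0, sn=0.0, zr=0.0, o=0.0):
    """Aluminum equivalency [Al]eq of the alpha-stabilizing additions."""
    return al + sn / 3.0 + zr / 6.0 + 10.0 * o

# Ti-25Ta-5Al (wt.-%), neglecting interstitial oxygen:
print(mo_equivalent(ta=25.0))  # -> 5.5
print(al_equivalent(al=5.0))   # -> 5.0
```

Both results reproduce the [Mo]eq = 5.5 and [Al]eq = 5.0 quoted for Ti-25Ta-5Al, supporting the comparison with Ti-6246 drawn in the text.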
|
[
"DEBROY",
"NIENDORF",
"TODARO",
"FROEND",
"ANDREAU",
"LI",
"GORSSE",
"LUTJERING",
"ZHOU",
"ZHOU",
"BRODIE",
"BUENCONSEJO",
"LIU",
"HARUN",
"BEESE",
"KUMAR",
"ZHANG",
"NIINOMI",
"ZHOU",
"PAULSEN",
"NIENDORF",
"PAULSEN",
"MAIER",
"BUENCONSEJO",
"BUENCONSEJO",
"SEHITOGLU",
"CANADINC",
"ZHANG",
"MORITA",
"FISCHER",
"HUANG",
"BRODIE",
"BRODIE",
"BRODIE",
"HUANG",
"SCHULZE",
"FARLA",
"COURET",
"LI",
"CARPENTER",
"BRENNAN",
"CHEN",
"SOLA",
"MURRAY",
"FERRARI",
"WEISS",
"CARROZZA",
"MEHJABEEN",
"BOYER",
"SING",
"XING",
"BRODIE",
"HAO",
"OKAMOTO",
"SINA",
"NIENDORF"
] |
e69ba42ae65749c9b75be274a55cb0f9_Urban neighbourhood environment assessment based on street view image processing A review of researc_10.1016_j.envc.2021.100090.xml
|
Urban neighbourhood environment assessment based on street view image processing: A review of research trends
|
[
"He, Nan",
"Li, Guanghao"
] |
The urban neighbourhood is one of the most important places for public activities and behaviour spaces in cities, and the quantification of their environments is receiving increasing attention from researchers. In the era of big data, numerous urban data sources, represented by street view images, are documenting the evolution of people's lifestyles in various ways. With the rapid development of image processing technology, street view images have become an emerging data source for urban research. Street view image processing can be used to obtain the spatial elements of large-scale urban neighbourhoods, thus enabling rapid urban neighbourhood evaluation. However, no systematic literature review has so far been conducted on the application of street view images in urban neighbourhood environment research. This paper systematically reviews the research trends of existing publications on the use of street view images for the quantitative analysis of urban neighbourhood environments. The number of publications began to grow rapidly in 2010. From 2010 to 2020, the number of publications increased from 6 to 341, with an annual growth rate of approximately 30.4%. Recent studies have focused on five areas: thermal environment, neighbourhood morphology, environmental perception, socio-economic factors, and landscape design and environmental evaluation. The publications use experiment and simulation as the main research methods. Deep learning is the mainstream and advanced image processing method, and the data analysis models include numerical analysis and spatial analysis. Finally, the overall research framework and future research trends of street view images in current quantitative research on urban streets are obtained.
|
Nomenclature: GSV, Google street view; BSV, Baidu street view; TSV, Tencent street view; SVF, Sky view factor; LST, Land surface temperature; MRT, Mean radiant temperature; GVI, Green view index; SW, Street walkability; AT, Air temperature; CNN, Convolutional neural network; DCNN, Deep convolutional neural network; FCN, Fully convolutional network; BVF, Building view factor; SVM, Support vector machine; TVF, Tree view factor; NDVI, Normalised difference vegetation index; NDWI, Normalized difference water index; PET, Physiologically equivalent temperature; SVFp, GSV-based SVF; SVFs, DEM-based SVF; SVFd, The difference between GSV-based SVF and DEM-based SVF. 1 Background 1.1 The city neighbourhood as a spatial carrier The city neighbourhood is a key component and defining element of the city's spatial structure, and is also an important social space carrier for citizens' daily lives ( Marzot et al., 2002 ). With the acceleration of urbanization, it is difficult to describe the evolutionary mechanism of urban spatial change as a complex system with accurate and rapid quantitative data, leaving traditional theoretical approaches struggling to cope with rapid urban development ( Philo, 2018 ). The introduction of GIS into urban space research has greatly facilitated the process of quantitative research, and the morphological elements of quantitative analysis of urban neighbourhoods include the road network, building form, and so on. The current research hotspots in neighbourhood studies include physical activity, the urban physical environment, ecology, health, and economic impact. A variety of data such as remote sensing and raster data have also been applied, and analysis methods have been diversified, resulting in GIS-centred quantitative urban analysis methods such as Place Syntax, Spacematrix, Form Syntax, and so on. Thus, building on traditional theories of urban spatial analysis, a new phase of urban space research based on big data and artificial intelligence has taken shape. 
1.2 City neighborhoods as information carriers In addition to the research and utilization of urban big data itself, we should realize that the data actually reflect changes in urban residents' ways of life and in the operation of urban space. All kinds of big data (e.g. cell phone signalling, web maps, etc.) are now a reflection of evolving lifestyles. Before using data for urban planning, design and urban research, it is important to recognize the changes that are taking place in the city itself and the possible trends in its future evolution. For mesoscale studies of street canyons, however, field research is generally used, which is time-consuming and difficult to apply on a large scale. Although remote sensing images can be used to show building height information, they are expensive and difficult to obtain at high resolution. At the same time, many scholars have used software such as Rayman ( Gál et al., 2014 ) and ENVI-Met ( Carrasco-Hernandez et al., 2015 ) to model street canyons. Due to the limited computing power of such software, it is not possible to simulate the city on a large scale. Emerging urban data sources are vividly documenting how people use and perceive cities in real time. 1.3 The rise of machine learning and image processing technology Computer vision is a discipline that uses mathematical algorithms to recover information about three-dimensional objects from a two-dimensional image and to establish an understanding of the image as a whole, giving computers the ability to interpret visual information in a way that is similar to humans ( Cordts et al., 2016 ). Convolutional neural networks are currently one of the core technologies for deep learning applications in the field of image recognition. The recognition of street images is also dependent on the development of datasets for scene segmentation, multi-object recognition and semantic understanding, where images are labelled into different categories for training neural networks. 
Representative examples include the ImageNet ( http://www.image-net.org/about-join ), ADE20K ( http://groups.csail.mit.edu/vision/datasets/ADE20K/ ) and CamVid ( http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/ ) datasets.

1.4 Complete openness of the street view image database
Street view data provide high-resolution real-view image data for research on the urban street canyon environment. Common street view data sources include map companies, car DVRs, social media (Facebook, MicroBlog), public maps (OpenStreetMap) and various types of city cameras. The most commonly used sources are those provided by map companies, including Google Street View (GSV) ( https://developers.google.com/maps/documentation/streetview/ ), Baidu Street View (BSV) ( http://lbsyun.baidu.com/index.php?title=viewstatic ) and Tencent Street View (TSV) ( http://lbs.qq.com/panostatic_v1/ ). By accessing the APIs of these companies, static street view images can be obtained. One advantage of street view imagery is its wide coverage: Google Street View, for example, covers cities in 114 countries and regions around the world, and for mainland China, Baidu Street View and Tencent Street View have also covered some large cities. This makes it possible to compare different countries, or different cities within a country. Another advantage is that street view imagery approximates the pedestrian view, covering multiple levels of information about the mesoscale street. In addition, street view images are cost-effective to acquire, and the workflow is simple and easy to use. Street view images from map companies are pixel-consistent and of controlled quality, which facilitates image processing during research. In summary, the relationship between people and the urban neighbourhood environment has always been an important subject in the field of urban research.
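The static-image API access described above can be sketched in a few lines. This is a minimal illustration assuming the parameter names of Google's public Street View Static API (size, location, heading, pitch, fov, key); the coordinates and API key below are placeholders, and the four-heading sampling scheme is one common convention for approximating a full panorama, not a requirement of the API.

```python
from urllib.parse import urlencode

GSV_ENDPOINT = "https://maps.googleapis.com/maps/api/streetview"

def gsv_request_urls(lat, lng, api_key, fov=90, pitch=0, size="640x640"):
    """Build the four static-image request URLs (headings 0/90/180/270)
    that together approximate a full panorama at one sampling point."""
    urls = []
    for heading in (0, 90, 180, 270):
        params = urlencode({
            "size": size,          # requested image resolution
            "location": f"{lat},{lng}",
            "heading": heading,    # compass bearing of the camera
            "pitch": pitch,        # up/down angle of the camera
            "fov": fov,            # horizontal field of view in degrees
            "key": api_key,
        })
        urls.append(f"{GSV_ENDPOINT}?{params}")
    return urls
```

Downloading each URL (with a valid key) yields one directional image; BSV and TSV expose analogous static APIs with their own parameter names.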
The urban neighbourhood environment, as a connecting element between individual streets and the city as a whole, is of central importance to the quality of the overall urban environment. However, no systematic literature review has so far been conducted on the application of street view imagery in urban street environment research. This review summarizes the current publication status, application scope, technical paths, data types, image processing techniques, research methods and parameters used when applying street view images to urban neighbourhood evaluation. This will allow the research field to develop further and give policy makers a better reference, thus providing a systematic review and analysis for the future wider application of street view images in micro-scale urban environment research.

2 Methodology

This paper uses a systematic review method, an approach that is both inclusive and rigorous. The method follows the three stages ( Fig. 1 ) of the systematic review process to study the application of street view in urban neighbourhood environment research. The three phases of the review are: retrieval of articles from databases; screening according to criteria; and classification according to the themes and content of the articles.

2.1 Retrieval
This paper uses a clearly defined scope and direction of research as search criteria. The review concerns the application of street view images in urban neighbourhood research. Following previous literature reviews, two databases, Scopus and Web of Science, were selected for the literature search ( Mavrigiannaki et al., 2016 ). The common search terms for both databases are: ``street view*'' OR ``street view image*'' OR ``street imagery*'' OR ``street image*''. Multiple keywords are used in the search to capture the different terms in common use.
The search was further refined by research discipline: urban studies; environmental sciences and ecology; public, environmental and occupational health; architecture; and acoustics. The literature search for this paper was first run in July 2020 and re-run in December 2020.

2.2 Screening
The retrieved literature was next screened to exclude articles that did not meet the review criteria ( Mavrigiannaki et al., 2016 ). The following types of articles were removed: 1. Duplicate literature. 2. Non-peer-reviewed journal articles, i.e. clinical trials, books or editorials, case reports, corrections or meeting papers, early access items, letters, data papers, unspecified items, biographies, news and patents. 3. Non-English articles. 4. Research not relevant or unrelated to street view, identified by reviewing the title, abstract, methods and results of each article.

2.3 Classification
The selected articles were classified according to their research topics and contents to reflect the research focus and methods of each article. The classification yielded five broad categories of direction and 15 sub-categories, with research methods including review, experiment-based research, simulation, experiment plus simulation, and survey/audit. This review elaborates on the themes and content trends in the use of street view in studies of urban neighbourhood environments in Section 3 .

3 Results

The following sections analyse research trends through the publications, journals, countries and institutions, and keywords of the retrieved papers. The selected articles are also classified according to research trends, and the research methods, parameters and progress of each direction are elaborated.

3.1 Publication trends

3.1.1 Publication and citation year
From a temporal perspective, research into the use of street views in urban environments began in 2000.
The period 2000–2010 saw slow development; image recognition was used for the first time in 2011, focusing on urban colour ( Yamada-Rice et al., 2011 ), building extraction, etc. In 2012, research began to focus on the urban landscape, street greenery and street geometry. Breakthrough growth came in 2017, with the number of publications reaching 40, a 12-fold increase from 2000. From 2017 to date there has been rapid growth in this area, with several scholars conducting studies around urban spatial quality, quantification of street morphology, physical activity on the street, and the physical environment of street canyons. The average annual growth rate of publications to 2020 is 34%. According to the Scopus and WoS databases, the literature retrieved in this review had been cited a total of 3712 times as of December 2020. The number of citations increased from 168 in 2015 to 270 in 2016, and to 405 and 760 thereafter, nearly doubling each year. The popularity of street view applications in urban neighbourhood studies is expected to keep increasing in the coming years as the publishing trend in this direction continues to grow ( Fig. 2 ).

3.1.2 Journals
The search results show that the research disciplines represented include urban greening, urban design, environmental science, remote sensing, health and medicine, and geography, with urban design and environmental science the most studied. Among them, Urban Forestry & Urban Greening, which focuses on urban greening, published the most papers (44, 15.77%). Landscape and Urban Planning, which focuses on urban planning, design and landscape, was the next most published journal, with 31 papers (11.11%). The remaining journals with a high number of publications are listed in Fig. 3 ; those with no more than 8 publications are grouped as other journals. The top 10 most cited papers (up to 2020) and their journals are listed in Table 1 .
The most highly cited papers were in the area of urban greening, with three articles, followed by street walkability. The next most cited papers are in the areas of land use classification, building instance classification and the urban environment.

3.1.3 Countries and institutions
The main countries currently engaged in research in this area are the USA (36.5%), China (31.9%), Germany (5.6%) and Italy (5.2%) ( Fig. 4 ). In terms of research institutions, the Massachusetts Institute of Technology, the University of Connecticut and the City University of Hong Kong are pioneers. The focus varies from institution to institution: Li at MIT has used street view images to study street greenness, SVF and sun duration; the team at the University of Connecticut has proposed a method of using street view images to classify land use types and landscape characteristics; the City University of Hong Kong has focused on street morphology, the physical environment of street canyons and human physical activity in high-density cities. The research team at Tsinghua University has proposed a framework for researching the quality of street canyons at a 'human scale'; Tongji University has focused on using street view images to study eye-level street greenery, while Wuhan University has focused on using street view images to study the thermal environment within street canyons ( Fig. 5 ).

3.1.4 Keywords
A co-occurrence network analysis of the articles' keywords was performed using CiteSpace. Currently popular keywords linking street view images and urban neighbourhoods include green view, street space quality, walkability, physical activity, greenery, health and built environment. The keyword co-occurrence graph also shows keywords such as computer vision, convolutional neural network, deep learning and algorithm, which are all key technologies for processing street images ( Fig. 6 ).
3.2 Thermal environment

One of the most important elements influencing the thermal environment of an urban area is the urban geometry ( Lai et al., 2019 ). The SVF directly affects the radiant temperature in the canyon and indirectly affects the air temperature in the canyon. The SVF was first proposed by Oke (1981) and has been used to study urban heat islands and thermal comfort ( Bourbia and Boucheriba, 2010 ; Johansson, 2006 ). The general method for studying the street physical environment with street view images is: (1) obtain the proportion of sky and non-sky pixels in the panorama by image segmentation; (2) convert the panorama into a fisheye image and derive the SVF ( Watson and Johnson, 1987 ; Chapman and Thornes, 2004 ; Liang et al., 2017 ); (3) use the SVF for software simulation or direct calculation. Among the many studies using SVFs obtained from street views for urban microclimate modelling, ENVI-met is the most widely used modelling software: in studies of thermal environments, 60 out of 97 studies from 1998 to 2018 used ENVI-met ( Yang et al., 2019 ).

3.2.1 Thermal environment simulation
Street view images are used for radiation and temperature simulations mainly through a combination of meteorological data calculations and numerical modelling. Carrasco-Hernandez et al. (2015) used Google Street View images to construct a hemispherical image of a street canyon, then used RayMan to calculate the short-wave global irradiance under the hemispherical image, and verified its accuracy through a field survey. Richards et al. (2017) used GSV to analyse streets in Singapore for the street tree canopy ratio, which is used to estimate the proportion of annual radiation reaching the ground that is shaded by trees. Zhang et al. (2019b) calculated the SVF for the Phoenix, Arizona metropolitan area and estimated the daytime and nighttime land surface temperature (LST) variation using ordinary least squares and geographically weighted regression.
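Steps (1)–(3) above can be illustrated with a short sketch. It assumes the segmentation step has already produced a square binary fisheye sky mask (1 = sky, 0 = building or tree) in an equiangular projection centred on the zenith; the annular weighting follows the standard hemisphere integration, so the ring weights sum to one over the sky dome. The function name and mask format are illustrative, not taken from any of the cited studies.

```python
import math

def svf_from_fisheye(mask, n_rings=36):
    """Approximate the sky view factor from a square binary fisheye sky
    mask (1 = sky, 0 = obstruction), assuming an equiangular projection
    in which pixel radius is proportional to zenith angle."""
    size = len(mask)
    cx = cy = (size - 1) / 2.0
    radius = size / 2.0
    sky = [0] * n_rings     # sky pixels per annular ring
    total = [0] * n_rings   # all pixels per annular ring
    for y in range(size):
        for x in range(size):
            r = math.hypot(x - cx, y - cy)
            if r >= radius:
                continue  # outside the fisheye circle
            ring = min(int(r / radius * n_rings), n_rings - 1)
            total[ring] += 1
            sky[ring] += mask[y][x]
    # Ring i spans zenith angles [theta_i, theta_{i+1}]; its share of the
    # hemisphere (with cosine projection) is sin^2(theta_{i+1}) - sin^2(theta_i).
    num = den = 0.0
    for i in range(n_rings):
        if total[i] == 0:
            continue  # ring too thin to contain any pixel centre
        t0 = (math.pi / 2) * i / n_rings
        t1 = (math.pi / 2) * (i + 1) / n_rings
        w = math.sin(t1) ** 2 - math.sin(t0) ** 2
        num += w * sky[i] / total[i]
        den += w
    return num / den if den else 0.0
```

A fully open sky yields 1, a fully obstructed view yields 0, and a canyon blocking half of every ring yields roughly 0.5; the resulting SVF can then feed a simulation tool or a regression model as in the studies above.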
3.2.2 Thermal comfort simulation
Thermal comfort in the street is affected by air temperature, humidity and wind speed. An increase in vegetation cover in the street canyon affects the wind speed and humidity, which in turn affects the thermal comfort of the canyon. The SVF of a street canyon is obtained by calculating the shading of the sky by trees, buildings, etc. within the canyon, and is an important means of quantifying the canyon's thermal comfort. Middel et al. (2017) compared GSV-based SVF with SVF obtained from a 3D model in Google Earth, and used the GSV-based SVF in RayMan to simulate thermal comfort on the Arizona State University campus.

3.2.3 Solar radiation
Sun duration is an important parameter for measuring the amount of solar radiation received in the canyon, which affects the thermal environment, the lighting of buildings, plant growth and pedestrian health. Sun duration can be calculated by projecting the trajectory of the solar motion onto the fisheye images derived from the street view. Du et al. (2020) used BSV to study the streets of Beijing and found that streets with different built environments, orientations and grades differ in their hours of sunlight. As the street view is similar to a car driver's view, this method has also been used to calculate the solar glare time experienced while driving. Li et al. (2019) used GSV in combination with a solar trajectory map to calculate the position of the sun and the driver's angle relative to the sun, and thus the times at which solar glare occurs.

3.3 Neighbourhood morphology

3.3.1 Land use type
Land use types are important references for urban planning, and the urban landscape varies according to land use type. Remote sensing data, LiDAR data and GSV can all be used for land use classification studies. Zhang et al. (2017) used a set of generic image features to represent different classes of road and trained a support vector machine (SVM) on street view images to produce models capable of distinguishing between residential and commercial buildings.

3.3.2 Street canyon morphology
Using computer vision technology, it is possible to calculate the geometric and morphological parameters of the street, including the street aspect ratio, SVF, street symmetry, street orientation, street alignment, etc. These morphological parameters determine the solar radiation received in the street canyon, the wind speed, the rate of pollutant dispersion, etc. Hu et al. (2019) derived a deep multi-task learning model for street classification based on three parameters: street aspect ratio, street symmetry and street complex structure.

3.3.3 Building and facade
Architecture is an important component of the neighbourhood environment; its history, construction style and location carry the form and appearance of a city. Street view images provide multidimensional information on the form, colour, materials and other aspects of buildings, and by extracting this information, elements that influence landscape characteristics and the physical environment can be obtained. Xu et al. (2014) used building images tagged with architectural styles (Baroque, Gothic, etc.) from wikis, applied Deformable Part-based Models (DPM) to capture the morphological features of basic architectural components, and proposed Multinomial Latent Logistic Regression (MLLR) to construct an architectural style recognition algorithm. Kang et al. (2018) proposed a method to classify facade structure from GSV and retrieve the fine boundaries of individual buildings. Shalunts et al. (2012) classified dome styles by shape using clustering and learning of local features.
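The feature-based classification pipelines above (e.g. Zhang et al.'s SVM for residential versus commercial scenes) can be sketched in miniature. Here a nearest-centroid rule stands in for the SVM, and the two-dimensional features (fraction of the image covered by vegetation, fraction covered by shopfront signage) are invented for illustration only.

```python
def train_centroids(samples):
    """samples: {class_label: [feature_vector, ...]}.
    Returns the per-class mean feature vector (centroid)."""
    centroids = {}
    for label, vecs in samples.items():
        dim = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs)
                            for i in range(dim)]
    return centroids

def classify(centroids, vec):
    """Assign vec to the class whose centroid is nearest (squared
    Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], vec))
```

In a real pipeline the feature vectors would come from image descriptors or segmentation outputs, and the centroid rule would be replaced by a trained SVM or deep network.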
3.4 Environmental perception of neighbourhood

How the physical and objective properties of the street influence people's subjective perceptions has always been a hot topic in urban research. However, evaluation of the environment is often carried out face to face, which is time- and resource-intensive. The extraction of features from street view images has become faster and more accurate, making it possible to automate the evaluation of spatial characteristics and place quality in streets ( Mennis et al., 2021 ). Because street view images do not contain elements of live audits such as noise, temperature, humidity and airflow, nor the socio-economic and human perception data that cannot be collected on site, research in this direction should focus on integrating street view with multiple data sources. Socio-economic data come from a variety of sources, including government statistics. Human perception data are collected through questionnaires, crowdworker audits and other methods. Key research in this area includes: (i) walkability; (ii) human emotion perception; (iii) noise perception; and (iv) perception of vitality and health.

3.4.1 Street walkability
Street walkability (SW) is a measure of the level of services and facilities provided by a city, and street view images contain physical features, such as street grade and accessibility, that influence walkability. Research has focused on the factors influencing SW and on assessing and predicting SW. Visual crowdworker audits and questionnaires based on street view images are the main ways to collect people's perceptions of SW. Using street view images to evaluate the quality and morphology of streets at multiple levels, combined with other big data (social media check-ins, points of interest) and GIS analysis, can also reveal the potential factors that induce walking.
Research into the walkability of streets falls into three main areas: (1) Crowdworker audits are widely used to assess small samples of streets directly. Crowdworkers are asked several brief visual questions to assess the walkability of streets, and may also leave information such as gender, age and income, so that the street preferences of different social groups can be analysed ( Hanibuchi et al., 2019 ). Hara et al. (2013) used untrained crowd workers from Amazon Mechanical Turk ( n = 402) to identify and mark the factors affecting the accessibility of pavements in GSV. (2) Street view images are used to quantify landscape characteristics, including greenery, buildings and other elements, and to analyse their impact on SW ( Nagata et al., 2020 ). Li et al. (2018) studied the relationship between street greenness, street enclosure and people's walking activity under different land use types. Ki and Lee (2021) studied the correlation between traditional greening variables (NDVI, park area) and GVI, applying multiple regression models to determine the relationship between GVI and walking time and walking activity across different income groups. (3) Image processing techniques are used to count the number of people and vehicles in a street view image. Using pedestrian detection techniques, Yin et al. (2015) carried out a large-scale count of pedestrians to gauge the popularity of walking in different streets.

3.4.2 Spatial emotional perception
Factors such as the visual quality of the street and the degree of urban management can have a significant impact on the overall ambience of the street, and thus on the psychological state of residents and street crime rates ( Wijnands et al., 2019 ; Anderson et al., 2013 ). The Place Pulse study at MIT's Senseable City Lab established ``a link between street view images and people's subjective perceptions'' ( Salesses et al., 2013 ; Li et al., 2015 ).
The street view scores are generated by manually scoring a large sample of street view images through paired comparisons, with questions such as ``Which of these two images do you feel safer in?''. These street view images are then semantically segmented, and city perception is predicted through a human-machine comparison scoring system. Quercia and colleagues set up a website to collect cognitive samples of the environment in different neighbourhoods, enabling a cognitive assessment of the beauty, quietness and pleasantness of London ( Quercia, 2013 ; Quercia et al., 2014 ).

3.4.3 Noise perception
Human perception of noise is influenced by the interaction between visual and acoustic stimuli. Several studies have demonstrated that building facades ( Zhang et al., 2018 ), courtyards ( Calleri et al., 2018 ) and streetscapes ( Taghipour et al., 2019 ) can influence residents' annoyance and stress responses to noise ( Jiang et al., 2018 ; Sun et al., 2018 ). Van Renterghem et al. (2016) used a camera to photograph the view from front doors to the street at human eye height, and used the RGB pixel channels to separate green pixels and calculate their percentage. From there, they investigated how the amount of vegetation visible to residents through their living room windows affected noise annoyance.

3.4.4 Vitality and health perception
Streets are one of the main places where people interact and move around daily. According to surveys, streets are the most common place for walking, cycling and other physical exercise, followed by homes and parks. The GVI and the form of the street affect pedestrians' willingness to engage in physical activity. Lu (2019) used GSV to assess the quantity and quality of street greening in Hong Kong, relating it to the recreational and sporting activities of 1390 residents in green outdoor environments.
After controlling for socio-demographic variables, the quality and quantity of street greening were found to be positively correlated with people's willingness to be active. The street environment also affects people's physical and mental health ( Helbich et al., 2019 ). Wang et al. (2019) used TSV to assess the perceived attributes of wealth, security, vitality, depression, boredom and beauty of 48 neighbourhoods in Haidian District, Beijing. The values of these attributes were correlated with the physical and mental health of 1231 elderly people, and the results showed a significant correlation between security and physical and mental health.

3.5 Analysis of socio-economic factors

3.5.1 Population distribution
Many studies have shown that population density is related to the intensity of urban construction, the degree of street greening, etc. ( Wang et al., 2021 ; Lin et al., 2021 ). The street view image records the physical environment of the city; combined with data on the material attributes of urban space, analysis and prediction of population, economy and health can be realized. Gebru et al. (2017) detected the brand and model of motor vehicles in street view images and linked them with demographic records to estimate the income, race, education level and voting patterns of different regions of the United States. Arietta et al. (2014) used street view images to propose a predictive relationship between the physical visual appearance of cities and non-physical attributes such as crime rates, house prices and population density.

3.5.2 Lifestyle
Street spaces contain information about people's activities and record the lifestyles of their inhabitants ( Chen et al., 2020 ). By analysing various types of data (food, clothing, housing and transportation), the evolution of urban life can be glimpsed. Zhang et al. (2019) recorded the locations and times of taxi pick-ups and drop-offs during the working day in Beijing.
These data were combined with analysis of street view images to predict taxi travel trajectories throughout the city and hourly changes in the city's road network. The social media check-in index reflects the popularity of a social place (restaurants, entertainment), which is influenced by factors such as price and environment. Combining street view and check-in data makes it possible to identify areas with development potential. Zhang et al. (2020) used social media check-ins to compare the different life patterns of tourists and residents, and street images to analyse the physical environment of social places, with the aim of discovering Beijing's beautiful but under-visited outdoor spaces and providing a reference for future urban design.

3.6 Landscape design and environmental assessment

3.6.1 Street greening
For the existing built environment of a street, changing the greenery in the street is the first choice for enhancing the landscape and physical environment. Street view provides high-resolution, multi-layered information on trees, shrubs, lawns and other greening in the street, and allows greening to be evaluated ( Xia et al., 2021 ).

(1) GVI-based street greening studies
The green view index (GVI) reflects the greening of a street from a 'human perspective'. The use of street view imagery replaces labour-intensive manual photography and improves research efficiency. Li et al. (2015) used GSV to extract the pixels occupied by plants in the image and modified the original GVI calculation formula to evaluate the tree cover of streets in Manhattan. The correlation between tree cover and GVI values was tested and found to be significant. Seiferling et al. (2017) used an image segmentation technique to process GSV, obtained the GVI of each image, and compared the values with the percentage of tree canopy. A large number of researchers have also explored the relationship between GVI and residents' physical activity. Lu et al. (2019) used GSV to calculate the amount of vegetation visible to cyclists and compared it with the NDVI to verify the feasibility of calculating street greening from street view images. Wang et al. (2020) analysed the regression relationships between the amount of greenery visible to riders and their willingness to ride.

(2) SVF-based street greening studies
Because the SVF of a city calculated from 3D building model simulations only accounts for the blocking of the sky by buildings, whereas the street view image contains the various types of landscape (mainly trees and buildings), the difference in SVF between the two methods is often taken to represent the blockage of the sky by trees. This has been confirmed by several academic studies ( Liang et al., 2017 ). Gong et al. (2018) calculated the SVF, TVF (tree view factor) and BVF (building view factor) for the Kowloon Peninsula using GSV, and found that the 3D-GIS-based SVF was on average 0.11 higher than the GSV-based SVF. Li and Ratti (2018) used GSV to calculate the SVF in urban street canyons and a building height model to simulate the shading of the sky by buildings alone; the difference between the SVFs obtained by the two methods was defined as a quantitative indicator of the shading provided by street trees.

(3) Street tree visual audits
Street view images also give researchers the opportunity to carry out street greening audits without going outdoors, which can effectively generate street tree data. Berland's virtual audits of street trees using street view images achieve high accuracy for tree genera, but poor accuracy for street tree species and diameters ( Berland and Lange, 2017 ). Wang et al. (2018) used panoramas to remotely measure diameter at breast height (DBH), tree height, and canopy projection size.
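The pixel-based greening calculations above can be sketched as follows. The green-dominance rule (green channel exceeding red and blue by a margin) is a simple heuristic standing in for the RGB-channel separation of Van Renterghem et al. and for full semantic segmentation; the aggregation over several directional images per site follows the spirit of Li et al.'s (2015) modified GVI formula. The threshold and function names are illustrative.

```python
def is_green(pixel, margin=10):
    """Heuristic vegetation test: green channel dominates red and blue
    by at least `margin` (a stand-in for semantic segmentation)."""
    r, g, b = pixel
    return g > r + margin and g > b + margin

def green_view_index(images, margin=10):
    """GVI for one sampling site: total green pixels across all
    directional images divided by total pixels, as a percentage."""
    green = total = 0
    for img in images:          # each image: 2D grid of (R, G, B) tuples
        for row in img:
            for px in row:
                total += 1
                green += is_green(px, margin)
    return 100.0 * green / total
```

A site photographed in several directions thus gets one aggregate GVI value, which can then be mapped along the street network or regressed against walking and cycling behaviour as in the studies above.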
3.6.2 Quality of space
The 'quality of space' of a street is evaluated in terms of both the objective spatial quality of the street and the subjective psychological perception of its users. This quality affects the pedestrian experience of the space and shapes the urban character. Long et al. (2019) suggest that the 'human scale' refers to the urban scale that can be seen, felt and touched by people, which is closely related to the human body. As the perspective of street view images is similar to that of people, street view images can be used to evaluate the quality of urban space at a 'human scale'. The features identified include greenery, street openness, facade enclosure, degree of motorization, degree of enclosure, human scale, transparency and neatness. Rundle et al. (2011) collected 143 audit items relating to seven aspects of the neighbourhood environment, including aesthetics, physical comfort, safety, motor vehicles and parking, motor vehicle facilities, pavement facilities, and social and commercial activities, all of which have an impact on the quality of the street. With the development of computer vision technology, the application of street view images will be further refined. Some street quality parameters that are difficult to identify, such as pavement materials ( Majidifard et al., 2020 ), have already been identified and calculated, which will provide more detailed reference data for the improvement of street quality.

3.6.3 Environmental characteristics
By analysing the physical landscape and appearance of the city contained in street view images, it is possible to extract the architectural style, building facades and street-wall morphology that influence the urban landscape. Liu et al. (2017) used TSV to propose a model that enables large-scale automatic assessment of environmental quality.
The results showed that the area between the North Fourth and North Fifth Ring Roads in Beijing has the most modern urban character, while the south of the city is less modern. Lee et al. (2015) used the HOG method to analyse historical GSV to study the potential developmental lineage of modern architecture, and traced changes in the style of architectural elements such as windows, doors and balconies in Paris.

4 Discussion

4.1 Research trends and methods
Street view images, with their unique advantage of approximating the pedestrian perspective, have been applied in research spanning architecture, engineering, urban design, acoustics, aesthetics and social studies. From Table 2 , it can be seen that GVI and SVF are the main parameters used in street view-based urban studies: GVI is mostly used in studies of SW and street greenery, while SVF is mostly used as a parameter for thermal environment simulation and for describing the current state of greening. Studies involving quality evaluation and people's physical and mental perception of the urban environment require multiple parameters for systematic evaluation. Current methods of street view image recognition for urban research can be divided into three main areas ( Fig. 7 ): (1) Street view images can be used directly for environmental visual audits. This method relies on manual recognition of street view images to obtain a perceptual evaluation of the street environment, which avoids the labour-intensive work of collecting questionnaire and field survey data. (2) Street view image recognition can extract basic elements that can be used directly in studies of the physical and aesthetic urban environment, and indirectly in evaluations of the economic and social urban environment. (3) The multidimensional use of machine learning resources also enables an 'understanding' of street scene images.
For example, the prediction of street spatial perception is scored by environmental predictions via scene parsing segmentation, random forest model and human-machine confrontation. 4.2 Street view image recognition development The application of image recognition technology in urban research has evolved from scene classification ( Xiao et al., 2010 ), object background differentiation ( Iovan et al., 2012 ) and location recognition ( Zamir et al., 2010 ), to the current direction involving the physical environment of streets and spatial perception. The image processing technology for street view images goes through the process of ``manual - simple image processing - deep learning model''. To date, deep learning has become the main tool for street view processing. Representative CNN and DCNN networks include; PSPNet, SegNet, DeepLab v3, +U-Net, etc. (1) Manual street view recognition is only suitable for street view image recognition of a small sample size, and software such as Photoshop is commonly used for manual recognition. (2) Simple image processing of street view images involves pixel-based image analysis. In short, all the pixels in a street view image are traversed to determine the urban element corresponding to the different pixels, often using threshold methods etc. to determine pixel values. There are also street view image processes that use simple image algorithms, such as segmentation algorithms. This algorithm first divides the image into physically meaningful homogeneous polygons and then gives each polygon different properties based on its spectral and geometric properties. Common segmentation methods include threshold segmentation, edge detection and area extraction. Zeng et al. (2018) used Canny edge detection to identify sky elements in the BSV. (3) Artificial Neural Networks, represented by CNN, DCNN are widely used in street scene recognition, offering the advantages of high accuracy and fast processing speed. Wegner et al. 
(2016) used DCNN to locate and classify street tree species, and Cai et al. (2018) built the open-source library ``Treepedia'' and demonstrated that DCNN offers better model performance and evaluation speed than unsupervised learning for recognising street view images. 4.3 Diversity of analytical methods The elements extracted from street view images are used in a variety of statistical analysis methods to determine the relationship between the research topic and urban elements. Commonly used methods include the following categories: (1) Correlation analysis: correlation coefficients, such as Pearson's and Spearman's, are often used to characterise the degree of correlation between two variables. Helbich et al. (2019) used Spearman correlation to compare the green and blue space measures extracted from street view images with NDVI and NDWI. (2) Regression analysis: regression analysis is used to obtain the direction and strength of correlation between two or more variables and to develop mathematical models to predict target variables. Wang et al. (2019) used multilevel regression models to analyse the associations between exposure to green and blue spaces and self-rated Geriatric Depression Scale (GDS-15) scores. (3) Numerical spatial analysis: this category includes two-dimensional data visualisation, overlay analysis, buffer analysis, network analysis, etc. Two-dimensional data visualisation allows the direct presentation of numerical results across study areas. Du et al. (2020) and Gong et al. (2017) used this method to analyse SVF distributions in Beijing, Boston and Hong Kong. Wang et al. (2020) used the Thiessen polygon method to create non-overlapping buffer zones for each metro station, and examined the association between street-view greenness and cycling frequency around metro stations. 
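As an illustration of the correlation analyses described above, the Pearson and Spearman coefficients can be computed directly. The following is a minimal sketch with hypothetical GVI/NDVI values (the numbers are illustrative, not taken from any cited study), using only NumPy; Spearman is computed as the Pearson correlation of ranks, which is valid when there are no ties:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum()))

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of rank-transformed data (no ties)."""
    rank = lambda a: np.argsort(np.argsort(np.asarray(a))).astype(float)
    return pearson(rank(x), rank(y))

# Hypothetical street-level greenness (GVI) vs. a remote-sensing index (NDVI)
gvi = [0.12, 0.25, 0.31, 0.40, 0.55, 0.60]
ndvi = [0.20, 0.28, 0.35, 0.38, 0.52, 0.58]
print(round(pearson(gvi, ndvi), 3))   # linear association
print(round(spearman(gvi, ndvi), 3))  # monotonic association
```

In practice such coefficients would be computed over hundreds of sampling points (e.g. one GVI value per street view panorama), but the arithmetic is exactly this.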
(4) Data analysis based on spatial statistics: spatial statistics describe how a unit relates to other units in a spatial location. Commonly used theories include spatial autocorrelation, and the general approach to analysis is global analysis followed by local validation. This theory is often used to analyse the correlation between crime rates, perceptions of safety, the health of residents and the urban environment. Salesses et al. (2013) used Getis Spatially Filtered Regression (GSFR) to analyse the correlation between the urban perception of inequality and homicides. 4.4 Future studies (1) Diverse street view resources and interactive platforms With the advent of the 5G era, real-time uploading of street view images recorded via webcasts, geo-tagged social media, traffic recorders, etc. will become faster and more convenient, and street view data stored in the cloud will be more diverse and up to date. Developments in computer vision technology and web development also offer the possibility of an interactive platform for street view uploading and analysis. For example, the MIT Media Lab City Science Group ( Noyman, 2019 ) has developed virtual city models; supported by an interactive device, users are free to change the urban design and receive real-time feedback on the street view. (2) Combination with other data sources Future research into street view images of the street canyon environment should focus more on integration with big data, such as public transport cards, prices of second-hand properties and consumer price reviews. Attention should also be paid to the use of new instruments such as eye trackers and car data loggers. The traditional use of meteorological data and solar trajectories should also continue to be developed, enabling the integration of multiple data sources to analyse the urban street environment. 4.5 Limitations A current limitation of street view images is their temporal variability. 
For example, street view images are shot in different seasons and the degree of greenery varies, with leaves changing colour in autumn or falling in winter, posing a challenge for image recognition. At the same time, different neighbourhood models have developed around the world for different political, economic and environmental reasons. In China, for example, the country's unique unit compounds and enclosed neighbourhoods have given rise to many internal roads that cannot be covered by moving vehicle imaging equipment. The tools for expressing the street pattern in different national contexts are therefore open to question. 5 Conclusion This paper summarises the articles in the Scopus and WoS databases through a systematic literature review approach. By analysing the themes and content of the articles, the results show that experimentation and simulation are the main research methods used in street view studies for environmental evaluation in urban neighbourhoods. Deep learning is the mainstream and most advanced image processing method, and data analysis methods include numerical analysis (correlation analysis, regression analysis), spatial analysis, etc. The workflow is generally ``extraction of characteristic factors from the street view - factor analysis/factors used in simulation - urban neighbourhood environment evaluation''. Interdisciplinarity is a trend, and different parameters can be used in urban neighbourhood studies using street view. The main directions of the research are shown below. (1) The application of street view images to the urban thermal environment, mainly using experimental and simulation methods to study radiation, thermal comfort, solar hours, etc. The most commonly used software is ENVI-met. The percentage of publications is about 13.6%. 
(2) Research on urban morphological evaluation focuses on neighbourhood land use, extraction of street composition factors (buildings and facades) and street canyon morphological parameters, and the feasibility of the studies is mostly verified through experiments. CNNs are mostly used to process complex image information. The percentage of publications is about 12.4%. (3) Neighbourhood environment perception research focuses on the influence of urban information contained in street view images on human perceptions and behaviour choices. Most of the research is experimental. This research has undergone almost eight years of development together with image recognition technology, so there are now more diverse ways of recognising street views. These publications account for the largest share, around 34.1%. (4) Studies of the socio-economic environment are primarily experimental. The main parameters studied are greening rates, broken windows, etc. The percentage of publications is about 13.6%. (5) Landscape design and environmental evaluation studies are primarily experimental. This type of image recognition has evolved from colour differentiation to current scene analysis. These publications account for a relatively large share, around 26.3%. Funding The authors would like to acknowledge the financial support provided for this research by the National Natural Science Foundation of China (No. 51478136 ). Declaration of Competing Interest We would like to submit an original article titled “Evaluation of tree shade effectiveness and its renewal strategy in typical historic districts: A case study in Harbin, China” for consideration by Computer, Environment and Urban System. The paper was coauthored by Nan He, Guanghao Li. There are no conflicts of interest to declare. Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.envc.2021.100090 . 
|
[
"ANDERSON",
"ARIETTA",
"BERLAND",
"BOURBIA",
"CAI",
"CALLERI",
"CARRASCOHERNANDEZ",
"CHAPMAN",
"CHEN",
"CORDTS",
"DU",
"GAL",
"GEBRU",
"GONG",
"HANIBUCHI",
"HARA",
"HELBICH",
"HU",
"HU",
"IOVAN",
"JIANG",
"JIANG",
"JOHANSSON",
"KANG",
"KI",
"LAI",
"LI",
"LI",
"LI",
"LI",
"LI",
"LI",
"LIANG",
"LIN",
"LIU",
"LONG",
"LU",
"LU",
"MAJIDIFARD",
"MARZOT",
"MAVRIGIANNAKI",
"MENNIS",
"MIDDEL",
"NAGATA",
"NOYMAN",
"OKE",
"PHILO",
"QUERCIA",
"QUERCIA",
"QUERCIA",
"RICHARDS",
"RUNDLE",
"SALESSES",
"SEIFERLING",
"SHALUNTS",
"SUN",
"TAGHIPOUR",
"VANRENTERGHEM",
"WANG",
"WANG",
"WANG",
"WANG",
"WATSON",
"WEBER",
"WEGNER",
"WIJNANDS",
"XIA",
"XIAO",
"XU",
"YAMADARICE",
"YANG",
"YIN",
"YIN",
"ZAMIR",
"ZENG",
"ZHANG",
"ZHANG",
"ZHANG",
"ZHANG",
"ZHANG"
] |
79b136d52d3b46899cb867fdf252f5af_Herbicide bioremediation from strains to bacterial communities_10.1016_j.heliyon.2020.e05767.xml
|
Herbicide bioremediation: from strains to bacterial communities
|
[
"Pileggi, Marcos",
"Pileggi, Sônia A.V.",
"Sadowsky, Michael J."
] |
There is high demand for herbicides based on the necessity to increase crop production to satisfy world-wide demands. Nevertheless, there are negative impacts of herbicide use, manifesting as selection for resistant weeds, production of toxic metabolites from partial degradation of herbicides, changes in soil microbial communities and biogeochemical cycles, alterations in plant nutrition and soil fertility, and persistent environmental contamination. Some herbicides damage non-target microorganisms via directed interference with host metabolism and via oxidative stress mechanisms. For these reasons, it is necessary to identify sustainable, efficient methods to mitigate these environmental liabilities. Before the degradation process can be initiated by microbial enzymes and metabolic pathways, microorganisms need to tolerate the oxidative stresses caused by the herbicides themselves. This can be achieved via a complex system of enzymatic and non-enzymatic antioxidative stress systems. Many of these response systems are not herbicide specific, but rather triggered by a variety of substances. Collectively, these nonspecific response systems enhance the survival and fitness potential of microorganisms. Biodegradation studies and remediation approaches have relied on individually selected strains to effectively remediate herbicides in the environment. Nevertheless, it has been shown that microbial communication systems that modulate social relationships and metabolic pathways inside biofilm structures among microorganisms are complex; therefore, use of isolated strains for xenobiotic degradation needs to be enhanced using a community-based approach with biodegradation pathway integration. Bioremediation efforts can use omics-based technologies to gain a deeper understanding of the molecular complexes of bacterial communities to achieve more efficient elimination of xenobiotics.
With this knowledge, the possibility of altering microbial communities is increased to improve the potential for bioremediation without causing other environmental impacts not anticipated by simpler approaches. The understanding of microbial community dynamics in free-living microbiota and those present in complex communities and in biofilms is paramount to achieving these objectives. It is also essential that less-developed countries, which are major food producers and consumers of pesticides, have access to these techniques to achieve sustainable production without causing impacts through unknown side effects.
|
1 Introduction Agriculture is constantly trying to increase productivity. One strategy to achieve this goal is the use of herbicides. These chemical agents act by blocking the biosynthesis of amino acids, carotenoids or lipids, or by interrupting the flow of electrons in photosynthesis. Nevertheless, massive use of herbicides and other pesticides leads to contamination of agricultural soils, river systems, and nearby groundwater, changing the structure and function of soil microbial communities. Herbicides directly or indirectly impact organisms other than their primary targets, including humans. Herbicide use and misuse also exert selection pressure on microbes in soil and water, possibly resulting in changes to microbial processes, especially where there are genes encoding enzymes related to herbicide degradation. Finally, xenobiotic compounds may increase the production of reactive oxygen species (ROS). These compounds affect the survival of microorganisms, which subsequently need to develop strategies to adapt to these conditions to maintain their ecological functionality. Without adaptation, specific populations of microorganisms will likely disappear. Researchers have tested several bioremediation technologies, aiming to render environmental herbicide contaminants non-harmful or less harmful. These promising environmental technologies are based on microbial metabolic activities (via enzymes) that transform toxic components into harmless molecules. While some microorganisms able to transform recalcitrant compounds have indeed been isolated, trials of their use in environmental applications have been disappointing. Therefore, before degradation is achieved, microbial communities must develop survival strategies against the stresses induced by herbicides. Adaptation of bacteria to stressful environments is achieved by the interaction of several systems in a complex manner. 
A better understanding of these interactions has been achieved via novel molecular investigations of microbiota using various omics-based studies, all aiming to improve understanding of the complex relationships that constitute the systems of bacterial responses to resistance and the capacity for degradation of herbicides. There are consortia of various species with complex integrated networks of degradation mechanisms that function via coordinated quorum sensing systems. Currently, these can only be fully dissected and exploited using omics-based technologies. Moreover, there remain concerns that agricultural countries with high levels of contamination by pesticides and lesser degrees of scientific development will not be able to use these methodologies to maintain a broad level of ecological sustainability. 2 The importance of herbicides for agriculture Commercial farmers are constantly trying to increase food crop production to satisfy worldwide demands. World Health Organization data from 2016 ( http://www.fao.org/sustainable-development-goals/indicators/211/en/ ) indicate that 11% of people in the world are undernourished, concentrated in African and Asian countries. One strategy to increase food crop production involves the use of herbicides. These chemicals act by blocking the biosynthesis of amino acids, carotenoids or lipids, or by interrupting the flow of electrons in photosynthesis in weeds. 3 Herbicide modes of action The rapid development of new tools for synthetic organic chemistry during the late 20th century led to the synthesis of a variety of useful compounds. Among these were synthetic herbicides, fungicides, insecticides, rodenticides, nematicides, and plant growth-promoting compounds ( Aktar et al., 2009 ) that collectively improved crop production and reduced weed-related damage. 
Some of these compounds target light harvesting or photosynthesis reactions, cell metabolism, or growth/cell division processes in weeds (Table 1, Supplementary Material). Herbicides have been classified into 25 groups ( Beffa et al., 2019 , www.hracglobal.com ) based on the proteins they inhibit, their targets, or the similarity of induced symptoms [Herbicide Resistance Action Committee (HRAC) Herbicide Classification System]. In 2020, 261 herbicides were classified according to this system ( www.hracglobal.com ). A closer look at the chemical structures of these herbicides reveals a common factor: many have residues with high electronegativity that aid in the disruption of their targets or any other structure susceptible to oxido-reductive destruction, whether in weeds or non-target organisms. This review describes the reactivities of some of the most commonly cited herbicides in the literature, which are widely used around the world ( Maggi et al., 2019 ) and for which there is more information regarding their degradation as well as the responses they induce in various organisms. Some of these important herbicides and their modes of action are displayed in Table 1 (Supplementary Material). Other herbicides can be described according to this classification system. Imazethapyr is a chiral herbicide used in the production of rice, soybeans, peanuts, and other crop plants. Imazethapyr belongs to the imidazolinone chemical family and HRAC group B, affecting cell metabolism in its targets ( Beffa et al., 2019 ). This herbicide selectively controls dicotyledonous weeds by inhibiting acetohydroxyacid synthase (acetolactate synthase). This enzyme catalyzes the initial reaction in the biosynthetic pathway for the branched-chain amino acids valine, leucine, and isoleucine. Alachlor, acetochlor, butachlor, and metolachlor are chloroacetamides used against annual broad-leaved weeds. 
These chemicals are classified in HRAC group K3, affecting growth/cell division in their targets ( Beffa et al., 2019 ). Paraquat has a high redox potential and can capture electrons from photosystem I, decreasing the concentration of NADPH. The free electrons generated by this system react with the herbicide and can induce the formation of free radicals. These are converted back to their original form by oxygen in a cycle that can harm various cell structures and may eventually exhaust the supply of free electrons needed for the process to continue ( Gravina et al., 2017 ). The mode of action thus occurs via diversion of electrons from photosystem I. This herbicide belongs to the bipyridylium chemical family and is classified in HRAC group D, affecting light-driven processes in its targets ( Beffa et al., 2019 ). Azimsulfuron is used for controlling weeds in paddy fields. This sulfonylurea herbicide inhibits the enzyme acetolactate synthase, which is involved in the biosynthesis of branched-chain amino acids in plants and microorganisms. Azimsulfuron belongs to the sulfonylurea chemical family and is classified in HRAC group B, affecting the cell metabolism of its targets ( Beffa et al., 2019 ). 4 Weed resistance to herbicides While the use of herbicides in production agriculture was revolutionary and led to several-fold increases in crop yields, these compounds generated complications, in large part due to their modes of action and chemical structures. One major problem with herbicide use and overuse is the development of weed resistance due to selection for mutants. This is the same phenomenon that occurs with the use of antibiotics in humans ( Dodds, 2017 ). To overcome this problem, manufacturers produce herbicides that exploit several chemistries with differing modes of action. The concept is straightforward: the simultaneous evolution of various resistance traits will occur only rarely in the same population. 
For example, Conyza is a cosmopolitan weed characterized by resistance to several herbicides due to the pyramiding of point mutations. Only individuals that carry different mutations could survive combinations of herbicides with differing modes of action, including sulfonylurea or imidazolinone plus atrazine ( Matzrafi et al., 2015 ). Other approaches to managing weed resistance include increases in the concentration of active ingredients and bundling of highly resistant plant lines with specific herbicide use. This has been the strategy for the use of Roundup (glyphosate)-resistant crop plants with herbicide application. This idea may have also led to herbicide overuse. 5 Herbicides and their fate in the environment The chemical structures of the active ingredients in herbicide formulations interact differentially with environmental matrices such as soil, sediments, particles and water, or with microorganisms that may degrade these compounds. These interactions may have major impacts on the fate of the chemicals in the environment, their routes of degradation, and the formation and bioavailability of more toxic metabolites. For example, the herbicide 2,4-D (2,4-dichlorophenoxyacetic acid) is an organic acid with a systemic mode of action that has been used worldwide to control broad-leaf weeds in grass, wheat, rice, corn, sorghum, and sugarcane. This herbicide is translocated through the plant and accumulates in the roots, stopping plant growth. This herbicide, and its best-known degradation product, 2,4-dichlorophenol (2,4-DCP), are very soluble in water, and can be found in rivers and lakes, or in groundwater, even where the herbicide has not been used for long periods ( Silva et al., 2007 ). This is one of the main reasons for seeking efficient degradation processes for 2,4-D. 
In addition to resistance, herbicide use and overuse have led to the accumulation of degradation-product metabolites in the environment, including aminomethylphosphonic acid, a metabolite of glyphosate. This compound persists in soil, water, and plants, with potential toxicological problems caused by the accumulation of residues in the food chain. The US Environmental Protection Agency (EPA) describes glyphosate as practically non-toxic and concluded that it was not an irritant under the acute toxicity classification system. Nevertheless, data regarding the toxicity of this herbicide have been generated on the basis of its mode of action in the shikimic acid pathway, which is used for the production of amino acids in a small number of organisms, most of which are green plants ( Bai and Ogbourne, 2016 ). Samples analyzed in Hungary from 1990 to 2015 showed systematic contamination of watercourses by herbicides such as trifluralin, atrazine, diazinon, acetochlor and (more recently) glyphosate ( Székács et al., 2015 ). Glyphosate has also been detected as a contaminant in groundwater, drinking water, and the urine of farmers in Mexico ( Rendón-von Osten and Dzul-Caamal, 2017 ). The herbicides atrazine, ametryne, aetolaehlor, simazine, acetochlor, metolachlor and alachlor were detected in tap, surface, and groundwater samples in China ( Li et al., 2018 ). 6 Herbicides affecting non-target organisms A search of the National Center for Biotechnology Information PubMed website, on September 9, 2019, using the keyword "herbicidal effects" together with each of the major kingdoms of living things recovered 14,511 papers for bacteria, 77 for archaea, 5,757 for protozoa, 21,408 for plantae, 13,875 for fungi, and 63,408 for animals. Importantly, a search of "herbicide and humans" recovered 41,245 papers. 
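Searches like these can also be run programmatically rather than through the website, via NCBI's E-utilities `esearch` endpoint. The sketch below only builds the request URL (the query string is illustrative, and actually fetching the count requires network access):

```python
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_count_url(term: str) -> str:
    """Build an E-utilities esearch URL that returns only the PubMed hit count.

    With rettype=count and retmode=json, the endpoint responds with a small
    JSON document containing the total number of matching records.
    """
    return ESEARCH + "?" + urlencode({
        "db": "pubmed",       # search the PubMed database
        "term": term,         # the query, e.g. a keyword plus a kingdom name
        "rettype": "count",   # return only the hit count
        "retmode": "json",
    })

# e.g. one of the "herbicidal effects" + kingdom searches described in the text
url = esearch_count_url('"herbicidal effects" AND bacteria')
print(url)
```

Fetching `url` (for instance with `urllib.request.urlopen`) and parsing the JSON would yield the current hit count; the figures quoted above reflect the database as of September 9, 2019, so re-running the query today would return larger numbers.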
The structure and mode of action of the active ingredients present in herbicide formulations are not specific to killing weeds, due in part to the high number of electronegative residues in their molecules, including oxygen, hydroxide, sulfonyl, phosphoric acid, amine, and chlorine groups. As such, these herbicides have high oxidative potential across various chemical targets and organisms, including microorganisms ( Figure 1 ). For example, atrazine has been postulated to exert indirect effects on non-target organisms such as shrimp by inducing an oxidative stress response through enhanced peroxide production, as well as the induction of superoxide dismutase (SOD), glutathione-S-transferases, and glutathione reductase. These organisms possess systems to overcome these problems by activating antioxidant responses; however, there is an energy cost, reflected in a decrease in lipid storage in these animals ( Griboff et al., 2014 ). Despite this evidence, the proposed impact of atrazine on non-plant biota remains controversial and without substantial scientific support. Various non-target organisms can also be directly affected by the electronegative properties of some herbicides. The long-term use of thiobencarb caused imbalances in agricultural soils and in aquatic systems, mainly due to its toxicity to invertebrates, fish, and microorganisms, reducing their number and diversity ( Chu et al., 2017 ). A similar change in aquatic and agricultural ecosystems was seen after long-term application of chloroacetamide herbicides ( Dong et al., 2015 ). The worldwide use of the herbicide 2,4-D has also impacted groundwater on account of its high solubility. One simple explanation for why non-target organisms are adversely affected by herbicides is their ubiquity in the environment, such that many organisms cannot escape exposure. 
This is the case for the herbicide butachlor, which was reported to negatively impact zebrafish, a commonly used aquatic model for early-life-stage toxicity evaluation of environmental contaminants, in a dose-dependent manner. Butachlor also caused enhanced production of ROS and malondialdehyde in zebrafish ( Xiang et al., 2018 ). Herbicides in antifouling paints also appear to produce alterations in non-target species. These paints are used to prevent the attachment of organisms to submerged surfaces of vessels and aquatic structures. The active ingredients, which in some cases have the composition of broad-spectrum herbicides and fungicides, are released from the coating to protect the surface. One antifouling component, tributyltin, is so persistent that even though its use was banned in 2008, it continues to be found in the environment. Alternative biocides for antifouling paints, such as irgarol (2-methylthio-4-tert-butylamino-6-cyclopropylamino-s-triazine) and diuron, also have widespread distribution in oceans and may cause harmful effects despite their low concentrations ( Manzo et al., 2014 ). Compound M1 (2-methylthio-4-tert-butylamino-6-amino-s-triazine), a by-product of irgarol metabolism, and the parent compound are bioaccumulated by aquatic plant species in marine environments ( Fernandez and Gardinali, 2016 ), contributing to the persistence of these contaminants. Similarly, the long-term use of the herbicides alachlor, acetochlor, butachlor, and metolachlor has also caused imbalances in aquatic and agricultural environments. This is important for public health because the US EPA has suggested that acetochlor may have carcinogenic potential ( Wang et al., 2015 ). In some instances, herbicide over-application has negatively impacted soil microbiota, affecting the dynamics of biogeochemical cycles and soil fertility ( Elias and Bernot, 2014 ), likely due to the loss of sensitive microbial populations providing specific ecological functions. 
Nevertheless, the chemical structures of herbicides may also provide essential nutritional components for the growth of microorganisms. An additional effect of the overuse of herbicides is their impact on soil microbial community structure and composition, with secondary influences on plant nutrition and herbicide sensitivity. This is in large part due to impacts on the functions of microorganisms in mutualistic interactions with plants. Herbicides can affect the metabolism of plants, altering exudates used for signaling in plant-microbe interactions. This was shown in the model plant Arabidopsis thaliana exposed to the herbicide imazethapyr. Application of the herbicide resulted in changes in root cell wall structure and increased citrate production and exudation. These changes were thought to subsequently modify microbial community structure in the rhizosphere and to alter root morphology ( Qian et al., 2015 ). This is a fundamental reason to focus attention on the excessive use of herbicides in agriculture; they sometimes exert toxic effects that travel up the entire food chain. As pointed out by Rachel Carson in her book “Silent Spring”, pesticides are not only toxic to their intended primary targets, but also to non-targets, resulting in ecological imbalances ( Carson, 2002 ). 7 Bacterial herbicide resistance systems that do not involve biodegradation Microorganisms that are important for maintaining soil fertility can be affected by oxidative stress caused by the electronegativity of the chemical structures making up the active ingredients of herbicides. While the primary purpose of herbicides is to damage or kill target weeds, they can provoke oxidative stress in a variety of non-target organisms through the production of free radicals. While inhibiting metabolic pathways in weeds, the active molecules of herbicides generate ROS, thereby affecting enzymes in non-target organisms as well. 
More specifically, herbicides that alter photosynthetic systems can affect plants other than their primary targets, as well as photosynthetic cyanobacteria. In some situations, metabolic intermediates of herbicide degradation may be toxic to non-target organisms, possibly because they retain electronegative residues in their molecular structures. Such is the case for the degradation residues of quinclorac, which are phytotoxic to many crops, vegetables and microorganisms ( Liu et al., 2014 ). Studies of antioxidative enzymes have demonstrated their effectiveness in allowing some microorganisms to overcome herbicide toxicity. For example, Gravina et al. (2017) evaluated the influence of paraquat on the physiology and adaptive capacity of mutant strains of non-target Escherichia coli knocked out in the Mn-SOD ( sodA ) and Fe-SOD ( sodB ) genes. SOD is an ancient enzyme that evolved in adaptation to oxidative atmospheres. The metals Fe and Mn are important for enzyme stability and catalysis, as well as for the overall structure of the enzymes. These enzymes possess significant differences in their oxidation and reduction potentials and may have provided significant advantages to organisms in environments with variable O 2 and heavy metal levels ( Case, 2017 ). Therefore, mutations in the SOD genes may alter the metabolism and antioxidative responses of these strains, through the generation of new isoforms that vary according to the oxidative conditions generated by the herbicides. This versatility is a good model of phenotypic plasticity leading to adaptation to herbicides. Such a model can be found in several organisms, including the marine ciliate Euplotes focardii , which lives in Antarctica within a very narrow temperature range (4–5 °C) ( Pischedda et al., 2018 ). According to these authors, a major issue for this organism is oxidative stress due to the substantial amounts of dissolved oxygen that characterize Antarctic marine environments. 
These isoforms can be derived from gene duplication and diversification, which probably occurred through independent mutations and selection pressure. Another example of an antioxidative system responding to herbicidal toxicity is found in Pantoea ananatis , isolated from agricultural soil. This bacterium resists and grows in the presence of mesotrione, likely due to the presence of polymorphic catalase (CAT) enzymes controlling oxidative stress. Bacteria resistant to mesotrione show changes in membrane lipid saturation, likely leading to increased membrane impermeability, and enhanced formation of glutathione-S-transferase-mesotrione (GST-mesotrione) conjugates, enhancing herbicide degradation ( Prione et al., 2016 ). Structural changes are also related to herbicide-induced stress tolerance: changes in the membrane lipid saturation pattern in bacteria can act as selective barriers against herbicides ( Dobrzanski et al., 2018 ; Prione et al., 2016 ; Rodríguez-Castro et al., 2019 ). In this review, we use the term “resistance” to refer to the ability of bacteria to grow in the presence of herbicides, irrespective of the duration of treatment ( Brauner et al., 2016 ). Various enzymatic and non-enzymatic systems act in response to the oxidative effects of herbicides, as is the case for the cyanobacterium Synechocystis , which responds to arsenite and arsenate via induction of general stress responses, redox scavenging systems and chaperones, and via repression of genes involved in photosynthesis and growth ( Sánchez-Riego et al., 2014 ). In addition, arsenic is present in herbicides such as monosodium methylarsenate; trivalent arsenicals react with thiol groups in proteins and inhibit various biochemical pathways, so there is no single specific target for this herbicide. 
Trivalent arsenicals interfere with small molecule thiols such as reduced glutathione, resulting in the production of ROS and oxidative stress ( Chen et al., 2015 ). The hazardous effects of herbicides on non-target microbes and plants can be mitigated through the accumulation of stress metabolites such as poly-sugars, proline, glycine-betaine, and abscisic acid, and through upregulation of the synthesis of enzymatic and non-enzymatic antioxidants such as SOD, CAT, ascorbate peroxidase (APX), glutathione reductase, ascorbic acid, α-tocopherol, and glutathione ( Gouda et al., 2018 ). High levels of thioredoxin, glutaredoxin and GPX were associated with atrazine stress in the interaction between the mycorrhizal fungus Glomus mosseae and alfalfa ( Medicago sativa ) ( Nath et al., 2016 ).

8 Bacterial nonspecific responses to herbicides

Several response systems in bacteria are not herbicide-specific but are instead related to other stressful substances. These nonspecific response systems enhance the survival and fitness potential of these organisms. For example, bacteria respond to certain environmental stresses by altering the transcription of regulons, thereby enabling the cell to cope with the stress. The same operons, however, may also be regulated by different stresses, as in the case of antibiotics and herbicides: paraquat, for example, induces resistance to norfloxacin in E. coli ( Rosner and Slonczewski, 1994 ). The same is true for the herbicides dicamba, 2,4-D, and glyphosate, which at sub-lethal doses were found to induce changes in soxS-lacZ fusion strains of E. coli and Salmonella enterica in response to antibiotics. This regulon is responsible for the upregulation of efflux pumps and the reduction of porins, enhancing antibiotic resistance ( Kurenbach et al., 2015 ).
Herbicides and other chemicals used in agriculture and domestic gardens can induce phenotypes akin to multiple-antibiotic resistance in potential pathogens more rapidly than antibiotics exert their lethal effect. The combined use of both herbicides and antibiotics near farm animals and insects like honeybees might lead to an immediate decrease in their therapeutic usefulness, eventually leading to even greater use of antibiotics ( Kurenbach et al., 2015 ). While some bacterial responses to herbicides are specific, such as the induction and modulation of antioxidant enzymes and herbicide degradation genes, others are nonspecific responses that lessen secondary damage to cellular functions. For example, the herbicide Callisto was shown to induce changes in lipid saturation and membrane permeability in Bacillus megaterium strains isolated from various agricultural environments ( Dobrzanski et al., 2018 ). Complementary routes to obtain energy can also be used to reduce the toxicity of herbicides. P. ananatis , for example, can degrade mesotrione, the active ingredient of the herbicide Callisto, but without using it as a carbon, nitrogen, or sulfur source for growth. For this bacterium, mesotrione catabolism required glucose supplementation ( Pileggi et al., 2012 ). Herbicide degradation may also be hampered by collateral effects of exposure of bacteria to toxic metals in the environment, leading to accumulation of intracellular ROS and the consequent upregulation of genes related to herbicide degradation. For example, when the soil bacterium Cupriavidus pinatubonensis was exposed to sub-lethal concentrations of copper, the concentration of ROS increased. As a result, a member of the Ohr/OsmC protein family was upregulated, subsequently affecting the degradation of phenoxy acid herbicides ( Svenningsen et al., 2017 ). There are also reports of non-enzymatic systems in the control of ROS.
Such is the case for the role of up-regulated genes encoding spermidine production, which contribute to the survival of Burkholderia pseudomallei in stressful environments, mainly under physiological and oxidative stress conditions (e.g., hydrogen peroxide) ( Jitprasutwit et al., 2014 ). Perhaps a better example of nonspecific responses can be seen in the case of superoxide stress that leads to the production of ROS. In order to survive under these conditions, cells must coordinate the regulation of a variety of metabolic pathways. One major adjustment is via increased production of NADPH and a concomitant decrease in NADH generation in E. coli ( Rui et al., 2010 ). In this case, cellular strategies that maximize survival under stress conditions take precedence over metabolic efficiency.

9 Bacterial herbicide degradation pathways and bioremediation

Many microorganisms utilize herbicides as sole sources of nutrients for growth and survival in the environment. The process of natural selection has undoubtedly improved the fitness of microorganisms harboring herbicide degradation genes. This has led to some positive aspects of herbicide effects on microbial diversity. This review focuses on the great biochemical diversity associated with phylogenetic diversity ( Weissenbach, 2019 ), which can therefore be the basis for the wide range of responses to herbicides in bacteria. The application of herbicides from the thiocarbamate, dinitroaniline, and chloroacetamide families increased microbial biomass, measured by the chloroform fumigation method, probably due to direct degradation or via co-metabolic processes. This increased the availability of mineral carbon, nitrogen, and phosphorous in the soil and resulted in higher mineralization of these herbicides ( Barman and Das, 2015 ).
Chloroacetamide herbicides can be transformed by microbial metabolism in natural soils to 2-methyl-6-ethylaniline, and this intermediate can be used as a sole nutrient source by a Sphingobium strain. This intermediate can also undergo a series of enzymatic reactions, resulting in the production of 2-methyl-6-ethylhydroquinone and 4-hydroxy-2-methyl-6-ethylaniline. The horizontal transfer of genes encoding enzymes involved in these degradative pathways in bacteria is probably important for the survival of these organisms in polluted environments ( Dong et al., 2015 ).

10 Major herbicide degradation pathways

Strategies to reduce 2,4-D contamination in agricultural soils have been tested using bio-augmentation techniques, which showed poor efficiency on account of the low survival rate of degrading strains, because laboratory conditions cannot reproduce the stressful conditions of the natural environment. An alternative would be the introduction of plasmids containing 2,4-D degradation genes into indigenous bacteria, which are well adapted to the environment where bioremediation will be performed ( Kumar et al., 2016 ). The degradation of 2,4-D occurs by two well-known metabolic pathways, with several enzymes and microorganisms with this ability already described ( Figure 2 ). The impacts of herbicides on microbial consortia may also reflect evolutionarily-selected organizations that optimize specialization and the sharing of metabolic routes. A microbial consortium, mainly containing the genera Bacillus , Phyllobacterium , Pseudomonas , Rhodococcus , and Variovorax , could use azimsulfuron as the sole nutrient source, degrading the herbicide better together than was achieved using isolated pure cultures. This is likely due to complementary (synergistic) metabolism among bacterial consortium members for the degradation of the herbicide ( Valle et al., 2006 ).
Glyphosate is degraded by 19 bacterial and five fungal species, via at least two distinct metabolic routes. In the route where a sarcosine intermediate was found, degradation genes were organized into the phn operon, encoding a C–P lyase ( Sviridov et al., 2015 ) ( Figure 3 ). In the systems where the aminomethylphosphonic acid (AMPA) intermediate was found, the glpA (homologous with hygromycin phosphotransferase genes) and glpB genes are involved. Other genes related to this degradation route include the glyphosate oxidoreductase ( gox ) gene, responsible for the transformation of this herbicide into glyoxylate and its major degradation product AMPA. Herbicide-resistant transgenic crops were obtained by transformation with these genes ( Huang et al., 2017 ). Due to the high toxicity of glyphosate and AMPA, the bioremediation process must yield biologically safe compounds. Routes based on C–P lyases have low efficiency because this enzyme is inactivated under field conditions ( Figure 3 ). Another difficulty is the search for combinations of strains that mineralize this herbicide faster, especially to prevent the accumulation of toxic intermediates ( Sviridov et al., 2015 ). Despite the idea of transforming indigenous bacteria with these degradation genes, thereby obtaining bioremediating microbes already adapted to the contaminated sites, we believe that more sustainable processes are based on the assembly of bacterial consortia. Bacteria such as Pseudomonas ADP and Arthrobacter aurescens have acquired the ability to metabolize atrazine, but only after six or so genes were acquired by each species ( Martinez et al., 2001 ; Mongodin et al., 2006 ). In some rare cases, evolutionary pressure may result in the assembly of the whole pathway for herbicide degradation in a single bacterium, as was the case for Pseudomonas ADP. This strain harbors all genes required for the complete degradation of atrazine ( Sadowsky et al., 1998 ) ( Figure 4 ).
There are now numerous reports of specific routes of herbicide degradation, leading to the belief that these systems were selected for after contact with the agent. Nevertheless, even these routes are related to the degradation of structurally similar herbicide families, because de novo gene evolution is a rare event. For example, AtzB is a key enzyme in the metabolic pathway for s-triazine biodegradation. AtzB is essential for microbial growth on s-triazine herbicides and is responsible for the hydrolytic conversion of hydroxyatrazine to N-isopropylammelide ( Martinez et al., 2001 ). The AtzB enzyme contains conserved mononuclear amidohydrolase superfamily active-site residues. Substrates for this enzyme require a monohydroxylated s-triazine ring, with at least one primary or secondary amine substituent, and either a chloride or an amine leaving group. Consequently, the enzyme catalyzes both deamination and dechlorination reactions ( Seffernick et al., 2007 ). Because atrazine contains several nitrogen atoms, nitrogen fertilization may affect the degradation rate of this herbicide in agricultural soil. The addition of carbon sources may increase populations harboring plasmids that contain atrazine degradation genes, making bioaugmentation an alternative for mitigating contaminated soils ( Singh and Singh, 2016 ). Microbial consortia in biofilms function to mineralize organic xenobiotic compounds, possibly through the sharing of metabolic routes by different species and the optimization of production and consumption of energy in metabolically-integrated communities. Such is the case for the metabolic association among the proteobacteria Variovorax spp., Comamonas testosteroni , and Hyphomicrobium sulfonivorans . Together, this consortium converts the phenylurea herbicide linuron into products that are degraded by other bacteria in the consortium.
In the presence of linuron, the gene encoding linuron hydrolase, hylA , and other genes contributing to carbohydrate, amino acid, nitrogen, and sulfur pathways showed significantly increased expression. It appears that the Variovorax strain indirectly gained nutrients and energy from linuron by metabolizing excretion products produced by the C. testosteroni and/or H. sulfonivorans strains. The Variovorax strain also had an elevated stress response and overexpressed genes involved in cell-to-cell interaction systems, such as quorum signaling molecules and type VI secretion systems. The latter two systems could be used by Variovorax in interference competition with C. testosteroni and H. sulfonivorans ( Albers et al., 2018 ). There are other examples where metabolically-integrated microbial communities have shown great potential for degrading a wide variety of herbicide substrates. A novel thiobencarb degradation pathway has been proposed for an Acidovorax strain. This bacterium oxidized and then cleaved the C–S bond of thiobencarb, producing diethylcarbamothioic S-acid and 4-chlorobenzaldehyde. These products were subsequently oxidized to 4-chlorobenzoic acid and then hydrolytically dechlorinated to 4-hydroxybenzoic acid by other strains ( Chu et al., 2017 ). Another example is herbicide biodegradation by P. ananatis , which proceeds through the formation of GST-mesotrione conjugates, enhancing herbicide degradation levels ( Prione et al., 2016 ). While most bacteria degrade mesotrione via 2-amino-4-methylsulfonyl benzoic acid or 4-methylsulfonyl-2-nitrobenzoic acid, recent LC–MS/MS analyses indicated that biodegradation of mesotrione by other microorganisms leads to the formation of novel intermediates ( Pileggi et al., 2012 ). The transformation of herbicides in soils does not involve only free-living microbes, which are likely few in soil systems.
The selective effect of herbicides may also favor the interaction between endophytic bacteria and their host plants grown on commercial farms. For example, the biodegradation of quinclorac in natural settings is relatively slow, and its transformation residues are toxic to many crops, vegetables, and microorganisms. An endophytic B. megaterium strain obtained from the roots of tobacco degraded 93% of quinclorac in 7 days. The degradation products were different from those presented in previous publications, suggesting that this bacterium uses novel routes for the degradation of quinclorac. Studies of tobacco grown in pots suggested that B. megaterium alleviates quinclorac phytotoxicity ( Liu et al., 2014 ).

10.1 Bioremediation approaches

In addition to their effects on free-living soil microorganisms, the impacts of herbicides on the environment can also be mitigated using endophytic bacteria, those living within plant tissues, that are capable of herbicide degradation. Endophytic strains may contribute to the survival of both agricultural and weed plants in herbicide-contaminated environments via xenobiotic degradation pathways ( Tétard-Jones and Edwards, 2016 ). A similar concept was tried in the past for control of the corn borer, using an endophytic Clavibacter xyli subsp. cynodontis strain, adapted to plants, carrying the cry gene, which encodes a toxin active against insects ( Fahey et al., 1991 ). In this manner, agricultural products would be protected against insect attacks through the metabolites produced by an endophytic strain. Liu et al. (2014) used this same strategy to transform quinclorac and to identify its metabolites; however, in this context, endophytic strains protect against the toxic effects of herbicides. This herbicide, used to control several grass species in rice, canola, barley, corn, and sorghum, is degraded by the endophytic B. megaterium strain Q3.
Owing to the plasticity of metabolic pathways in bacteria, their use for bioremediation is one key method of addressing these issues, even using a classically non-environmental bacterium such as Escherichia coli . The E. coli strain DH5-α was found to degrade the compound mesotrione (2-(4-methylsulfonyl-2-nitrobenzoyl)cyclohexane-1,3-dione) in only 3 h without previous exposure to the herbicide ( Olchanheski et al., 2014 ). Mesotrione is the active ingredient of the herbicide Callisto, used for control of weeds that grow in maize crops. This active ingredient is synthesized from a phytotoxin found in the plant Callistemon citrinus that inhibits the enzyme 4-hydroxyphenylpyruvate dioxygenase, which converts tyrosine to α-tocopherol and plastoquinone. Inhibition of the latter leads to a decrease in the synthesis of carotenoids, resulting in tissue death ( Olchanheski et al., 2014 ). There are several technologies aimed at eliminating herbicides from the environment, mainly from water, including systems based on adsorption onto iron composite nanoparticles ( Ali et al., 2016 ), adsorption onto graphene nanosheets ( Kamaraj et al., 2017 ), and bioremediation. Despite these advanced technologies, herbicide contamination of drinking water remains a worldwide problem (see “Herbicides and Their Fate in the Environment”). There are options for treatment; however, current strategies have proven ineffective in remediating water. Bioremediation is a complex process because it is related, as described in this article, to resistance to toxic substances through general systems involving structural and enzymatic components ( Figure 5 ). The degradation pathways involve various steps and different routes, with the participation of various species of microorganisms possessing interconnected degradation networks ( Figures 2 , 3 , and 4 ) that are organized in biofilm consortia via chemical quorum sensing signaling.
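Degradation rates like the 3-h mesotrione dissipation mentioned above are often summarized with a first-order decay model. The sketch below illustrates that arithmetic; the rate constant is a hypothetical value chosen for illustration, not a parameter reported by Olchanheski et al. (2014).

```python
import math

# First-order dissipation model: C(t) = C0 * exp(-k * t).
# The rate constant k is hypothetical, used only to illustrate the arithmetic.
def remaining_fraction(k_per_h, t_h):
    """Fraction of the initial herbicide concentration remaining after t_h hours."""
    return math.exp(-k_per_h * t_h)

def half_life_h(k_per_h):
    """DT50 in hours for a first-order process: ln(2) / k."""
    return math.log(2) / k_per_h

k = 1.5  # h^-1, illustrative value only
print(f"fraction remaining after 3 h: {remaining_fraction(k, 3.0):.4f}")
print(f"half-life (DT50): {half_life_h(k):.2f} h")
```

Fitting k to measured concentration time-series (rather than assuming it, as here) is the usual way such dissipation constants are obtained in degradation studies.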
This type of study is highly complex and requires molecular approaches, which are discussed in the next section.

11 Bacterial xenobiotic responses by omics-based approaches and perspectives for bioremediation technologies

Modern high-throughput techniques of molecular analysis, the omics-based approaches, generate very large amounts of data regarding the taxonomies and genetic structures of bacterial communities, potential functional capabilities, and stressor responses that can be explored more efficiently with the help of bioinformatic tools. These approaches include methods such as gene amplicon sequencing (sequencing of a gene or gene fragment across an entire community), shotgun metagenomics (sequencing of community DNA), metatranscriptomics (analysis of the mRNA profile of a community), proteomics (proteins present in a biological sample), and metabolomics (metabolites present in biological samples) ( Rebollar et al., 2016 ). According to these definitions, metaproteomics can be understood as a set of techniques that allows the study of a community's set of proteins in a given environment, allowing associations between gene expression and adaptation ( Gutleben et al., 2018 ). Therefore, omics approaches can be understood as methodologies designed to understand the dynamics of molecules related to gene expression and metabolism of an entire cell or community. In this context, the plant Arabidopsis thaliana was exposed to trace concentrations of the S- and R-imazethapyr enantiomers to examine herbicide toxicity effects on the root proteome via iTRAQ-based quantitative studies. Computational, physiological, and metabolic analyses showed that imazethapyr reduced branched-chain amino acid content in tissues by strongly suppressing their synthesis and by increasing their catabolism ( Qian et al., 2015 ).
11.1 Sequencing approaches

Traditional techniques in environmental microbiology have facilitated the study of metabolic and genetic associations in communities of microorganisms structured in biofilms. Nevertheless, new molecular techniques and bioinformatics approaches achieve these same goals much more effectively. Next-generation Illumina transcript-sequencing technology (RNAseq), which allows analysis of global gene expression between strains, has been used to identify potential genes related to interspecies interactions. This technique has proven especially useful in examining microbial consortia present in biofilms that have the ability to transform (mineralize) xenobiotic compounds such as the phenylurea herbicide linuron ( Albers et al., 2018 ). Nevertheless, it is important to keep in mind that there are methods beyond omics-based techniques that have been used to study bioremediation and biodegradation. For example, microbial electrochemistry, based on the transfer of electrons between cells and electron conductors such as naturally occurring minerals or solid-state electrodes, can be used to remove oxidized and reduced pollutants from the environment in a bioremediation device called a microbial fuel cell ( Wang et al., 2020 ). Historically, the application of microorganisms in bioremediation processes has followed advances in the study of degradative metabolism. For example, Vilela et al. (2018) describe a variety of microorganisms that can degrade endocrine-disrupting compounds, a class of hormones considered to be hazardous pollutants; however, degradation must occur completely so that even more hazardous metabolites are not produced. More efficient levels of technical development in the elimination of xenobiotics can be obtained using omics-based technologies, since they provide information on the interrelationships between metabolic routes in bioremediation communities.
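The core quantity behind RNAseq-based differential transcriptomics of the kind described above is the log2 fold change between normalized counts in two conditions. The sketch below shows this arithmetic on invented data; hylA is taken from the linuron example in this review, while the other gene names, counts, and the pseudocount are illustrative assumptions, not values from Albers et al. (2018).

```python
import math

# Hypothetical normalized read counts for three genes in two conditions.
counts_control = {"hylA": 12.0, "soxS": 40.0, "katG": 5.0}
counts_linuron = {"hylA": 310.0, "soxS": 44.0, "katG": 90.0}

def log2_fc(a, b, pseudo=1.0):
    """log2 fold change from condition a to b, with a pseudocount to avoid log(0)."""
    return math.log2((b + pseudo) / (a + pseudo))

for gene in counts_control:
    fc = log2_fc(counts_control[gene], counts_linuron[gene])
    status = "up" if fc > 1 else ("down" if fc < -1 else "unchanged")
    print(f"{gene}: log2FC = {fc:.2f} ({status})")
```

Real pipelines (e.g., DESeq2-style analyses) add library-size normalization, dispersion estimation, and significance testing on top of this basic fold-change computation.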
This ultimately can provide needed information on possible ecological impacts. Such was the case when RNAseq-based differential transcriptomics were used to examine consortium gene expression during linuron degradation ( Albers et al., 2018 ). This approach led to the discovery of previously uncharacterized proteins with functions relevant to cellular performance.

11.2 Genome approaches

Several issues remain controversial in environmental microbiology, including the identification of bacterial species and the question of whether decreases in bacterial diversity are related to the loss of soil functions. There are methodological questions about the reliability of microbial diversity and functionality assessments ( Nannipieri et al., 2017 ). These are issues that we consider in this review, mainly in relation to the effectiveness of bioremediation programs and their ecological sustainability. These are challenges that omics-based technologies have faced. In this sense, Thavamani et al. (2017) anticipated that ecological recovery in post-mining processes would occur after the introduction of plant-specific microbial consortia for synchronized plant-microbial remediation. In that case, the identification of contaminants and biodegrading microorganisms was essential. Functional metagenomics combined with gene expression experiments allows the characterization of genes whose functions could not be described in isolation. This approach can also be used for genes with previously assigned functions that had never been shown to be involved with the subject under study. This was the case for arsenic, for which the microbial communities of the Tinto River, a natural acid mine drainage site, were explored in a search for novel genes involved in arsenic resistance ( Morgante et al., 2015 ).
Predicted metagenomics analysis was applied to biofilm and planktonic communities in reservoirs containing herbicide-contaminated wastewater to characterize genes whose functions were relevant for survival in these environments, by performing only 16S rDNA amplicon next-generation sequencing and analyzing the genes predicted for the identified OTUs. With this technique, it was possible to identify gene functions related to biofilm formation and structure, membrane transport, quorum sensing, and xenobiotic degradation ( Lima et al., 2020 ). Omics-based approaches are also of interest for the rare biosphere, which consists of bacterial, archaeal, and fungal species that occupy an exceedingly small segment of the microbial communities in soil and water environments. While low in numbers, these rare microbes may be functionally important and are inherently difficult to study, even through molecular approaches. For example, Wang et al. (2017) studied water samples from Lake Lanier, located in the northern part of the state of Georgia, USA, and used as a drinking water reservoir. These authors added 40 μM of 2,4-D, among other chemicals, to samples and considered this to be a perturbation of the chemical quality of the water. The populations of degraders of organic compounds such as 2,4-D, which are rarely detected in these environments by quantitative PCR techniques (qPCR) or metagenomic sequencing, increased significantly in abundance following the environmental perturbation. Data obtained from sequence analyses of various isolates with degradation capacity, or from metagenomes, showed that differing co-occurring alleles of degradation genes are often transmitted on plasmids. Studies also showed that several species dominated post-enrichment microbial communities.
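The logic of predicted metagenomics from 16S data, as used in studies like Lima et al. (2020), is essentially a weighted sum: OTU abundances are multiplied by the gene copy numbers predicted for each OTU's nearest sequenced relative. The toy sketch below shows that computation; all OTU labels, abundances, and copy numbers are invented (tfdA is a known 2,4-D degradation gene and luxI a quorum sensing gene, used here only as familiar labels).

```python
# Hypothetical 16S read counts per OTU.
otu_abundance = {"OTU1": 120, "OTU2": 30, "OTU3": 50}

# Hypothetical predicted gene copies per genome for each OTU's nearest relative.
gene_copies = {
    "OTU1": {"tfdA": 0, "luxI": 2},
    "OTU2": {"tfdA": 1, "luxI": 0},
    "OTU3": {"tfdA": 2, "luxI": 1},
}

def predicted_function_profile(abund, copies):
    """Sum abundance-weighted gene copies into a community functional profile."""
    profile = {}
    for otu, n in abund.items():
        for gene, c in copies[otu].items():
            profile[gene] = profile.get(gene, 0) + n * c
    return profile

print(predicted_function_profile(otu_abundance, gene_copies))
```

Tools such as PICRUSt additionally correct for 16S copy number per genome and report confidence metrics, since the inference depends on how closely each OTU matches a sequenced genome.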
This genetic reservoir, represented by members of the rare biosphere, can often be missed in metagenomic analyses; nevertheless, it is important because it enables microorganisms to respond to organic pollutants. Pyrosequencing of 16S rRNA gene amplicons and predicted metagenomic analysis were performed to identify species of microorganisms with higher potential for degradation of primitive electronic waste in contaminated aquatic environments. In this manner, new omics approaches could be used to detail potential genes related to the degradation of toxic organic pollutants and heavy metals associated with specific taxonomic units ( Liu et al., 2018 ). Metagenomics can also help in prospecting for herbicide degradation genes. With this intention, Jin et al. (2007) constructed a metagenomic library comprising DNAs collected from soils of a glyphosate storage area with a 15-year history of herbicide contamination. The library was screened using an E. coli mutant harboring a kanamycin cassette within the aroA sequence, which encodes an enzyme in the shikimic acid pathway, 5-enolpyruvylshikimate-3-phosphate synthase. This glyphosate-sensitive bacterial strain was unable to grow in a minimal medium with the herbicide unless a DNA fragment from the metagenomic library containing a gene encoding a glyphosate-insensitive enzyme was introduced into the mutant strain. Using this approach, a gene was fished out of the library that restored growth of the aroA mutant ( Jin et al., 2007 ).

11.3 Metabolic approaches

Many bioremediation strategies are based on the metabolic processes of isolated bacteria and sometimes fungi. However, some factors may hinder the application of these microbiota in many environments. One issue is that the metabolic processes may depend on communication among microbial communities organized in biofilms and may depend on quorum sensing. C. testosteroni , H. sulfonivorans and Variovorax spp.
cooperate in biofilm structures in soil for the synergistic degradation of the herbicide linuron. None of these species alone was able to degrade linuron ( Flemming et al., 2016 ). Thus, the speed of this process may differ between isolated strains and communities. Another factor is that the process may be incomplete, and the metabolites may be more toxic than the active molecules of the herbicides. Rather than studying these piecemeal, tolerance to oxidative stress, herbicide degradation, and other complex response systems may be better understood using omics-based approaches. Another strategy to improve xenobiotic bioremediation is to coordinate the expression of genes encoding degradative enzymes through quorum sensing systems. Quorum sensing is characterized by signaling molecules, dependent on population density, that control the behavior of various species of microorganisms and influence biofilm formation and metabolic pathways in a coordinated fashion. In line with this view, one strategy chosen to improve the bioremediation of polycyclic aromatic hydrocarbons (phenanthrene and pyrene) by Pseudomonas aeruginosa is the coordinated, quorum sensing-dependent expression of genes coding for degradative enzymes. These data were confirmed by using intercellular signaling acylated homoserine lactone bioreporters and GC-MS analysis ( Kumari et al., 2016 ). A synthetic consortium of E. coli strains was designed to produce isopropanol directly from cellobiose, through metabolic paths sequentially coordinated by a synthetic quorum sensing system ( Honjo et al., 2019 ). The ability to coordinate gene expression in different microbial species in cooperative response to environmental stimuli increases the ability to adapt to toxicologically-impacted environments. Various molecular approaches have also shown the importance of genes related to communication, including quorum sensing and community structuring in biofilms.
For example, genes encoding enzymes related to polycyclic aromatic hydrocarbon degradation were found in P. aeruginosa using a network analysis approach. Co-expression data from a publicly available database, the Gene Expression Omnibus, were used to uncover degradation genes under various stress conditions. As expected, no gene acted alone, and several stresses usually induced distinct metabolic pathways for degradation, quorum sensing, biofilm formation, and tolerance to antibiotics ( Yan and Wu, 2017 ). Another way of controlling the characteristics of microbial communities in structured biofilms is by the introduction of plasmids that control cell numbers. This strategy can be used with aerated and non-aerated membrane systems used in various water treatment operations, as well as in the food and power generation industries. Biofouling typically reduces flow and increases energy consumption in membrane-based systems due to the build-up of microorganisms in the polymeric matrices of biofilms. It may be possible to engineer the materials and bacteria to prevent biofouling by limiting bacterial cell numbers and consequently biofilm thickness. This concept is best exemplified by the engineering of a “beneficial” biofilm to encode an epoxide hydrolase. This enzyme can be used to degrade the xenobiotic epichlorohydrin, as well as limiting its own thickness by modulating a quorum sensing system and by secretion of nitric oxide. Epichlorohydrin is commonly used as a precursor for the synthesis of glycerin, epoxy resins, elastomers, pesticides, textiles, membranes, paper, and pharmaceuticals ( Wood et al., 2016 ). To avoid issues of horizontal transfer of the genes involved in quorum sensing, coding sequences were integrated into the bacterial chromosome. New methods using DNA, RNA, proteins, metabolites, metagenomes, and epigenomes have been used to elucidate the behavior of populations of various species under the influence of environmental contaminants. 
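The network analysis described above rests on co-expression: genes whose expression profiles correlate strongly across conditions are linked as network edges. The sketch below is a toy version of that idea using Pearson correlation; the expression values, gene labels, and the 0.9 threshold are invented for illustration and are not data from Yan and Wu (2017).

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical expression of three genes across four stress conditions.
expr = {
    "pahA": [1.0, 2.1, 3.9, 8.0],  # illustrative degradation gene
    "lasI": [1.1, 2.0, 4.2, 7.6],  # quorum sensing gene, tracks pahA here
    "rpoD": [5.0, 4.9, 5.1, 5.0],  # housekeeping gene, roughly flat
}

genes = list(expr)
edges = [
    (g1, g2)
    for i, g1 in enumerate(genes)
    for g2 in genes[i + 1:]
    if abs(pearson(expr[g1], expr[g2])) > 0.9  # arbitrary edge threshold
]
print(edges)  # only the co-varying pair survives the threshold
```

On this toy data, only the pahA-lasI pair passes the threshold, mirroring how co-expression networks can link a degradation gene with a quorum sensing gene under shared stress conditions.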
This is a shift from the standard biotechnological view of individual strains to a biotechnology based on microbial communities, consortia, or biofilms. In studies of aquatic environments contaminated with hexavalent chromium, the resistant bacterium Pannonibacter phragmitetus BB was evaluated using a number of molecular approaches to define its multiple-response system, including enzyme activity assays, chemotaxis assays, genome sequencing, comparative genome analysis, proteomic analysis, and metabolomic analysis. The results showed several enzymes and cellular processes involved in the resistance to and reduction of hexavalent chromium, including quorum sensing. However, the authors believe that in this case a single bacterial strain is more efficient in bioremediation than communities because of the oxidative stress generated ( Chai et al., 2019 ). The issue of multiple metabolic steps is a major one here; therefore, a more detailed approach should be taken with communities subjected to chromium or other xenobiotic contamination. Positive practical results are more likely to be achieved when the genetic and biochemical context of different species of bacteria is known in more depth. This has been shown for the sulfamethoxazole-degrading strains Vibrio alginolyticus and Pseudomonas pseudoalcaligenes , which, in the presence of bacterial communities with different ecological functions, such as ammonia oxidation, photosynthesis, and nitrogen fixation, can restore the environmental balance and water quality in milkfish culture ponds ( Chang et al., 2019 ). The third generation of high-throughput DNA sequencing is based on the true single-molecule sequencing (tSMS) platform of Helicos Biosciences, the PacBio platform of Pacific Biosciences, and the nanopore single-molecule technology of Oxford Nanopore Technologies. Zhang et al. (2019) used the PacBio platform to obtain the complete genome sequence of Klebsiella pneumoniae 2N3.
These authors obtained insights into genes that encode enzymes degrading sulfonylurea herbicides, supporting further exploration of degradation pathways for possible use in bioremediation. Using this technology, the authors were able to describe regulatory systems for biodegradation, including the esterase SulE and cytochrome P450. Despite the knowledge of herbicide-response systems obtained through omics approaches and the possibility of efficient bioremediation by bacterial communities, there is a potential pitfall for developing countries. Data from the Food and Agriculture Organization of the United Nations ( http://www.fao.org/statistics/en/ ) show that, in 2016, there was no proportional relationship between pesticide use and the percentage of undernourished people across continents. For example, Europe had the lowest rates of undernourished people in the world, with a 1.5% prevalence of severe food insecurity, while using only 1.66 kg/ha of pesticides. By contrast, Asia had one of the highest malnourishment rates in the world (11.4%) despite high pesticide use (3.64 kg/ha). Although bacterial communities can be manipulated for more sustainable bioremediation processes, omics approaches introduce several problems of their own: they are more expensive and complex than traditional analyses and require more powerful bioinformatics systems to analyze the large amounts of data generated ( Pathak et al., 2018 ). Without help to implement and fund omics technologies, developing agricultural countries will face greater difficulty in achieving the self-sufficiency needed to solve environmental problems. 12 Conclusions One of the guiding principles for the sustainable use of herbicides in agriculture is that they should target only weed-specific systems such as photosynthesis-related enzymes, amino acid production, and growth regulators. 
Unfortunately, the improper use of herbicides results in increased waste in the environment, which may lead to the selection of herbicide-resistant weeds and decreased viability of non-target organisms, including soil and water microbial communities. Several strategies are used to mitigate this situation. One of them is bioremediation, based on the enzymatic capacity of microorganisms responsible for herbicide degradation, transformation, or mineralization. There are limitations to this approach, including the production of more toxic metabolites via incomplete herbicide degradation. Herbicides cause oxidative stress; therefore, for degradation to occur, microorganisms need more plastic antioxidant mechanisms. The various techniques for mitigating herbicides in the environment have low efficiencies in waste elimination, generating significant environmental liabilities. Alternatives based on mixed microbial communities, which show higher genetic and metabolic diversity, appear to be more efficient than single strains in bioremediation. These communities present higher levels of gene complexity, interactions among several metabolic pathways, quorum sensing communication, and organization of microbial populations in biofilms, all of which require molecular (omics) approaches to gain deeper access to the large amounts of data generated. Bioremediation processes based on integrated bacterial consortia and manipulated via quorum sensing may represent the paradigm shift needed to achieve herbicide mineralization more efficiently and sustainably than currently occurs. Nevertheless, developing countries, which are major food producers and consumers of pesticides, must have access to these techniques in order to achieve sustainable production. Declarations Author contribution statement All authors listed have significantly contributed to the development and the writing of this article. 
Funding statement This work was supported by the Coordination for the Improvement of Higher Education Personnel ( CAPES ), the National Council of Technological and Scientific Development ( CNPq ), and the Foundation for Research Support of the State of Paraná (Fundação Araucária). Data availability statement Data included in article. Competing interest statement The authors declare no conflict of interest. Additional information No additional information is available for this paper. Supplementary content related to this article has been published online at https://doi.org/10.1016/j.heliyon.2020.e05767 . Appendix A Supplementary data The following is the supplementary data related to this article: Supplementary Material Table 1_V2.docx
|
[
"ALBERS",
"ALI",
"AKTAR",
"BAI",
"BARMAN",
"BEFFA",
"BRAUNER",
"CARSON",
"CASE",
"CHAI",
"CHANG",
"CHEN",
"CHU",
"DOBRZANSKI",
"DODDS",
"DONG",
"ELIAS",
"FAHEY",
"FERNANDEZ",
"FLEMMING",
"GAO",
"GOUDA",
"GRAVINA",
"GRIBOFF",
"GUTLEBEN",
"HONJO",
"HUANG",
"JIN",
"JITPRASUTWIT",
"KUMAR",
"KAMARAJ",
"KUMARI",
"KURENBACH",
"LI",
"LIMA",
"LIU",
"LIU",
"MAGGI",
"MANZO",
"MARTINEZ",
"MATZRAFI",
"MONGODIN",
"MORGANTE",
"NANNIPIERI",
"NATH",
"OLCHANHESKI",
"PATHAK",
"PILEGGI",
"PISCHEDDA",
"PRIONE",
"QIAN",
"REBOLLAR",
"RENDONVONOSTEN",
"RODRIGUEZCASTRO",
"ROSNER",
"RUI",
"SADOWSKY",
"SANCHEZRIEGO",
"SEFFERNICK",
"SILVA",
"SINGH",
"SVENNINGSEN",
"SVIRIDOV",
"SZEKACS",
"TETARDJONES",
"THAVAMANI",
"VALLE",
"VILELA",
"WANG",
"WANG",
"WANG",
"WEISSENBACH",
"WOOD",
"XIANG",
"YAN",
"ZHANG"
] |
4e69a37ccb8f4a1cb1e5ffcfd2308268_Sinergisme Lumbricus rubellus dengan Pseudomonas putida Pf-20 dalam Menginduksi Ketahanan Mentimun t_10.1016_S1978-3019(16)30300-X.xml
|
Synergism between Lumbricus rubellus and Pseudomonas putida Pf-20 in Inducing Cucumber Resistance to Cucumber Mosaic Virus
|
[
"WAHYUNI, WIWIEK SRI",
"ADDY, HARDIAN SUSILO",
"ARMAN, BUDI",
"SETYOWATI, TRI CANDRA"
] |
Both Lumbricus rubellus and Pseudomonas putida decompose soil organic matter. The population of P. putida Pf-20 increased when L. rubellus was introduced into the cucumber growth medium. Organic matter decomposition proceeded much better when the medium was introduced with both L. rubellus and P. putida Pf-20 than when it contained only one of these organisms. The activity of L. rubellus may serve to provide nutrients for both the cucumber and P. putida. The role of P. putida in reducing disease severity was enhanced when L. rubellus was introduced into the growth medium. The synergism of these two organisms reduced both the severity of disease caused by CMV-48 and the C/N ratio of the medium, while increasing the content of available phosphorus and potassium.
|
INTRODUCTION The use of fluorescent pseudomonad bacteria belonging to the plant growth-promoting rhizobacteria (PGPR), such as Pseudomonas cepacia, P. fluorescens, P. putida, P. aeruginosa , and P. aureofaciens , continues to be developed to provide biological control agents for plant diseases that are safe for the environment ( Chancey et al. 2002 ; Haas & Defago 2005 ). In growth media with a low Fe^2+ content, fluorescent bacteria can form siderophores, including salicylic acid, pyoverdine, or pyochelin, which act as transduction signals for induced systemic resistance (ISR) ( De Meyer & Hofte 1997 ; Press et al. 2001 ). These compounds increase the activity of pathogenesis-related (PR) protein genes encoding the enzymes peroxidase, β-1,3-glucanase, or β-D-glucuronidase in plants ( Leeman et al. 1996 ; Park & Kloepper 2002 ). When the plant interacts with a pathogen, the activity of these enzymes continues to increase. Zhang et al. (1998) showed that β-1,3-glucanase activity increased from 1.95 to 3.70%, and β-D-glucuronidase activity from 32.5 to 53.1%. Maurhofer et al. (1998) found a siderophore from P. fluorescens strain P3 that expresses the gene controlling salicylic acid biosynthesis. This siderophore can improve the mechanism of induced systemic resistance of tobacco and tomato against tobacco necrosis virus (TNV). Wahyuni et al. (2003) demonstrated that P. putida Pf-20 is able to induce tobacco resistance to cucumber mosaic virus (CMV). Lumbricus rubellus lives in habitats containing abundant organic matter and plays several important roles: (i) decomposition of organic matter, (ii) translocation of the products of organic matter decomposition, which contain microbes, to the upper soil layer, and (iii) the potential to spread and increase the numbers of bacteria and other microbes in the soil ( Gange 1993 ). This study aimed to determine (i) the effect of introducing L. rubellus on the activity of P. 
putida Pf-20 in inducing cucumber resistance to CMV; (ii) the growth medium composition most suitable for the growth of cucumber, L. rubellus , and P. putida Pf-20; (iii) the population dynamics of L. rubellus and P. putida Pf-20 in the growth media after treatment; and (iv) changes in the soil chemical properties of the cucumber growth media before and after the introduction of L. rubellus and P. putida Pf-20. MATERIALS AND METHODS Growth Media and Introduction of L. rubellus The experiment was conducted in an insect-free greenhouse using a randomised block design consisting of five growth media with or without the introduction of L. rubellus (C) and P. putida Pf-20 (B). Medium V was a mixture of paddy soil and humus (1:2). Medium W was paddy soil mixed with manure (1:1). Medium X was a mixture of paddy soil, humus, and manure (1:3:2). Medium Y was a mixture of paddy soil, humus, and manure (1:2:2), and medium Z was paddy soil alone. Each plastic bag ( polybag ) was filled with 3 kg of medium. Treatment means were compared using Duncan's test at the 5% level. Cucumber ( Cucumis sativus ), one of the hosts of CMV, and CMV-48 were used as the model system for this study. Lumbricus rubellus was obtained from an earthworm breeder in Jember and fasted for 24 hours in sterile compost before introduction. Twenty L. rubellus were introduced into each medium, and seven-day-old cucumber seedlings were planted five days after the introduction (dai) of L. rubellus into the growth media. Inoculation of P. putida Pf-20 and CMV Pseudomonas putida Pf-20 from the collection of T. Arwiyanto of Universitas Gadjah Mada and CMV-48 (subgroup II CMV; Wahyuni et al. 2003 ) were used in this study. The bacteria were propagated in peptone water containing 100 ppm rifampicin. CMV-48 was propagated on cucumber and used as inoculum at a concentration of 5 mg leaf ml^-1 in 5 mM PO4^3- buffer, pH 7. One week after the introduction of L. rubellus, P. 
putida Pf-20 was inoculated by drenching each growth medium with 20 ml of bacterial suspension at a density of 2 × 10^8 cfu/ml. CMV was inoculated mechanically onto the primary leaves 7 days after bacterial introduction. The plants were observed daily for the presence or absence of CMV infection. Disease severity was measured from the development of visible mosaic symptoms and calculated as Disease severity = [Σ(k · Nk) / (Z · N)] × 100%, where Nk = the number of leaves with disease severity scale k (k = 0, 1, 2, 3, 4) per plant, N = the total number of leaves observed per plant, and Z = the highest severity scale. Disease severity was calculated at 14, 21, and 28 days after bacterial introduction. According to Raupach et al. (1996) and Ongena et al. (2000) , a reduced level of plant disease severity is an indicator that the bacteria are able to induce systemic resistance in the plant. Populations of L. rubellus and P. putida Pf-20 in the Growth Media The numbers of earthworms before and after treatment were counted and compared, using a light-exposure method, to determine which growth medium composition was suitable for the development and growth of L. rubellus. The total weight of the earthworm population was measured before and after treatment, and the change in weight was obtained by subtraction. Bacterial populations were monitored according to Wahyuni et al. (2003) to determine the effect of L. rubellus on increasing the bacterial population in the rhizosphere, on the root surface, and within root tissue. Samples were taken at 14, 21, and 28 days after bacterial introduction. Root Systems of Cucumber Plants To determine the effect of introducing L. rubellus and P. putida Pf-20 on plant root systems, the total root length and root density of plants grown in each medium were compared. Total root length and root density were measured as in Wahyuni et al. (2003) . Analysis of Changes in N, C, P, and K Content and pH of the Growth Media Lumbricus rubellus can affect the chemical properties of soil. The N, P, and K contents of the soil were analysed before and after treatment to determine these changes. 
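The disease severity formula above can be sketched in a few lines of Python; the leaf scores below are invented for illustration and are not study data. Since each leaf contributes its own scale value, Σ(k · Nk) is simply the sum of all per-leaf scores.

```python
def disease_severity(leaf_scales, z_max=4):
    """Disease severity (%) = sum(k * Nk) / (Z * N) * 100,
    where Nk is the number of leaves scored at scale k,
    N the total number of leaves observed, and Z the highest scale."""
    n = len(leaf_scales)
    if n == 0:
        raise ValueError("no leaves observed")
    # sum of per-leaf scores equals sum over k of k * Nk
    return sum(leaf_scales) / (z_max * n) * 100

# Hypothetical plant: six leaves scored on the 0-4 mosaic-symptom scale
scores = [0, 1, 1, 2, 3, 4]
print(f"{disease_severity(scores):.2f}%")  # → 45.83%
```

A plant with all leaves at the maximum scale scores 100%, and a symptomless plant scores 0%, matching the bounds implied by the formula.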
Total N content was analysed by the Kjeldahl method, organic C content by the Curmis method, and available P by the Bray-1 method; available K was measured as the K extracted by 1 N ammonium acetate at pH 7.0. RESULTS Reduction of CMV-48 Disease Severity in Cucumber Following the Introduction of L. rubellus and P. putida Pf-20 Variation in mosaic symptoms on leaves, with different values of the scale k, was used to calculate plant disease severity ( Figure 1 ). The severity of CMV disease on plants grown on media V, W, X, Y, and Z introduced with P. putida , with or without L. rubellus , was lower than on the same media without the bacteria (P ≤ 0.05). On medium X introduced with L. rubellus, P. putida was able to reduce disease severity to 11.58% ( Table 1 ). On media with the bacteria, disease severity was very low at 14 days after bacterial introduction, and at 28 days its value increased slightly relative to 21 days. The introduction of L. rubellus into media V, W, X, Y, and Z increased the population of P. putida Pf-20 in the cucumber rhizosphere compared with the same media without the red worm ( Table 1 ). Introducing L. rubellus into 3 kg of medium X increased the P. putida population 6.43-fold (42.87 × 10^6 cfu/g dry soil); in medium Y the increase was 5.63-fold, in medium V 4.11-fold, and in medium W 3.93-fold. Based on a t-test at the 0.05 level, L. rubellus played a greater role in increasing the P. putida population in the rhizosphere than did the composition of the growth medium ( Table 2 ). P. putida played a greater role than L. rubellus in reducing disease severity, down to 8.55% ( Table 3 ). This indicates that L. rubellus can enhance the activity of P. putida in dominating the rhizosphere and root system. Consequently, the activity of P. putida can strengthen the induction of systemic resistance of cucumber against CMV-48. Effect of Growth Medium Composition and P. 
putida Pf-20 on Changes in the Population and Weight of L. rubellus Changes in the population and total weight of L. rubellus resulted from differences in growth medium composition and the introduction of P. putida Pf-20 ( Table 4 ). The L. rubellus population was higher in media introduced with the bacteria than in media without them. The total weight of L. rubellus increased in all media supplied with straw humus and/or manure (media V, W, X, and Y). Both the population and the total weight of L. rubellus were higher in media with the bacteria than in media without them. Medium X (paddy soil, straw humus, and manure, 1:3:2) containing the bacteria was the best medium for worm growth, with a population of 86.25% and a weight of 64%; on the same medium without the bacteria, the worm population was 62.50%. Effect of Growth Medium Composition and the Interaction of L. rubellus, P. putida Pf-20, and CMV-48 on Total Root Length and Root Density Plants grown on media of different compositions had different total root lengths ( Figure 2 ). Total root length on media with both L. rubellus and the bacteria was greater than on media introduced with only one of the organisms. Total root length on medium X with L. rubellus and the bacteria did not differ from that on medium V (P > 0.05), but root density was better on medium V. Root density on media with both L. rubellus and the bacteria was greater than on media introduced with only one of the organisms ( Figure 2 ). Root architecture improved with increasing total root length and root density. Changes in the Nutrient Content of the Growth Media after the Introduction of L. rubellus, P. putida Pf-20, and CMV-48 The introduction of L. rubellus and the bacteria changed the nutrient content and pH of the media, and the magnitude of this effect depended on the composition of the growth medium ( Table 5 ). Except in medium Z (paddy soil), the pH of the growth media decreased after treatment with L. rubellus and the bacteria. 
After treatment, total N and organic C contents decreased in all growth media introduced with both L. rubellus and the bacteria, or with only one of the organisms. Available P content increased in growth media containing straw humus and/or manure (media V, W, X, Y), but available K content decreased in media W and Y after the introduction of L. rubellus and the bacteria, or of only one of the organisms. The changes in nutrient content and pH were related to changes in the C/N ratio. Before treatment, media X and Y had lower C/N ratios than the other media. After the introduction of L. rubellus and the bacteria, or of only one of the organisms, their C/N ratios decreased ( Table 5 ). This indicates that L. rubellus and the bacteria took part in the decomposition of organic matter in the media, and that the role of the two organisms together was more significant than when only one of them was introduced. DISCUSSION The increase of the bacterial population and its spread in the rhizosphere was assisted by Lumbricus sp., so the bacteria rapidly dominated and colonised the roots. The more organic matter available in the medium (media X or Y), the better the development of L. rubellus. Conversely, in medium Z (paddy soil), the population and total weight of L. rubellus decreased, because the organic matter content of paddy soil is low and its structure is dense, with a high clay content that is difficult for the red worm to move through, so L. rubellus tended to leave the medium. Lumbricus rubellus decomposes the organic matter in the medium. The organic matter in its digestive tract and the castings (vermicompost) it produces can serve as a good medium for the development of P. putida Pf-20. Although the bacterial population in L. rubellus castings was not counted in this study, Pedersen and Hendriksen (1993) found that the population of P. 
putida MM11 increased to 3.7 × 10^4 cfu/g in Lumbricus spp. castings one day after excretion, which was lower than the population of P. putida MM1 (9.9 × 10^6 cfu/g). Therefore, the synergism of L. rubellus and P. putida Pf-20 enhances the significance and role of PGPR as inducers of systemic resistance in plants. The ability of P. putida Pf-20 in the rhizosphere to colonise roots was shown by the high bacterial populations on the root surface and within root tissue. The higher the population of P. putida Pf-20 colonising the roots, the lower the severity of CMV-48 disease. The mechanism by which P. putida Pf-20 reduces the severity of CMV disease was not investigated here, but according to Ongena et al. (2000) , P. putida BTP1 can form iron-chelating siderophores and siderophores that elicit phytoalexin production to protect cucumber from Pythium aphanidermatum , as shown by the accumulation of polyphenols in the leaves and roots of infected plants. The growth media of different compositions had pH values between 6.5 and 7.5. These conditions are suitable for the growth of cucumber ( Sutarya et al. 1995 ), of L. rubellus ( Paramita 2004 ), and of P. putida Pf-20 and other fluorescent pseudomonads isolated from the rhizosphere of Mimosa sp. in tobacco plantations in Deli ( Arwiyanto 1997 ). The decrease in pH after treatment with L. rubellus and P. putida Pf-20 was partly a result of the activity of L. rubellus and the bacteria in decomposing organic matter. According to Meeting (1993) , this decrease in pH is caused by the relatively high production of organic acids during the decomposition of organic matter by soil microbes. The introduction of L. rubellus and the bacteria caused decreases in organic C and total N. The decrease in organic C content in the media is related to the C/N ratio: the higher the C/N ratio, the slower the rate of organic matter decomposition, so the availability of N in the soil slowly declines. 
Besides the use of soil N by plants, soil organisms, and microbes, the volatilisation of NH3 and its oxides also contributes to the decrease of N content in the media ( Gange 1993 ). The introduction of L. rubellus and the bacteria synergistically increased the PO4^3- content of the media. L. rubellus castings contain high levels of PO4^3-, because L. rubellus helps degrade phosphate in the medium into the available forms H2PO4^-, HPO4^2-, or PO4^3- ( http://www.agrolinkmoa.my/pqnet/kwln/cacing.html ). In growth media with a higher proportion of straw humus, K content increased after the introduction of L. rubellus and P. putida Pf-20, because 80% of the potassium source is straw humus ( Darmawijaya 1997 ). Conversely, K content decreased in growth media with a higher proportion of manure (media W and Y). At 45 days after planting, the available K content of the media decreased, because part of the K is used by L. rubellus to form cocoons ( http://www.agrolinkmoa.my/pqnet/kwln/cacing.html ). Lumbricus rubellus in growth media rich in straw humus and manure, namely medium X (paddy soil, straw humus, and manure, 1:3:2), followed by medium Y (a mixture of paddy soil, humus, and manure, 1:2:2), played a significant role in increasing the activity of P. putida Pf-20 in inducing cucumber resistance to CMV. The synergism of these two introduced organisms improved the availability of nutrients in the media, so that plant root growth was better.
|
[
"ARWIYANTO",
"CHANCEY",
"DARMAWIJAYA",
"DEMEYER",
"GANGE",
"HAAS",
"LEEMAN",
"MAURHOFER",
"MEETING",
"ONGENA",
"PARAMITA",
"PARK",
"PEDERSEN",
"PRESS",
"RAUPACH",
"SUTARYA",
"WAHYUNI",
"ZHANG"
] |
3c5a157a445248329435eb842202ad20_Overall and stage-specific survival of patients with screen-detected colorectal cancer in European c_10.1016_j.lanepe.2022.100458.xml
|
Overall and stage-specific survival of patients with screen-detected colorectal cancer in European countries: A population-based study in 9 countries
|
[
"Cardoso, Rafael",
"Guo, Feng",
"Heisser, Thomas",
"De Schutter, Harlinde",
"Van Damme, Nancy",
"Nilbert, Mef Christina",
"Christensen, Jane",
"Bouvier, Anne-Marie",
"Bouvier, Véronique",
"Launoy, Guy",
"Woronoff, Anne-Sophie",
"Cariou, Mélanie",
"Robaszkiewicz, Michel",
"Delafosse, Patricia",
"Poncet, Florence",
"Walsh, Paul M.",
"Senore, Carlo",
"Rosso, Stefano",
"Lemmens, Valery E.P.P.",
"Elferink, Marloes A.G.",
"Tomšič, Sonja",
"Žagar, Tina",
"Marques, Arantza Lopez de Munain",
"Marcos-Gragera, Rafael",
"Puigdemont, Montse",
"Galceran, Jaume",
"Carulla, Marià",
"Sánchez-Gil, Antonia",
"Chirlaque, María-Dolores",
"Hoffmeister, Michael",
"Brenner, Hermann"
] |
Background
An increasing proportion of colorectal cancers (CRCs) are detected through screening due to the availability of organised population-based programmes. We aimed to analyse survival probabilities of patients with screen-detected CRC in European countries.
Methods
Data from CRC patients were obtained from 16 population-based cancer registries in nine European countries. We included patients with cancer diagnosed from the year organised CRC screening programmes were introduced until the most recent year with available data at the time of analysis, whose ages at diagnosis fell into the age groups targeted by screening. Patients were followed up with regards to vital status until 2016-2020 across the various countries. Overall and CRC-specific survival were analysed by mode of detection and stage at diagnosis for all countries combined and for each country separately using the Kaplan-Meier method.
Findings
We included data from 228 134 patients, of whom 134 597 (aged 60-69 years at diagnosis targeted by screening in all countries) were considered in analyses for all countries combined. 22·3% (38 080/134 597) of patients had cancer detected through screening. Most screen-detected cancers were found at stages I-II (65·6% [12 772/19 469 included in stage-specific analyses]), while the majority of non-screen-detected cancers were found at stages III-IV (56·4% [31 882/56 543 included in stage-specific analyses]). Five-year overall and CRC-specific survival rates for patients with screen-detected cancer were 83·4% (95% CI 82·9-83·9) and 89·2% (88·8-89·7), respectively; for patients with non-screen-detected cancer, they were much lower (57·5% [57·2-57·8] and 65·7% [65·4-66·1], respectively). The favourable survival of patients with screen-detected cancer was also seen within each stage – five-year overall survival rates for patients with screen-detected stage I, II, III, and IV cancers were 92·4% (95% CI 91·6-93·1), 87·9% (86·6-89·1), 80·7% (79·3-82·0), and 32·3% (29·4-35·2), respectively. These patterns were also consistently seen for each individual country.
Interpretation
Patients with cancer diagnosed at screening have a very favourable prognosis. In the rare case of detection of advanced stage cancer, survival probabilities are still much higher than those commonly reported for all patients regardless of mode of detection. Although these results cannot be taken to quantify screening effects, they provide useful and encouraging information for patients with screen-detected CRC and their physicians.
Funding
This study was supported in part by grants from the German Federal Ministry of Education and Research and the German Cancer Aid.
|
Research in context Evidence before this study We searched in PubMed for articles reporting on survival of patients with screen-detected colorectal cancer (CRC) in European countries that were published up to January 2, 2022. We used the following search terms: “survival” AND (“colon cancer” OR “rectal cancer” OR “colorectal cancer”) AND “screen*” AND “Europe*”. Higher survival rates for patients with screen-detected cancer compared to patients with symptom-detected cancer have been reported in the context of pilot studies prior to introduction of population-based screening programmes and from a few regional and nationwide studies conducted during the first years of screening implementation. Given the increasing proportion of patients with cancer detected at screening, a comprehensive, up-to-date, Europe-wide survival analysis for this group of patients, especially by stage at diagnosis, is warranted. Added value of this study To the best of our knowledge, this is the first multi-country European study to provide detailed data on overall and CRC-specific survival probabilities of patients with screen-detected CRC, by stage at diagnosis. Implications of all the available evidence Although the data provided in this study cannot be taken to quantify screening effects, they can and should be used to inform patients, physicians, and the general population about the prognosis of patients with screen-detected CRC, who might otherwise feel discouraged by rather unfavourable estimates commonly available for all CRC patients irrespective of mode of detection. The data provided herein may further encourage the eligible population to make use of available screening options. Introduction Colorectal cancer (CRC) is the second most commonly diagnosed cancer and the second leading cause of cancer death in Europe, with nearly 520 000 new diagnoses and 245 000 related deaths in 2020. 
Five-year net survival has meanwhile reached levels above 60% in many European countries, 1 with large variations by stage at diagnosis – from around 90% for patients diagnosed at stage I to just slightly over 10% for patients diagnosed with metastatic (stage IV) disease. 2 3 Several CRC screening methods have been recommended for population-wide implementation, including faecal occult blood test (FOBT) (in particular faecal immunochemical test [FIT]), flexible sigmoidoscopy, and colonoscopy. In the past two decades, many European countries have launched programmes offering either one or multiple of these screening options, 4 and an increasing proportion of CRC cases are detected by screening. 5 6 Survival rates for screen-detected CRC patients are expected to be considerably higher than those commonly reported for all CRC patients combined due to the more favourable stage distribution of screen-detected cancers, but also within the same stage as a result of detection of less aggressive, more slowly progressing cancers. Furthermore, “lead time”, i.e. mere advancement of the time of diagnosis even if chances of cure are not increased, or overdiagnosis of cancers that would have never been detected in the absence of screening may additionally contribute to higher survival of patients with screen-detected cancer. 7 Although higher overall and stage-specific survival among patients with screen-detected CRC can therefore not be interpreted as reflecting screening benefits, it would still be most valuable for screen-detected CRC patients and their physicians to know about their survival probabilities to prevent them from being discouraged by overly pessimistic survival figures that are commonly available for all patients combined only, regardless of the mode of detection. The aim of this study was to provide overall and stage-specific survival rates for patients with screen- and non-screen-detected CRC in nine European countries with organised screening programmes. 
2 Methods Study design and data collection In this longitudinal, international population-based study, data from CRC cases (ICD-10 codes C18-C20) were obtained from 16 population-based cancer registries in nine European countries (Belgium, Denmark, England, Ireland, the Netherlands, and Slovenia with nationwide data; and France, Italy, and Spain with regional data). Patients included in this analysis were diagnosed from the year organised CRC screening programmes were implemented up to the most recent year with available data at the time of analysis (up to 2014-2016 in most countries/regions), and were followed up with regards to vital status until December 2016–January 2020 across the various countries/regions ( Table 1 ). We collected the following patient- and tumour-level data: sex, age at diagnosis, date of diagnosis, mode of diagnosis (ie, screen- or non-screen-detected cancer), topography (ie, tumour site), tumour histology, stage at diagnosis ( Union Internationale Contre le Cancer [UICC] TNM stage at the time of diagnosis), and date of and vital status at last contact (for Belgium, Denmark, England, Ireland, and the Netherlands, intervals in days between diagnosis and follow-up were provided instead of date of last contact). Cause of death information was also obtained from Denmark, England, Ireland, Italy (Turin), Slovenia, and Spain (Basque Country, Girona, and Tarragona). Data sources and relevant data quality indicators are provided in the appendix (pp 2-3) . Additionally, we summarised relevant characteristics of the organised screening programmes implemented in the included countries, notably screening test, year of programme initiation, target age group, screening interval, coverage, and participation ( appendix pp 4-5 ). These data were obtained from Europe-, nation-, and region-wide screening reports ( appendix pp 4-5, 21 ). This study was approved by the Ethics Committee of the Medical Faculty of the University of Heidelberg (S-84/2019). 
Statistical analyses In this analysis, cases whose ages at diagnosis were not in the age range of the population targeted by screening, cases with missing data on sex, vital status, missing or inconsistent dates of diagnosis and follow-up/death (ie, date of last follow-up/death preceding date of diagnosis) or null survival (same date of diagnosis and death) were excluded ( Table 1 ). Furthermore, cases with missing TNM staging data were excluded from stage-specific analyses. For England (years of diagnosis 2006-2011), Turin, Italy (all years), and the Basque Country, Spain (year 2009), TNM staging data were missing for more than 85% of the cases; therefore, these countries (years of diagnosis) were not considered in analyses of stage. Data were analysed for all countries combined and for each country individually. In analyses where data from all countries were pooled, only patients aged between 60 and 69 years at diagnosis were included, as this was the target group common to all screening programmes in the included countries ( appendix pp 4-5 ). First, we analysed demographic and tumour characteristics of CRC cases, namely sex, age at diagnosis, tumour location (proximal colon [caecum to transverse colon], distal colon [splenic flexure to sigmoid colon], rectum [rectosigmoid junction and rectum], and overlapping or unspecific location), and stage at diagnosis, according to mode of detection. Differences between screen- and non-screen-detected cases were analysed through chi-square test. We subsequently assessed overall survival for screen-detected, non-screen-detected, and all CRC patients combined. Survival time was defined as the difference in days between the date of diagnosis and the date of death (deceased patients) or was censored at the date of last follow-up. For England, Ireland, Italy (Turin), Slovenia, and Spain (Basque Country, Girona, and Tarragona), for which cause of death information was available, CRC-specific survival was also assessed. 
Survival time was censored at the date of death from causes other than CRC; and deceased cases with unknown cause of death were excluded ( appendix p 1 ). CRC-specific survival analyses were not done for Denmark because cause of death information was missing for a large proportion of cases (34% of deceased patients with screen-detected cancer and 23% with non-screen-detected cancer). Survival was estimated using the Kaplan-Meier method, and three- and five-year survival rates and 95% confidence intervals (CIs) were calculated – for all CRC cases and screen- and non-screen-detected cases separately – by sex, age at diagnosis, tumour location, and stage at diagnosis. Survival curves up to five years after diagnosis were plotted according to mode of detection and stage at diagnosis. For Denmark and the Netherlands, survival was only analysed up to four and three years after diagnosis, respectively, given the recent implementation of screening and lack of data for later follow-up times. We abstained from statistically quantifying potential differences in survival between patients with screen- and non-screen-detected cancer (eg, through Cox proportional-hazards models), because this study was not conceived, and its results should not be used, to quantify screening effects. The purpose, instead, is to inform patients with screen-detected (and non-screen-detected) cancer about their survival probabilities, which may be very different from the ones that are commonly available to them (ie, for all patients combined regardless of mode of detection). All analyses were conducted using SAS 9.4 (SAS Institute Inc., Cary, NC, USA). An alpha level of 0.05 was set for statistical tests. Role of the funding source The sponsor had no role in the study design, data collection, data analysis, interpretation of data, writing of the report, or the decision to submit the paper for publication. 
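The Kaplan-Meier (product-limit) estimator described above can be illustrated with a short sketch (not the authors' SAS code). It takes each patient's follow-up time and an event indicator (1 = death, 0 = censored at last follow-up), matching the survival-time definition in the text:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimates.

    times  : follow-up time for each patient (e.g. days from diagnosis)
    events : 1 if the patient died, 0 if censored at last follow-up
    Returns a list of (time, survival probability) at each event time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = leaving = 0
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            leaving += 1
            i += 1
        if deaths:
            survival *= 1 - deaths / at_risk   # product-limit step
            curve.append((t, survival))
        at_risk -= leaving                     # deaths and censorings leave the risk set
    return curve

# Toy cohort: deaths at t = 1, 3, 4; censoring at t = 2 and 5
print(kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 1, 0]))
```

Three- and five-year rates as reported in the paper are simply the estimated survival at the event time closest below 3 or 5 years; confidence intervals would additionally require a variance estimate such as Greenwood's formula.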
Results In total, we included 228 134 CRC cases, of whom 134 597 (aged 60-69 years at diagnosis, the range targeted by screening in all countries) were considered in analyses for all countries combined ( Tables 1 and 2 ). Demographic and tumour characteristics are shown for all patients regardless of mode of detection and separately for patients with screen- and non-screen-detected cancer in Table 2 (all countries combined) and appendix (pp 6-10) (each country separately). The majority of patients were male (62·0% [83 444/134 597]), had cancer in the distal colon or rectum (68·6% [92 396/134 597]), and had cancer detected outside of screening (77·7% [104 517/134 597]); about half of the cancers were diagnosed at advanced stages III or IV (50·8% [38 579/76 012 included in stage-specific analyses]) ( Table 2 ). Across countries, we observed proportions of male/female patients similar to those seen for all countries combined; however, the distribution of cancers by subsite and stage varied considerably. Also, there were large inter-country differences in the proportion of patients with screen-detected CRC (11·3-40·7%) ( appendix pp 6-10 ). In comparison with non-screen-detected cases, screen-detected cases were more often male (66·6% [19 723/30 080] vs. 61·0% [63 721/104 517], p<0·0001), had cancer more often detected in the distal colon (42·0% [12 641/30 080] vs. 28·9% [30 190/104 517]) and less often in the proximal colon (22·3% [6709/30 080] vs. 29·1% [30 459/104 517]) (p<0·0001), and had cancer much more frequently detected at stage I (43·0% [8380/19 469] vs. 18·6% [10 531/56 543]) and much less frequently detected at stage IV (7·6% [1476/19 469] vs. 27·3% [15 422/56 543]) (p<0·0001) ( Table 2 ). These patterns were also consistently seen for each country separately ( appendix pp 6-10 ). Median follow-up times for all cases combined and screen- and non-screen-detected cases separately, by country, are presented in the appendix (pp 11-13) . 
In all countries, screen-detected CRC patients had substantially higher overall survival than patients with non-screen-detected cancer and all patients combined ( Figure 1 ; appendix pp 14-17 ). Analysing the data for all countries together, three-year overall survival for patients with screen-detected cancer, non-screen-detected cancer, and all patients combined was 89·9% (95% CI 89·6-90·3), 65·8% (95% CI 65·5-66·1), and 71·1% (95% CI 70·9-71·4), respectively. Five years after diagnosis, overall survival for screen-detected CRC patients was still very high (83·4% [95% CI 82·9-83·9]), and much higher than for patients with non-screen-detected cancer (57·5% [95% CI 57·2-57·8]) and all CRC patients combined (63·1% [95% CI 62·8-63·4]). Overall survival estimates according to stage at diagnosis are shown in Figure 2 and in the appendix (pp 14-17) . In analyses of all countries together, five-year overall survival for patients with stage I cancer was 92·4% (95% CI 91·6-93·1) (screen-detected), 86·7% (95% CI 86·0-87·4) (non-screen-detected), and 89·1% (95% CI 88·6-89·6) (all patients); and for patients with stage II cancer 87·9% (95% CI 86·6-89·1) (screen-detected), 79·2% (95% CI 78·5-80·0) (non-screen-detected), and 81·2% (95% CI 80·5-81·8) (all patients). For screen-detected CRC patients, five-year overall survival was also rather high even with a diagnosis of stage III cancer (80·7% [95% CI 79·3-82·0]); the corresponding figures for patients with non-screen-detected cancer and all patients combined were only 66·2% (95% CI 65·3-66·9) and 69·4% (95% CI 68·7-70·1). 
Further, in the rare case of detection of metastatic (stage IV) cancer through screening, patients still had a 48·4% (95% CI 45·7-51·1) probability of survival three years after diagnosis and 32·3% (95% CI 29·4-35·2) five years after diagnosis (compared to 3-year survival of 24·5% [95% CI 23·9-25·2] and 26·6% [95% CI 25·9-27·3] and 5-year survival of 13·9% [95% CI 13·3-14·5] and 15·4% [95% CI 14·8-16·0] for non-screen-detected and all patients combined, respectively). Survival rates by sex, age, and tumour location are also presented in the appendix (pp 14-17) . Overall, survival probabilities were slightly higher in women than in men, in patients diagnosed at younger ages than in those diagnosed at older ages, and in patients with a cancer located in the distal colon than in those with proximal colon cancer or rectal cancer. CRC-specific survival estimates are shown in Figures 3 and 4 and in the appendix (pp 18-20) . Looking at the data from all countries together, CRC-specific survival five years after diagnosis was 89·2% (95% CI 88·8-89·7) for patients with screen-detected cancer and was as low as 65·7% (95% CI 65·4-66·1) and 71·1% (95% CI 70·7-71·4) for non-screen-detected and all patients combined, respectively. CRC-specific survival patterns by sex, age, stage, and tumour location were in line with those described for overall survival. The abovementioned patterns of survival according to mode of detection and stage at diagnosis were also consistently seen across all countries ( Figures 1-4 , appendix pp 14-20 ). Discussion In this international population-based study, we provided overall and disease-specific survival probabilities for screen- and non-screen-detected CRC patients, and all CRC patients irrespective of mode of detection, for nine European countries that have introduced organised population-based CRC screening programmes. 
Survival rates for patients with screen-detected cancer were much higher than those found for patients with non-screen-detected cancer and all patients combined, and this pattern was consistently seen for all countries and within each disease stage. Survival probabilities for patients with screen-detected CRC have been previously reported in the context of pilot studies prior to implementation of population-based screening programmes 8 , and in a few regional and nationwide studies conducted during the first years of screening roll-out, mostly in the early 2000s. 7,9 These studies reported five-year overall survival (patients aged 50-69, 50-74, or 50-79) of around 80% or above, ie, close to or within the range of our findings. 10–15 In our study, besides presenting more up-to-date survival probabilities for patients with screen-detected cancer, we provide the first Europe-wide analysis – according to stage at diagnosis – in the era of organised population-based programmes. It is important to stress that this study was not designed to show or prove potential benefits of screening on CRC burden; these have been consistently shown elsewhere by substantial effects on CRC incidence and mortality. In fact, the higher survival of screen-detected cases may partly reflect lead-time bias (mere advancement of diagnosis through screening without improving the chances of prolonged life), length-time bias (higher proportions of slowly growing and less aggressive tumours among screen-detected cases), or overdiagnosis bias (a sort of length-time bias, in which a tumour that would have never caused symptoms or death is found at screening). 16–22 Length-time bias may indeed help explain the higher survival even within each stage for patients with screen-detected cancer than for patients with non-screen-detected cancer. 
Besides, residual lead-time bias, potentially not fully accounted for by the rather crude classification of stage, might have also played a role; yet a previous study has shown that the higher survival of patients with screen-detected cancer remained even after adjustment for tumour size and number of affected lymph nodes. 23 Moreover, patients undergoing screening might also be more likely to adhere to therapy and to behave in a more health-conscious way overall (eg, have a healthier lifestyle), 7 potentially influencing prognosis and, to a certain extent, contributing to the observed disparities in survival by mode of detection, particularly for patients with stage III and IV cancers. 5 Irrespective of the causes for the very favourable prognosis of patients with screen-detected cancer, our data show the actual survival probabilities for this increasing group of patients and are thus of high clinical relevance. These data may not only prevent screen-detected CRC patients from being discouraged by unfavourable survival estimates commonly available for all patients regardless of mode of detection, but also encourage the general eligible population to make use of available screening options. We also observed that survival of patients with cancer located in the distal part of the colon was overall higher than that of patients with proximal colon cancer. This observation may be explained, to a large extent, by a more favourable stage distribution of cancers located in the distal than in the proximal colon, as well as by distinct molecular features between subsites. 22 , 24–26 Despite the overall very high survival for patients with screen-detected cancer, we still observed some variability across countries in total and stage-specific survival, which might in part reflect disparities in provision of cancer care (eg, adjuvant and palliative therapy). 
Comparisons between countries should, however, be made with caution given the different years and age groups included, which reflect the variety of screening strategies in the included countries. There are also differences in the primary screening tests available that need to be kept in mind – in Belgium, both guaiac-based FOBT (gFOBT) and FIT were used; in England, gFOBT; in France, gFOBT up to 2014 and FIT from 2015 on; in Italy, flexible sigmoidoscopy and FIT; in the other included countries, FIT. 22 These differences in screening strategies might in part help explain the observed variations in stage and subsite distribution of cancers across countries. For example, the Netherlands and Slovenia, with FIT-based programmes and comparatively high participation rates, are among the countries with the most favourable stage distribution and the highest share of distal CRCs, which are more often found at screening. Besides, when comparing the data for all patients combined, one also needs to take into account that the share of screen-detected cancers varied substantially across countries (overall higher for countries with FIT-based programmes and higher participation rates). For these reasons, we did not place much focus on comparing results between countries and, instead, pointed to the overall patterns. 22 This study has several strengths and limitations. To our knowledge, this is the first multi-country population-based study from Europe providing overall and CRC-specific survival estimates for screen-detected CRC patients separately. To do so, we used high-quality cancer registry data with high completeness levels of stage at diagnosis (> 90% for most registries), which allowed us to conduct detailed survival analyses according to stage. 
As far as limitations are concerned, besides the inclusion of different years and age groups across countries, the very recent implementation of screening in Denmark and the Netherlands prevented us from providing data on survival five years after diagnosis for patients diagnosed in these two countries; and for several countries, over 50% of patients were followed up for less than five years. Also, the low numbers of patients with screen-detected cancers with long follow-up time in some countries, particularly Ireland, led to estimates of survival with large confidence intervals (especially in stage-specific analyses). Furthermore, there were inter-country differences in registration of mode of detection. Specifically, in France, Ireland, and the Netherlands, data were obtained from patients’ medical records instead of linkage with screening databases from the organised programmes and may be more prone to misclassification. Finally, the lack of information regarding interval cancers did not allow us to provide separate survival probabilities for patients with cancer detected after a negative test/follow-up colonoscopy and before the next test was due. It is also worth mentioning that the data shown in this study are for patients with cancer diagnosed in nine (high-income) European countries and are likely to be very different from those in other countries or regions. In particular, the lower levels of health care provision, disease diagnosis, and treatment in low-income countries are expected to lead to lower survival probabilities than those reported herein. 2 In summary, we found that patients with screen-detected CRC have a very favourable prognosis in European countries. Even in the rare case of detection of cancer at advanced stage through screening, the survival probabilities are much higher than those reported for patients with non-screen-detected cancer and for all CRC patients combined. 
These data are essential to appropriately inform patients, physicians, and the general population about the survival probabilities after a screening-based CRC diagnosis. Contributors HB and RC conceived the study. RC conducted the literature search. HDS, NVD, MCN, JC, A-MB, VB, GL, A-SW, MCari, MR, PD, FP, PMW, CS, SR, VEPPL, MAGE, ST, TZ, ALdMM, RM-G, MP, JG, MCaru, AS-G and M-DC prepared the national and regional databases. RC carried out the analysis and drafted the manuscript. All authors contributed to the interpretation of the results and critically revised the manuscript. RC and HB directly accessed and verified the raw data and take responsibility for the integrity and accuracy of the analyses. All authors had full access to all the data reported in the study and accept responsibility to submit the paper for publication. Data sharing statement Summary statistical data will be available from the corresponding author upon reasonable request with the permission of the contributing cancer registries. Declaration of interests HDS and NVD are employed by the Belgian Cancer Registry, which is financed by regional and federal authorities for collecting data regarding new cancer diagnoses and cancer screening in Belgium, and for disseminating associated epidemiological parameters. Acknowledgements We are thankful to all cancer registries and their staff for the efforts in collecting and preparing the data for this study. 
Specifically, Belgian Cancer Registry (BCR), Danish Cancer Registry, Danish Colorectal Cancer Group Database, Danish Quality Database for Colon Cancer Screening, National Cancer Registration and Analysis Service (NCRAS) – Public Health England (data provided under the Open Government Licence: https://doi.org/10.25503/wd5j-e989 ), Digestive Cancer Registry of Burgundy, Digestive tumors registry of Calvados, Cancer Registry of Doubs, Digestive tumors registry of Finistere, Cancer registry of Isere, National Cancer Registry Ireland, Piedmont Cancer Registry, Netherlands Cancer Registry (IKNL), Slovenian Cancer Registry, Basque Cancer Registry, Girona Cancer Registry, Murcia Cancer Registry and Tarragona Cancer Registry. The centers for cancer screening responsible for the colorectal cancer screening programs in Flanders (Centrum voor Kankeropsporing, CvKO), Wallonia (Centre Communautaire de Référence, CCR) and Brussels (Brussels Prevention, Bruprev) provided BCR with data on colorectal cancer detection mode within existing data flows and legal frameworks. For their tasks regarding colorectal cancer screening, CvKO, CCR, Bruprev and BCR receive funding from the respective regional authorities. Supplementary materials Supplementary material associated with this article can be found in the online version at doi: 10.1016/j.lanepe.2022.100458 .
|
[
"SUNG",
"ALLEMANI",
"ARAGHI",
"SCHREUDERS",
"CARDOSO",
"BRENNER",
"PANDE",
"LINDEBJERG",
"GILL",
"PARENTE",
"IDIGORASRUBIO",
"SPOLVERATO",
"IBANEZSANZ",
"TEPES",
"HEWITSON",
"BRENNER",
"ATKIN",
"HOLME",
"MILLER",
"SENORE",
"CARDOSO",
"MISSIAGLIA",
"HUYGHE",
"HOFFMEISTER"
] |
1eb340a0a1044b87baaf058ec220ad75_Evaluation of Achilles tendon rupture using 3-dimensional computed tomography_10.1016_j.asmart.2017.05.294.xml
|
Evaluation of Achilles tendon rupture using 3-dimensional computed tomography
|
[
"Yoshikawa, Masahiro",
"Nakasa, Tomoyuki",
"Sawa, Mikiya",
"Tsuyuguchi, Yusuke",
"Adachi, Nobuo"
] | null |
Introduction: Achilles tendon rupture is one of the most common ankle injuries, alongside rupture of the ankle ligaments. Achilles tendon rupture is diagnosed by various local findings and manual tests such as Thompson’s squeeze test. Furthermore, MRI and ultrasonography are useful for its diagnosis. Although several studies on MRI and ultrasonography for evaluating Achilles tendon rupture have been documented, computed tomography (CT) imaging has not been evaluated for this purpose. Combining technologic advances in CT with volume-rendering computer graphics can provide a 3-dimensional (3D) visualization of the full features of the soft tissue with more detailed information than conventional depiction. Recently, it has been reported that 3D CT imaging with volume rendering can be used for diagnosing several soft tissues, such as muscles, hand and wrist tendons, or the anterior talofibular ligament of the ankle. The purpose of this study was to prospectively determine whether 3D CT imaging could evaluate the status of Achilles tendon rupture and to compare it with MRI and the operative findings. Methods: From 2013 to September 2016, 6 patients whose preoperative 3D CT scan of the Achilles tendon was available were included in this study. They were routinely examined by MRI and 3D CT. The patients consisted of 6 men with an average age of 39.0 years (range, 23-75 years). Two patients had acute Achilles tendon rupture, 2 had re-ruptured Achilles tendon, and 2 had chronic Achilles tendon rupture with symptoms of pain, a giving-way sensation, and some functional disability of the ankle. All patients were treated surgically. The decision to proceed to surgical treatment was made according to the patient’s symptoms, physical examination, and imaging. MRI scans were performed on a 1.5-T whole-body scanner with a wraparound surface coil designed for the ankle. 
For all patients, proton density SE and T2-weighted SE images were collected. 3D CT images were obtained with a multidetector row CT scanner. The patient was placed in a supine position with both ankle joints in the neutral position. Then, 3D volume data sets of the ankle joint were obtained. The scanning parameters were as follows: a gantry rotation speed of 0.6 s/rotation, 1.25-mm collimation width × 16 detectors, CT pitch factor of 0.562, and field of view of 25–30 cm. The CT dose index volume was 7.67 mGy. Then, 2D images were reconstructed with a 12–25 cm field of view, 1.25-mm retrospective slice thickness, and 0.63-mm overlap. The total table motion was 20–30 cm, and finally, 200–400 slices were obtained. Images were rendered qualitatively with the volume-rendering technique using a commercially available workstation to produce the 3D images. The scanning time ranged from 40 to 60 s, and another 10 to 15 min was needed for postprocessing. The operative findings were compared with the MRI and 3D CT images to evaluate the usefulness of 3D CT for the treatment of the Achilles tendon. Results: In the acute case, the tendon appeared continuous at first glance on MRI, whereas the 3D CT images revealed the Achilles tendon rupture. At surgery, the Achilles tendon was ruptured, with a sparse hematoma in place of the tendon, matching the findings of the 3D CT images. In the re-rupture case, both the MRI and 3D CT images depicted the Achilles tendon rupture, which was confirmed at surgery. In the chronic case, the Achilles tendon seemed continuous on MRI, or the rupture appeared small; however, the rupture was depicted clearly by the 3D CT images, and the operative findings of a ruptured Achilles tendon matched the 3D CT images. Discussion: The Achilles tendon is the most commonly ruptured tendon in the ankle. 
After history taking and physical examination, radiographic imaging, including the stress view, is usually performed for accurate diagnosis, and more advanced imaging techniques, such as MRI, are available. MRI is a less invasive technique and is commonly used to evaluate Achilles tendon rupture. Qualitative analysis is possible because of the high-contrast imaging provided by MRI. However, the tendon’s 3-dimensional course makes it difficult for MRI to examine the full features of the Achilles tendon, and MRI does not provide detailed information on bony lesions. Furthermore, patients must remain in one position to minimize motion artifact, and patients with pacemakers or metal internal or external prostheses cannot be assessed. In recent years, volume-rendered multidetector helical CT has advanced remarkably, and the usefulness of this 3D CT technique has been demonstrated for evaluating soft tissues such as ligament, tendon, and tumor. The advantage of 3D CT is that it clarifies the relationship between the bony structure and the surrounding tissue. Nakasa et al demonstrated that 3D CT could evaluate the condition of anterior talofibular ligament remnants much better than MRI. Furthermore, 3D CT requires a shorter scanning time than MRI; in this study, the scanning time ranged from 40 to 60 seconds, and another 10 minutes was needed to reconstruct the 3D images. The disadvantage of 3D CT imaging is the exposure to ionizing radiation. However, the 16-detector row CT scanner used in this study delivered only 7.67 mGy per scan, making 3D CT imaging a less invasive technique. The present study demonstrated that 3D CT may be a useful diagnostic tool for the evaluation of Achilles tendon rupture, especially in cases of chronic Achilles tendon rupture. Keywords: Achilles tendon, 3D CT
|
[] |
761cfafa25784a3c913cfc28b046f84d_Predicting the abundance of metal resistance genes in subtropical estuaries using amplicon sequencin_10.1016_j.ecoenv.2022.113844.xml
|
Predicting the abundance of metal resistance genes in subtropical estuaries using amplicon sequencing and machine learning
|
[
"Zhou, Lei",
"Zhao, Zelong",
"Shao, Liyi",
"Fang, Shiyun",
"Li, Tongzhou",
"Gan, Lihong",
"Guo, Chuanbo"
] |
Heavy metals are a group of anthropogenic contaminants in estuary ecosystems. Bacteria in estuaries counteract the toxicity of highly concentrated metals through metal resistance genes (MRGs). Presently, metagenomic technology is widely used to study MRGs. However, an easier and less expensive method of acquiring MRG information is needed to deepen our understanding of the fate of MRGs. Thus, this study explores the feasibility of using a machine learning approach—namely, random forests (RF)—to predict MRG abundance based on the 16S rRNA amplicon sequenced datasets from subtropical estuaries in China. Our results showed that the total MRG abundance could be predicted by RF models using bacterial composition at different taxonomic levels. Among them, the relative abundance of bacterial phyla had the highest prediction accuracy (71.7 %). In addition, the RF models constructed by bacterial phyla predicted the abundance of six MRG types and nine MRG subtypes with substantial accuracy (R2 > 0.600). Five bacterial phyla (Firmicutes, Bacteroidetes, Patescibacteria, Armatimonadetes, and Nitrospirae) substantially determined the variations in MRG abundance. Our findings prove that RF models can predict MRG abundance in South China estuaries during the wet season by using the bacterial composition obtained by 16S rRNA amplicon sequencing.
|
1 Introduction Anthropogenic-derived heavy metals are significant environmental contaminants ( Baker-Austin et al., 2006 ). Although some heavy metals, such as Cu and Zn, are essential in trace amounts for the growth of organisms, they are toxic in excess ( Xiong et al., 2015 ). Moreover, heavy metals in polluted environments are not subject to degradation and can subsequently act as long-term selection pressures ( Stepanauskas et al., 2005 ). Some bacteria have evolved mechanisms, called metal resistance genes (MRGs), enabling them to cope with high concentrations of toxic metals ( Pal et al., 2014 ). More importantly, metal-resistant bacteria are crucial for controlling the bioavailability of metals and participating in their cycling ( Oyetibo et al., 2019 ). Efforts have been made to comprehensively quantify MRGs in microbial communities using metagenomic sequencing. For example, Zhou et al. (2022) obtained and compared the distribution of MRGs in subtropical estuaries of China using metagenomic data. However, the relatively high cost and the need for a professional data analyst limit its application in routine environmental monitoring ( Hendriksen et al., 2019 ). Therefore, it is necessary to evaluate the magnitude of MRGs with more straightforward and cheaper methods of determining heavy metal contamination. With the development of data analysis technology in recent years, many statistical methods are available to quantify complex relationships. Machine learning (ML) approaches have been particularly promising ( Torija and Ruiz, 2015 ). Among the ML models, random forest (RF) analysis is an important machine learning method with which decision trees are generated based on the optimal explanatory variables to predict the response variable ( Breiman, 2001 ). RF models classify samples into different groups and quantitatively predict continuous variables via regression analysis ( Smith et al., 2015 ). 
Indeed, some studies use RF analysis to predict environmental health variables based on microbiome data. For example, Sun and coworkers successfully predicted the abundance of antibiotic resistance genes using RF models constructed from bacterial taxa data ( Sun et al., 2021 ). Using RF analysis, Wilhelm et al. (2022) predicted soil health metrics from microbiome data with about 80 % accuracy. Current studies using ML models to predict functional gene levels were mainly based on microbiome data obtained by metagenomics ( Rahman et al., 2018; Sun et al., 2021 ). However, if metagenomic sequencing is performed, the relevant information can be obtained directly through functional gene annotation without using ML models for prediction. Compared to metagenomics, amplicon sequencing based on the 16S rRNA gene is currently the most frequently used method for studying bacterial communities due to its low cost of sequencing and data analysis ( Bokulich et al., 2018 ). Therefore, applying an ML framework to the less tedious and cheaper amplicon sequencing data to predict functional gene levels would be highly valuable for environmental quality monitoring. Estuarine ecosystems receive contaminants from surrounding rivers and streams, resulting in the accumulation of heavy metals emitted from domestic settlements, hospitals, livestock facilities, and aquaculture ponds ( Islam et al., 2018; Rubalingeswari et al., 2021 ). This occurrence makes the estuarine ecosystem a reservoir and exchange hotspot for MRGs ( Lu and Liu, 2021 ). However, there are no reports about the prediction of MRGs by ML methods. In this study, we sampled waters from 30 subtropical estuaries in Guangdong and Guangxi provinces in China. A metagenomics-based approach identified the abundances of MRGs, and the bacterial communities were obtained by amplicon sequencing of the 16S rRNA gene. 
Then, RF models were trained to predict MRG abundance with explanatory variables of amplicon datasets. The findings from this study can demonstrate the feasibility of using RF models on bacterial community composition to predict the MRGs in complex environments. 2 Material and methods 2.1 Datasets This study used the metagenomic and amplicon sequencing data of 90 water samples from 30 subtropical estuaries in Guangdong and Guangxi provinces in China for RF model training. Our previous study published this dataset ( Zhou et al., 2022 ), wherein the information about the sample collection and sequencing process is also available. The amplicon and metagenomic sequencing data were accessed from the National Center for Biotechnology Information Sequence Read Archive (SRA) database under BioProject numbers PRJNA730095 and PRJNA730330, respectively. 2.2 Bioinformatics analysis For MRG annotation, we used an analysis pipeline, ARGs-OAP v2.0 ( Yin et al., 2018 ), with BacMet2 ( Pal et al., 2014 ) as the reference database. Briefly, the first step was pre-screening the potential MRG sequences and the 16S rRNA gene from short reads in metagenomic datasets using the UBLAST algorithm ( Yang et al., 2019 ). Then, MRGs were annotated and classified using BLASTX, with an e-value at 1e-7. A sequence was considered MRG-like when its best hit had a similarity of ≥ 80 % to reference sequences with a query coverage of ≥ 25 amino acids ( Kristiansson et al., 2011 ). Based on the annotation results, the normalized abundance of MRGs (copies of MRG per 16S rRNA gene) in each sample was obtained. To obtain the bacterial community composition, the amplicon sequenced reads of the 16S rRNA gene were qualified, assembled, and clustered into amplicon sequence variants (ASVs) using the Divisive Amplicon Denoising Algorithm 2 (DADA2) plugin in the Quantitative Insights Into Microbial Ecology 2 (QIIME2) program ( Bokulich et al., 2018 ). 
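The "copies of MRG per 16S rRNA gene" normalisation can be illustrated with a simplified sketch of an ARGs-OAP-style calculation. All gene names, counts, and lengths below are hypothetical placeholders, not values from the study, and the 16S length default is an assumed average:

```python
def gene_copies(mapped_reads, read_len, ref_len):
    """Approximate gene copy number: total mapped bases / reference length."""
    return mapped_reads * read_len / ref_len

def normalize_mrgs(mrg_hits, ssu_reads, read_len, ssu_len=1550):
    """Normalise each MRG to copies per 16S rRNA gene copy (simplified sketch).

    mrg_hits : {gene name: (mapped read count, reference length in bp)}
    ssu_reads: number of reads matching the 16S rRNA gene
    ssu_len  : assumed average 16S rRNA gene length in bp (placeholder)
    """
    ssu_copies = gene_copies(ssu_reads, read_len, ssu_len)
    return {gene: gene_copies(n, read_len, ref_len) / ssu_copies
            for gene, (n, ref_len) in mrg_hits.items()}

# Hypothetical sample: 100 reads hit a 2000-bp copper-resistance gene,
# 500 reads hit the 16S rRNA gene; 150-bp reads
print(normalize_mrgs({"copA": (100, 2000)}, ssu_reads=500, read_len=150, ssu_len=1500))
```

Dividing by 16S copies makes samples with different sequencing depths comparable, which is why the paper reports MRG abundance in these units.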
Representative sequences of each 16S ASV were selected by the default method and assigned to a taxonomy based on the SILVA 132 database ( Yilmaz et al., 2014 ). Singletons (ASVs observed only once) were discarded to improve the efficiency of the data analysis. Finally, a bacterial ASV abundance table was constructed and rarefied to a standard number of tags according to the sample with the fewest tags. 2.3 MRGs prediction First, RF models were developed using the total abundance of MRGs as response variables with bacterial abundance at different taxonomic levels (phylum, class, order, family, and genus, respectively) as explanatory variables. Then, the bacterial abundance at the taxonomic level with the best performance was used as the explanatory variable to develop the RF models with each MRG-type abundance (genes resistant to a certain metal) or subtype (genes with a different name) levels as the response variables. Finally, the RF models were validated on some independent datasets. Before constructing the RF model, the bacterial abundance data were processed by removing the bacteria with detection rates < 60 %. For RF model construction, the data were randomly split into 70 % (training) and 30 % (testing) subsets. Then, the RF model was generated using the training dataset and evaluated on the test dataset. This process was repeated 1000 times, and the results were finally integrated to obtain the final RF model. The RF models were computed using the "randomForest" package in R v4.0.2 ( Liaw and Wiener, 2002 ). 2.4 Statistical analysis and data visualization All statistical analyses and data visualization were accomplished with R v4.0.2 ( R Core Team, 2020 ). Linear regressions were used to determine the consistency of predicted MRG abundances and the actual values using the "lm" function. In general, high values of R 2 indicate a good fit of the RF models, and R 2 values of 0.6 or higher were defined as "strong" accuracy. 
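The repeated 70 %/30 % split-and-evaluate protocol described above can be sketched as follows. Because the study fit its models with R's randomForest package, a simple least-squares line (`ols_fit`, a hypothetical stand-in) replaces the RF fit here; the point of the sketch is the repeated random split and the averaged test-set R², not the model itself:

```python
import random

def r_squared(actual, predicted):
    """Coefficient of determination between observed and predicted values."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

def ols_fit(xs, ys):
    """Stand-in model: least-squares line on a single explanatory variable."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: intercept + slope * x

def repeated_split_eval(xs, ys, fit, n_repeats=1000, train_frac=0.7, seed=0):
    """Average test-set R^2 over repeated random train/test splits."""
    rng = random.Random(seed)
    idx = list(range(len(ys)))
    scores = []
    for _ in range(n_repeats):
        rng.shuffle(idx)
        cut = int(train_frac * len(idx))
        train, test = idx[:cut], idx[cut:]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        preds = [model(xs[i]) for i in test]
        scores.append(r_squared([ys[i] for i in test], preds))
    return sum(scores) / len(scores)
```

In the study, `fit` would be an RF regression on the phylum abundance table (one column per phylum), and the 1000 per-split results are integrated into the final reported accuracy.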
In addition, the top 10 bacteria that contributed most to the MRG predictions in each RF model were extracted, and the results were visualized using the "ggplot2" package ( Wickham, 2016 ). 3 Results 3.1 Prediction of total MRG abundances RF models based on bacterial abundance from the phylum to the genus level were established to predict the total MRG abundance in the water samples of subtropical estuaries. Based on the regression of predicted against actual values, the accuracies of the RF models ranged from 66.9 % to 71.7 % ( Fig. 1 a). We observed no apparent difference in the accuracy of RF models based on different bacterial taxonomic levels, with the model based on bacterial phyla having the highest accuracy. The bacterial phyla contributing most to the prediction of total MRG abundance followed the trend: Firmicutes > Bacteroidetes > Patescibacteria > Armatimonadetes > Nitrospirae ( Fig. 1 b). 3.2 Prediction of MRG type abundances According to the results of the RF models for predicting total MRG abundance, bacterial abundance at the phylum level was used for further analysis. The abundances of 18 MRG types were predicted by RF models based on the bacterial phyla data. Among them, six MRG types showed > 60 % prediction accuracy: Ag (62.9 %), As (68.3 %), Cr (64.5 %), Cu (74.5 %), Fe (64.8 %), and multi-metal (63.6 %) resistance genes ( Fig. 2 ). Patescibacteria contributed the most to the Ag, As, and Fe resistance genes ( Fig. 3 a, b, and e). Armatimonadetes contributed the most to the Cr and Cu resistance genes ( Fig. 3 c and d), and also contributed substantially to the As resistance genes ( Fig. 3 b). Firmicutes contributed the most to the multi-metal resistance genes ( Fig. 3 f), while Nitrospirae contributed substantially to the Ag and multi-metal resistance genes ( Fig. 3 a and f). 3.3 Prediction of MRG subtype abundances RF models were further constructed using bacterial phyla to predict the abundance of MRG subtypes. The prediction accuracy for nine MRG subtypes reached 60 % or more. 
They were arsA , czcA , czcC , and czrA of the multi-metal resistance genes and cop-unamed , copC , and copF of the Cu resistance genes; ChrA1 belonged to the Cr resistance genes, and silP belonged to the Ag resistance genes ( Fig. 4 ). Among them, czcA had the highest prediction accuracy, reaching 83.1 %. Armatimonadetes and Bacteroidetes were the top two bacterial phyla contributing the most to predicting the abundance of arsA , ChrA1 , cop-unamed , copC , czcA , and czcC ( Fig. 5 a–d, f, and g). Bacteroidetes also contributed the most to predicting the abundance of copF and czrA ( Fig. 5 e and h). The bacterial phylum that contributed the most to predicting the silP gene was Nitrospirae ( Fig. 5 i), which also contributed relatively strongly to the copF gene ( Fig. 5 e). In addition, Patescibacteria best predicted the abundance of the czrA and silP genes ( Fig. 5 h and i). 3.4 Verification of the RF models Three independent datasets were used to verify the applicability of the constructed RF models for predicting MRG abundance: water samples from the Pearl River (PR) estuary, water samples from the Liao He (LH) estuary, and sediments from Beibu Bay. These datasets were used to validate the RF models for predicting the abundance of total MRGs and of the best-performing MRG type and subtype ( Fig. 6 ). The results showed that the prediction accuracy of the RF models was high for water samples from the PR Estuary, which is located in the region where the training data were obtained (R 2 = 0.933, 0.949, and 0.193 for total MRGs, the Cu resistance gene, and czcA , respectively; linear regression, p < 0.05). In contrast, the prediction performance of the RF models was limited for water samples from the LH Estuary, located in northeast China (R 2 = 0.081, 0.193, and 0.068 for total MRGs, the Cu resistance gene, and czcA , respectively; linear regression, p > 0.05). Moreover, the RF models were not applicable to sediments from Beibu Bay, despite it being located in the region where the training data were obtained. 
4 Discussion RF analyses have been used to predict environmental and health factors based on bacterial communities in diverse natural and anthropogenic ecosystems ( Roguet et al., 2018; Hermans et al., 2020; Zhao et al., 2022b ). RF models have also been applied to predict the origin and quality of aquaculture species based on their gut microbiota ( Zhao et al., 2022a ). Moreover, the abundance of functional genes similar to MRGs, such as antibiotic resistance genes, has been successfully predicted via RF models constructed from bacterial community data ( Sun et al., 2021 ) or from socioeconomic, health, and environmental factors ( Hendriksen et al., 2019 ). In this study, we linked the abundance of MRGs in water from subtropical estuaries in China with the relative abundance of bacteria at different taxonomic levels. One of the major challenges in RF-based modeling of microbiome data is a tendency to overfit, because the number of features in the model (taxonomic units) typically far exceeds the number of samples ( Wilhelm et al., 2022 ). Aggregating features into broader classes is one way to address this challenge ( Zhou and Gallins, 2019 ). In contrast to previous studies that used bacterial genera as the explanatory variables ( Sun et al., 2021; Zhao et al., 2022a ), we found that MRG abundance was predicted with higher accuracy using phylum-based RF models. In addition to predicting the response variables, RF models can rank the relative importance of individual explanatory variables ( Svetnik et al., 2003 ). According to the importance scores, the most crucial phyla among the explanatory variables were Firmicutes, Bacteroidetes, Patescibacteria, Armatimonadetes, and Nitrospirae, all previously reported as potential MRG hosts ( Qamar et al., 2017; Chen et al., 2020; Pan et al., 2020; Tian et al., 2020; Yan et al., 2020 ). 
The bacterial phyla most important for several metal resistance genes, i.e., Bacteroidetes and Armatimonadetes, occupy the niche of scavengers of diverse carbohydrates ( Lee, 2015; Larsbrink and McKee, 2020 ). In addition, Bacteroidetes genomes appear highly plastic and are frequently reorganized through genetic rearrangements, gene duplications, and horizontal gene transfers, a feature that could have driven their adaptation to distinct ecological niches ( Francois et al., 2011 ). Another bacterial phylum important for MRG prediction, Nitrospirae, has recently received significant attention due to its contributions to nitrification ( Liu et al., 2020 ). Nitrospirae have been found in various natural ecosystems, contributing significantly to ammonia and nitrite oxidation at regional to global scales ( Shi et al., 2018; Yu et al., 2018 ). These results suggest a potential relationship between metal-resistant bacteria and carbon and nitrogen cycling. Horizontal gene transfer (HGT), a process mediated by mobile genetic elements (such as plasmids, integrons, and transposons), has been widely investigated due to its role in the acquisition and spread of MRGs ( Gillings, 2013 ). Multiple studies have reported the contribution of HGT to the spread of MRGs among different bacteria ( Huang et al., 2021; Mazhar et al., 2021 ). HGT of MRGs is one of the potential factors affecting the accuracy and reliability of predicting MRG abundance from bacterial composition. However, it likely has only a negligible impact on the RF models proposed in this study. On the one hand, the actual frequency of HGT in natural microbial communities is low, and a considerable part of it may not involve MRGs ( Paquola et al., 2018 ). Although HGT is the direct route by which bacteria acquire new genes, changes in the abundance of functional genes such as MRGs in a microbial community are driven primarily by the proliferation of their host bacteria ( Luo et al., 2017 ). 
On the other hand, the vast majority of HGT occurs among strains within the same bacterial phylum, and the rate of HGT across bacterial phyla is very low ( Abby et al., 2012 ). In this study, we found that bacterial phyla worked best for predicting MRG abundance; therefore, our predictive models avoid the vast majority of HGT effects. RF models are susceptible to bias in regression when outliers are present in the datasets, i.e., large and small values may be underestimated and overestimated, respectively ( Zhang and Lu, 2012 ). In this study, we used a sediment dataset from Beibu Bay in China to test the efficiency of the proposed RF models. However, due to the large difference in bacterial composition between water and sediment samples, many of the bacterial phyla used in the RF models were not detected in the sediment data. Normally, this situation should result in a very low predicted value, but an RF model instead returns values close to the minimum of its training data. Thus, an RF model can produce relatively accurate predictions only when the test data lie within the range of the training data. In addition, the performance of RF models depends on the size of the dataset and the ensemble of trees ( Gupta et al., 2021 ). The validation results based on independent test datasets showed ideal or acceptable predictions for estuary waters from the region where the training dataset was derived, whereas the predictions for estuaries in other regions, or for sediments, were not reliable. To build a model applicable to samples from various ecological niches or from regions separated by large geographic distances, a very large dataset with wide coverage would be required. However, due to the large differences in microbial communities across environmental types and geographic locations, an ideal result is unlikely to be obtainable on a global scale; an acceptable prediction model is more likely to be established separately for each environmental type and geographic region. 
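The fallback behaviour noted above (an RF regressor's predictions stay within the range of the targets seen during training rather than extrapolating) can be demonstrated in a few lines. scikit-learn is assumed here as a stand-in for R's randomForest; the toy data are illustrative only.

```python
# Demonstrates that random-forest predictions cannot leave the range of the
# training targets: an input far outside the training inputs is mapped to a
# value near the training maximum (or minimum), not an extrapolated value.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X_train = np.arange(0.0, 10.0, 0.5).reshape(-1, 1)  # inputs span [0, 9.5]
y_train = 2.0 * X_train.ravel()                     # targets span [0, 19]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

inside = model.predict(np.array([[5.0]]))[0]     # interpolation: close to 10
outside = model.predict(np.array([[100.0]]))[0]  # extrapolation: stuck near 19
```

This is exactly why the Beibu Bay sediment samples, whose phylum composition falls outside the range of the estuarine-water training data, cannot be predicted reliably by these models.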
Because the training data in the current study were obtained from the estuaries of Guangdong and Guangxi in China during the wet season, the RF models can predict the abundance of MRGs in the estuaries of South China during the wet season. 5 Conclusions In the present study, RF models estimated the abundance of MRGs using bacterial composition data. The prediction accuracy of RF models based on bacterial abundance at different taxonomic levels did not differ appreciably. The abundance of bacterial phyla successfully predicted the total abundance of MRGs and several major MRG types and subtypes. Among the explanatory variables, several bacterial phyla, including Firmicutes, Bacteroidetes, Patescibacteria, Armatimonadetes, and Nitrospirae, were essential for explaining the variation in MRG abundances. As data from high-throughput sequencing, especially 16S rRNA amplicon sequencing, become more available, the amount of bacterial taxon information will likely expand quickly. The findings from the present study can help estimate MRG abundance in the estuary waters of South China during the wet season, based on bacterial composition derived from 16S rRNA gene sequencing. However, more studies (i.e., more data) from estuaries in other parts of the world are needed to corroborate the accuracy of the RF models and expand their scope of application. CRediT authorship contribution statement Lei Zhou: Conceptualization, Methodology, Writing – original draft, Funding acquisition. Zelong Zhao: Conceptualization, Methodology, Writing – original draft. Chuanbo Guo: Conceptualization, Writing – review & editing, Funding acquisition. Liyi Shao: Investigation. Shiyun Fang: Investigation. Tongzhou Li: Investigation. Lihong Gan: Investigation. All authors have read and agreed to the final version of the manuscript. 
Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements This study was supported by the National Key Research and Development Program of China ( 2020YFD0900504 ), Provincial Marine six Industries Special project for Promoting High-quality Economic Development (Marine economic development) funded by Department of Natural Resources of Guangdong Province ( GDNRC[2022]50 ), Guangdong Forestry Science and Technology Innovation Project ( 2021KJCX012 , 2022KJCX019 ).
|
[
"ABBY",
"BAKERAUSTIN",
"BOKULICH",
"BREIMAN",
"CHEN",
"FRANCOIS",
"GILLINGS",
"GUPTA",
"HENDRIKSEN",
"HERMANS",
"HUANG",
"ISLAM",
"KRISTIANSSON",
"LARSBRINK",
"LEE",
"LIAW",
"LIU",
"LUO",
"LU",
"MAZHAR",
"OYETIBO",
"PAL",
"PAN",
"PAQUOLA",
"QAMAR",
"RAHMAN",
"RCORETEAM",
"ROGUET",
"RUBALINGESWARI",
"SHI",
"SMITH",
"STEPANAUSKAS",
"SUN",
"SVETNIK",
"TIAN",
"TORIJA",
"WICKHAM",
"WILHELM",
"XIONG",
"YAN",
"YANG",
"YILMAZ",
"YIN",
"YU",
"ZHANG",
"ZHAO",
"ZHAO",
"ZHOU",
"ZHOU"
] |
3477dc18b99c4d808f1ff611889cb8eb_A primary appendiceal Burkitt lymphoma mimicking appendiceal abscess_10.1016_j.epsc.2022.102412.xml
|
A primary appendiceal Burkitt lymphoma mimicking appendiceal abscess
|
[
"Watanabe, Aya",
"Okata, Yuichi",
"Nakao, Makoto",
"Yamamoto, Nobuyuki",
"Yasufuku, Masao",
"Bitoh, Yuko"
] |
Interval appendectomy (IA) following nonoperative management with broad-spectrum antibiotics is one of the beneficial treatment options for complicated appendicitis with peri-appendiceal abscess in children. However, during the interval, no protocol has been established for follow-up blood examinations and imaging studies (US/CT/MRI), nor for the treatment of recurrent cases. Herein, we present a child who was diagnosed with primary appendiceal Burkitt lymphoma during an IA treatment course, and we discuss an unusual pitfall of the IA policy.
|
1 Case report Interval appendectomy (IA) following nonoperative management with broad-spectrum antibiotics for complicated appendicitis with peri-appendiceal abscess is one of the beneficial treatment options in children, given the high prevalence of inflammation in the appendix upon interval removal, and IA is recommended at least 12 weeks after the initial presentation [ 1 ]. However, during the interval, no protocol has been established for follow-up blood examinations and imaging studies (US/CT/MRI), nor for the treatment of recurrent cases. Herein, we present a child who was diagnosed with primary appendiceal Burkitt lymphoma during an IA treatment course, and we discuss an unusual pitfall of the IA policy. A 9-year-old girl was referred to the previous hospital with a 2-day history of right abdominal pain. Blood examination showed elevated CRP and WBC count (8.92 mg/dl and 11,360/μl, respectively), and abdominal ultrasonography and CT revealed a mass-forming appendix with an overall diameter of 27 mm ( Fig. 1 A); non-operative treatment with PIPC/TAZ administration was started. Both the abdominal pain and the CRP elevation improved, she was discharged on the 11th day, and IA was scheduled 12 weeks later. However, four weeks after discharge (Day 36), the abdominal pain recurred with CRP elevation (5.89 mg/dl) but without WBC count elevation (5891/μl), and US demonstrated that the overall diameter of the appendiceal mass had enlarged to 33 mm ( Fig. 1 B). She was readmitted to the previous hospital, and non-operative management with TAZ/PIPC administration was restarted to continue the IA policy. TAZ/PIPC administration was effective again; she was discharged on the 11th day, and elective IA was rescheduled for 12 weeks after her second discharge. At the regular outpatient examination four weeks after her second discharge (Day 85), US revealed that the appendiceal mass had grown to a maximum diameter of 48 mm ( Fig. 
1 C), suggesting a neoplastic lesion, although she had been asymptomatic. Blood examination showed CRP elevation (8.83 mg/dl) without WBC elevation (3865/μl), and the tumor markers IL-2 receptor (1076 U/ml) and NSE (30.9 ng/ml) were elevated, whereas CEA (0.7 ng/ml) and CA19-9 (2.5 U/ml) were normal. CT and MRI of the abdomen and pelvis revealed a heterogeneous mass of approximately 60 × 90 mm ( Fig. 1 D and E). From these findings, a primary appendiceal lymphoma was suspected, and laparotomy was performed. Intraoperatively, a whitish mass arising from the root of the appendix was observed and considered a primary appendiceal tumor. The ovaries, fallopian tubes, and uterus were intact, but the tumor was firmly adherent to the sigmoid colon, upper rectum, and retroperitoneum. As the root of the appendix was intact, subtotal resection of the appendiceal tumor together with resection of the infiltrated rectal wall was performed. Histopathological examination showed that the wall of the appendix was markedly thickened, while the wall on the root side was preserved. Immunostaining revealed CD3-negative, CD20-positive, CD10-positive, bcl2-negative, and Ki67-positive cells in 99% of the specimen, and a diagnosis of Burkitt lymphoma was made. Postoperatively, the patient was staged as IIIb, and we performed 6 courses of chemotherapy based on the Japanese Pediatric Leukemia/Lymphoma Study Group (JPLSG) B-NHL03 protocol, group 3. Two years after the treatment, the patient is asymptomatic with no recurrence. 2 Discussion In this case, the patient presented clinical findings compatible with mass-forming appendicitis, which indicated an IA policy without raising suspicion of another diagnosis. Because antibiotic treatment was effective enough that the clinical symptoms and CRP elevation improved promptly during both hospitalizations, and because the overall diameter of the appendiceal mass initially did not change significantly, the diagnosis of appendiceal lymphoma was delayed. 
Approximately 1% of appendectomies have an incidental finding of an appendiceal neoplasm, and manifestation of Burkitt's lymphoma as appendicitis or a peri-appendiceal abscess has been reported [ 2 ]. Primary lymphoma of the appendix is characterized by diffuse swelling of the appendix and circumferential thickening of its wall while its morphology is maintained. Although there are no classical imaging features of appendiceal lymphoma, enlargement of the appendix beyond 15 mm in diameter on CT should be viewed with suspicion, and a diameter above 25 mm should be even more concerning [ 3 ]. In addition, an abscess cavity generally shrinks with antibiotic treatment; therefore, appendiceal lymphoma might have been suspected earlier in our case. We hope that our case highlights the importance of close observation of an appendiceal mass with US, focusing on the overall diameter, and of suspecting a neoplasm when the overall diameter of the mass does not decrease even though the other clinical symptoms improve during the IA policy. 3 Conclusion During an IA policy, close observation of the appendiceal mass is important, with suspicion of a neoplasm if the overall diameter of the mass does not decrease. Disclosure The authors declare no conflict of interest. Author contribution A.W., Y.O., M.N. and M.Y. conceptualized and designed the study, drafted the initial manuscript, and approved the final manuscript as submitted. N.Y. reviewed the manuscript and approved the final manuscript as submitted. All authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work. Financial disclosures The authors report no financial interests, relationships or affiliations relevant to the subject of the manuscript. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
[
"FARR",
"MIMERY",
"KHANNA"
] |
ffe66c0213934e9bb6419a0129869202_A heterozygous TTN c 79684CT mutant human induced pluripotent stem cell line ZZUNEUi023-A generated _10.1016_j.scr.2021.102614.xml
|
A heterozygous TTN (c. 79,684C>T) mutant human induced pluripotent stem cell line (ZZUNEUi023-A) generated from a Kazakh patient with dilated cardiomyopathy
|
[
"Li, Xiaowei",
"Wei, Wu",
"Li, Xiaoying",
"Qi, Ling",
"Lu, Shan",
"Wei, Hua",
"Liu, Yangyang",
"Dong, Jianzeng",
"Zhang, Chunyang",
"Lin, Tao"
] |
Dilated cardiomyopathy (DCM) is a nonischaemic heart muscle disease with structural and functional myocardial abnormalities. TTN truncating mutations are a common cause of DCM, occurring in ∼25% of familial cases of DCM and in 18% of sporadic cases. In this study, we generated a human induced pluripotent stem cell line ZZUNEUi023-A from peripheral blood mononuclear cells of a Kazakh DCM patient with the p. Arg26562Ter (c. 79684C>T) mutation in TTN using non-integrative Sendai virus. This cell line expressed pluripotency markers, showed normal male karyotype and could differentiate into all three germ layers in vitro.
|
Resource Table: Unique stem cell identifier ZZUNEUi023-A Alternative name(s) of stem cell line Not applicable Institution Hami Central Hospital, Hami, 839000, China Contact information of distributor Tao Lin, LinTaohmzzyy@163.com Type of cell line iPSC Origin Human Additional origin info Age: 43 years old Sex: male Ethnicity: Kazakh Cell source Peripheral blood mononuclear cells (PBMCs) Clonality Clonal Method of reprogramming Sendai virus. OCT4, SOX2, cMYC, KLF4 Genetic Modification Yes Type of Genetic Modification Spontaneous mutation Evidence of the reprogramming transgene loss (including genomic copy if applicable) RT-PCR Associated disease Dilated cardiomyopathy Gene/locus Gene: TTN Locus: 2p31.2 Mutation: heterozygote c.79684C>T (p.Arg26562Ter) Data archived/stock date 07/2021 Cell line repository/bank hPSCreg, https://hpscreg.eu/cell-line/ZZUNEUi023-A Ethical approval Ethics Committee of the First Affiliated Hospital of Zhengzhou University (2018-KY-38) 1 Resource utility Mutations in the TTN gene are the most common cause of hereditary dilated cardiomyopathy ( Schultheiss et al., 2019 ). However, the exact mechanism of DCM caused by TTN mutations is still unclear. This cell line can be differentiated into cardiomyocytes in vitro and serve as a cellular disease model for understanding DCM pathogenesis. 2 Resource details The giant protein titin, encoded by TTN , is a scaffolding filament, signaling platform, and provider of passive tension and elasticity in cardiomyocytes. The product of TTN is divided into two regions, an N-terminal I-band and a C-terminal A-band. Truncating mutations in TTN (TTNtv) have been identified in 20–25% of human patients with adult-onset DCM. TTNtv in DCM cases are most abundant in A-band titin. In this study, we generated an iPSC line from a DCM patient carrying a heterozygous mutation (c. 79684C>T) in the TTN gene. 
This sequence change results in a premature translational stop signal in the last exon of the TTN mRNA at codon 26,562 (p.Arg26562*). While this is not anticipated to result in nonsense-mediated decay, it is expected to disrupt the A-band of the TTN protein. This variant has been reported in the literature in individuals affected with dilated cardiomyopathy ( Ceyhan-Birsoy et al., 2013; Dalin et al., 2017; Roberts et al., 2015; Savarese et al., 2014 ). Truncating variants in the A-band of TTN are significantly overrepresented in patients with DCM. For these reasons, this variant has been classified as likely pathogenic. The generated iPSC line, ZZUNEUi023-A, has a typical human embryonic stem cell-like morphology, with a colony appearance comprised of tightly packed cells and a high nuclear/cytoplasmic ratio ( Fig. 1 A). Sanger sequencing confirmed that ZZUNEUi023-A harbors the heterozygous mutation (c. 79684C>T) in the TTN gene ( Fig. 1 B). Immunofluorescent staining showed that ZZUNEUi023-A expressed the pluripotency-related markers SSEA4, OCT4 and NANOG ( Fig. 1 C). More than 97% of ZZUNEUi023-A cells were SSEA4-positive as assessed by flow cytometry ( Fig. 1 D). In addition, the RT-PCR result for the Sendai virus genome in ZZUNEUi023-A at passage 13 was negative ( Fig. 1 E). The pluripotent state of ZZUNEUi023-A was further assessed by differentiation of the iPSCs into ectodermal, endodermal and mesodermal germ layers in vitro. The differentiated cells were positive for β3 tubulin/NeuN (ectoderm), α-SMA/NKX2.5 (mesoderm) and AFP/FOXA2 (endoderm) as assessed by immunofluorescence staining ( Fig. 1 F). In addition, ZZUNEUi023-A had a normal male karyotype (46, XY) without chromosomal aberrations ( Fig. 1 G) and was free of mycoplasma contamination as assessed by PCR ( Fig. 1 H). The generated iPSC line ZZUNEUi023-A has the same DNA profile as its parental PBMCs, as confirmed by short tandem repeat (STR) analysis (available with the authors). 
Taken together, ZZUNEUi023-A is pluripotent and can serve as a disease model to study the pathological mechanism of DCM caused by mutation in TTN. The characterization of ZZUNEUi023-A is summarized in Table 1 . 3 Materials and methods 3.1 PBMCs isolation and reprogramming Peripheral blood mononuclear cells (PBMCs) were isolated from the DCM patient using Ficoll (Sigma) density centrifugation. After activation in StemPro-34 SFM medium (Life Technologies) containing 100 ng/ml FLT3 (Sigma), 100 ng/ml SCF (Sigma), 20 ng/ml IL3 (Sigma) and 20 ng/ml IL6 (Sigma) for 4 days, 5 × 10 5 PBMCs were transduced with the CytoTune™-iPS 2.0 Sendai Reprogramming Kit (Invitrogen), consisting of hc-MYC, KOS, and hKLF4 at MOIs of 5, 5 and 3, respectively. After 24 h of transduction, Sendai virus was removed by centrifugation. At day 3 post-transduction, the transfected PBMCs were seeded onto 10 cm culture dishes coated with 150 µg/cm 2 Matrigel (Corning) in mTeSR1 medium (STEMCELL Technologies) containing 10 μM of the Rock kinase inhibitor Y-27632 (STEMCELL Technologies), with daily medium changes until iPSC colonies were manually picked. 3.2 Cell culture iPSCs were cultured and expanded in mTeSR1 medium on 150 µg/cm 2 Matrigel-coated plates at 37 °C, 5% CO 2 and 20% O 2 , with daily medium changes. At around 70%–80% confluency, cells were digested using 0.5 mM EDTA in PBS without MgCl 2 or CaCl 2 (HyClone) and were passaged at a ratio of 1:4 every 3 days with 10 μM Y-27632. 3.3 Sequencing Genomic DNA of ZZUNEUi023-A was extracted at passage 10 using the TIANamp Genomic DNA kit (TIANGEN) according to the instruction manual. PCR targeting TTN was performed using 2 × Taq Easy-Load TM PCR Master Mix (Beyotime) following the protocol: denaturation at 95 °C for 30 s, annealing at 58 °C for 50 s and extension at 72 °C for 30 s, for 32 cycles (Mastercycler Nexus Gradient Thermal Cycler, Eppendorf). The product size is 772 bp. 
Sanger sequencing was done by Sangon Biotech (Shanghai, China). Primer sequences are listed in Table 2 . 3.4 Immunofluorescent staining ZZUNEUi023-A cells at passage 15 were fixed with 4% paraformaldehyde (Sigma) for 10 min at room temperature. After permeabilization with 0.5% Triton X-100 (Sigma) and blocking with 3% BSA for 30 min at room temperature, cells were incubated with primary antibodies (SSEA4, OCT4 and NANOG) overnight at 4 °C. The cells were then washed thrice with PBS and incubated with secondary antibodies for 1 h at room temperature. After washing thrice with PBS, 0.2 μM DAPI (Invitrogen) was used for nuclear counterstaining for 5 min. Images were captured on an Axio Observer fluorescence microscope (ZEISS) using ZEN 3.0 software. Antibodies are listed in Table 2 . 3.5 Flow cytometry After digestion into single cells using ACCUTASE TM (STEMCELL Technologies), ZZUNEUi023-A cells at passage 16 were resuspended in PBS containing 5% BSA for 30 min at room temperature. The cells were then incubated with mouse anti-human SSEA4 Alexa Fluor 488 (BD Biosciences) for 30 min at room temperature. After washing thrice in PBS, the cells were analyzed using a FACSAria TM Cell Sorter (BD Biosciences). Unstained cells served as a negative control, and the results were analyzed using FlowJo X software. 3.6 Detection of Sendai virus genome and transgenes 1 µg of total RNA extracted from ZZUNEUi023-A at passage 15 was reverse transcribed into cDNA using the PrimeScript™ RT Master Mix (Takara) kit according to the instructions. RNA extracted from ZZUNEUi008 cells was used as a negative control, and RNA extracted from PBMCs transduced with Sendai virus for 24 h served as a positive control. Detection of Sendai virus in ZZUNEUi023-A was assessed by RT-PCR on a QuantStudio 3 (Thermo). Primers and product sizes are listed in Table 2 . 
3.7 Differentiation into three germ layers in vitro ZZUNEUi023-A at passage 15 was differentiated into the three germ layers in vitro using the STEMdiff TM Trilineage Differentiation Kit (STEMCELL Technologies) according to the instruction manual. The differentiated cells were incubated with anti-β3 tubulin/NeuN (ectoderm), anti-α-SMA/NKX2.5 (mesoderm) and anti-AFP/FOXA2 (endoderm) antibodies, respectively, to assess the expression of markers of each germ layer. Nuclei were stained with DAPI, and images were taken on an Axio Observer fluorescence microscope (ZEISS). Antibodies are listed in Table 2 . 3.8 Karyotyping The karyotyping of ZZUNEUi023-A at passage 18 was done at KINGMED CENTER FOR CLINICAL LABORATORY (Zhengzhou, China) using standard procedures. Briefly, cells were mitotically arrested using 0.2 μg/ml colchicine (Sigma) for 120 min at 37 °C. After digestion with Gentle Cell Dissociation Reagent, the cells were resuspended in 0.075 M KCl for 30 min at 37 °C and were fixed in methanol: acetic acid (3:1). The karyotype was analyzed in 20 single clones, and the band resolution was 400. Clonal chromosomal changes were described according to the International System for Human Cytogenetic Nomenclature (ISCN). 3.9 Mycoplasma test Mycoplasma contamination in ZZUNEUi023-A was assessed using a Mycoplasma detection kit (Cellapy), with cell supernatant used as the template for PCR following the protocol: 95 °C for 5 min; 25 cycles of 95 °C 30 s, 56 °C 30 s, 72 °C 30 s; and 72 °C for 5 min. The product size is 288 bp, and the PCR targets the 16S rRNA of Mycoplasma. 3.10 STR analysis STR analysis of the TTN-Arg26562Ter PBMCs and ZZUNEUi023-A was done by KINGMED CENTER FOR CLINICAL LABORATORY (Zhengzhou, China). Funding This work was financially supported by the Henan Province Medical Science and Technology Research Project ( 2018020067 ) and the National Natural Science Foundation of China ( 82000352 ). 
Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
[
"CEYHANBIRSOY",
"DALIN",
"ROBERTS",
"SAVARESE",
"SCHULTHEISS"
] |
794ed755515b43fc9141742c0a7a6993_Impact of mental arithmetic task on the electrical activity of the human brain_10.1016_j.neuri.2024.100162.xml
|
Impact of mental arithmetic task on the electrical activity of the human brain
|
[
"Azizi, Tahmineh"
] |
Cognitive neuroscience investigates the intricate connections between brain function and mental processing to understand the cognitive architecture. Exploring the human brain, the epicenter of cognitive activity, offers valuable insights into underlying cognitive processes. To monitor brain states corresponding to various mental activities, appropriate measurement tools are essential. Electroencephalogram (EEG) signals serve as a valuable tool for recording patterns and changes in electrical brain activities. Leveraging non-linear signal processing techniques holds promise for advancing our understanding of brain activities during cognitive tasks. In this study, we analyze the electrical activity of the brain using EEG data collected from subjects engaged in a cognitive workload task. Employing wavelet-based analysis, we capture changes in the structure of EEG signals before and during a mental arithmetic task. Additionally, spectral analysis is conducted to discern alterations in the distribution of spectral contents of EEG signals. Our findings underscore the efficacy of wavelet-based analysis and spectral entropy in quantifying the time-varying and non-stationary nature of EEG recordings, offering effective frameworks for distinguishing between different cognitive activities. Consequently, these methods afford deeper insights into the cognitive architecture by tracking changes in the distribution of the time-varying spectrum.
|
1 Introduction The human brain is the center of cognitive activity, and to uncover brain states during different mental activities we need to employ appropriate methods and measurements [1] . Cognitive neuroscience analyzes neurological and psychological brain activities using different imaging methods to obtain a big picture of behavioral or cognitive brain functions [2] . The ultimate goal of cognitive neuroscience is to capture the underlying mental structure of cognitive functions [3] . Although cognitive activities are considered essential indicators of human activity, it is usually not easy to determine which cognitive processes are involved while a particular task is being processed [4] . Progress in understanding the cognitive architecture provides a gateway to cognitive science and plays a key role in consciousness research [5–7] . To localize cognitive tasks, we may need to measure the changes that occur in electrical brain activity [6,42] . The potential changes recorded on the surface of the brain are the result of changes in electro-chemical brain activity [8] . These electrical signals are recorded as the electroencephalogram (EEG). The EEG has been used frequently to diagnose different neurological disorders [9–13] . Moreover, EEG pattern recognition methods have been used to monitor working memory load during computer-based tasks [14,15] . Electroencephalography has also demonstrated successful results as an indicator of cognitive workload [16] . Cognitive workload [17] , which reflects the level of mental resources used during a specific task, can be studied easily using wireless EEG devices. EEG-based measures of cognitive workload, such as spectral features of the EEG, have been used to classify mental arithmetic tasks at different levels: low workload, high workload, and relaxed [18] . 
The reason EEG measures are an efficient classifier of mental workload is their sensitivity to variations in task difficulty [19] . Previously, mental arithmetic task recognition was done using EEG power spectral density (PSD) analysis [20–22] . In [20] , it was shown that there is a relationship between the reduction in the lower beta band within the left parietal cortex and long-term memory for arithmetic facts. Another study reported that when subjects perform a mental arithmetic task, changes appear in the alpha and beta bands [21] . Autoregressive (AR) models have also been reported to be efficient in mental arithmetic task classification [22] . Spectral analysis can be used to represent and evaluate arithmetic-task recognition in EEG signals; however, such linear analyses may not fully characterize the nonlinear patterns in brain activities [23,24] , since they only analyze the EEG signals in the frequency domain without considering phase information [24] . Therefore, we need to apply nonlinear analysis to capture the dynamical patterns of EEG signals during different brain activities [24] . In [25] , nonlinear techniques were applied to characterize the dynamics of the neural networks of the brain; the authors investigated the abnormalities that occur repeatedly in Alzheimer's disease (AD) using nonlinear EEG measures. In [12] , different embedding methods, an entropy method, and the largest Lyapunov exponent were employed to explore the non-linearity in normal and abnormal EEG signals; in addition, they reported that non-linear EEG analysis may be applied to reveal dysfunction patterns in dementia and Parkinson's disease (PD). Wavelet entropy-based features displayed high accuracy in the precise estimation of working memory load [16] . The wavelet transform of EEG signals is an effective method for evaluating mental workload [19] . 
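As a concrete illustration of the PSD-based workload features mentioned above, here is a minimal NumPy sketch (synthetic "EEG" and illustrative band edges, not any cited study's pipeline) estimating alpha- and beta-band power from a periodogram:

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Integrate the one-sided periodogram of x over [f_lo, f_hi) Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))  # periodogram estimate
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])   # Riemann-sum integral

fs = 500.0                       # Hz, matching the recording rate used here
t = np.arange(0, 4, 1 / fs)
# Synthetic trace: strong 10 Hz "alpha" rhythm plus weak 20 Hz "beta" activity
x = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

alpha = band_power(x, fs, 8, 13)    # alpha band, 8-13 Hz
beta = band_power(x, fs, 13, 30)    # beta band, 13-30 Hz
```

With the alpha component four times the beta amplitude, the alpha-band power dominates, which is the kind of contrast workload classifiers exploit.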
The functional and anatomical brain activities during cognitive mental tasks have been extensively evaluated using different techniques such as spectral analysis [26,27] and coherence analysis [28,29] . According to these studies, EEG spectral powers and EEG coherence analysis reveal different spectral and coherence patterns. Given the inherently non-stationary nature of mental states, characterized by continual fluctuations, it becomes imperative to employ non-stationary techniques to scrutinize the time-frequency characteristics of EEG signals. In this study, we leverage such techniques to examine the dynamic behavior inherent in EEG recordings, thereby extracting reliable information crucial for unraveling the complexities of brain function. To this end, we utilize time-frequency analysis to elucidate the abrupt transitions evident within EEG signals recorded from a 21-year-old female subject, both before and during a mental arithmetic task. By estimating spectral entropy, a robust metric reflective of signal complexity, we aim to capture the nuanced changes manifesting within EEG recordings over time. By illuminating the dynamic interplay between neural processes during cognitive tasks, this study holds promise for advancing our comprehension of brain activity dynamics. The integration of nonlinear signal processing techniques offers novel avenues for identifying and characterizing brain function during cognitive endeavors. Through the identification of new indicators and the refinement of existing methodologies, this research endeavors to catalyze breakthroughs in the elucidation of brain function, thereby enhancing our understanding of cognitive processes and facilitating the development of more efficacious diagnostic and therapeutic interventions. 
1.1 Methods Conventional spectral analysis techniques offer insights solely into the mean spectral composition of the observed signal, potentially overlooking the distributional nature of the spectral content. To characterize the frequency domain more fully, one may employ the estimated power spectral density (PSD), which quantifies the spectral content of the signal with greater granularity and precision [30] . In the context of non-stationary signals, traditional analytical approaches may prove inadequate for capturing the intricate temporal dynamics inherent in the data. In such scenarios, methods grounded in time-frequency analysis (TFA), such as spectrograms and scalograms, offer more refined insights into the underlying data structure. These techniques afford a comprehensive representation of the signal's spectro-temporal content, facilitating a nuanced understanding of its temporal evolution and frequency composition. Additionally, wavelet analysis is a viable approach for capturing the complex interplay between the time and frequency domains within the signal. Thus, in the pursuit of accurate signal analysis, TFA-based methodologies and wavelet analysis prove indispensable for unraveling the temporal intricacies embedded within non-stationary signals [31] . 
When seeking to quantify changes in the distribution of spectral contents over successive instances, conventional methods may exhibit limitations. To address these constraints, more robust methodologies such as spectral entropy (SE) become imperative for characterizing the spectral distribution of the data with greater fidelity and reliability. By leveraging SE, we can effectively capture the nuances in the spectral composition of signals across varying temporal epochs, facilitating a comprehensive understanding of spectral dynamics and enabling precise quantification of spectral changes over time. Thus, SE emerges as a formidable tool for unraveling the complexities of spectral distribution dynamics and for discerning subtle shifts in spectral features with heightened accuracy. 1.1.1 Signal processing algorithm We start by introducing various signal processing algorithms frequently employed in signal analysis. When confronted with signals whose frequency content exhibits temporal variability, traditional power spectrum analysis and stationary methods may fail to furnish dependable insights into alterations occurring within the frequency domain [32–34] . To address this inherent challenge, the adoption of non-stationary and time-varying methods becomes imperative, as these techniques offer enhanced capabilities to effectively characterize data of this nature [35–37] . Among these methodologies, the short-time Fourier transform (STFT) is a prominent technique, operating on the principle of decomposing a signal into its temporal and frequency components. The STFT partitions the input signal into overlapping segments and subjects each segment to a Fourier transformation; the transformed data are aggregated into a repository comprising magnitude and phase records for each time and frequency point. 
Additionally, the spectrogram, computed as the squared magnitude of the STFT of the signal, serves as a supplementary analytical tool, aiding in the comprehensive analysis of signal characteristics. The Fourier transform of a continuous-time integrable signal x(t) is given by

(1.1) X(f) = F{x(t)} = ∫_{−∞}^{∞} x(t) e^{−i2πft} dt, −∞ < f < ∞.

The signal can be recovered using the inverse Fourier transform:

(1.2) x(t) = F^{−1}{X(f)} = ∫_{−∞}^{∞} X(f) e^{i2πft} df, −∞ < t < ∞.

The magnitude function |X(f)| is obtained by taking the absolute value of the Fourier transform, and the phase function arg{X(f)} is its argument. The spectrum is the square of the magnitude function:

(1.3) S_x(f) = |X(f)|², −∞ < f < ∞.

For a zero-mean stationary stochastic process x(t), −∞ < t < ∞, the spectral density follows from the Wiener-Khintchine theorem as the Fourier transform of the covariance function r_x(τ):

(1.4) S_x(f) = F{r_x(τ)} = ∫_{−∞}^{∞} r_x(τ) e^{−i2πfτ} dτ, −∞ < f < ∞,

where

(1.5) r_x(τ) = E[x(t − τ) x*(t)], −∞ < τ < ∞,

E[·] denotes the expected value, and * denotes the complex conjugate. The covariance function r_x(τ) may be recovered using the inverse Fourier transform of the spectral density:

(1.6) r_x(τ) = F^{−1}{S_x(f)} = ∫_{−∞}^{∞} S_x(f) e^{i2πfτ} df, −∞ < τ < ∞.

For time-varying or non-stationary signals, the Fourier transform is generalized to the short-time Fourier transform (STFT):

(1.7) X(t, f) = ∫_{−∞}^{∞} x(t₁) h*(t₁ − t) e^{−i2πft₁} dt₁, −∞ < t, f < ∞,

where h(t) represents a window function centered at time t. 
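For a sampled record, the spectrum of (1.3) can be approximated with the discrete Fourier transform. The following NumPy sketch (a synthetic 7.5 Hz test tone, not the paper's data) recovers the dominant frequency from the spectrum:

```python
import numpy as np

fs = 500.0                        # sample rate in Hz
t = np.arange(0, 2, 1 / fs)       # 2 s record -> 0.5 Hz frequency resolution
x = np.sin(2 * np.pi * 7.5 * t)   # 7.5 Hz test tone (theta-range)

X = np.fft.rfft(x)                # discrete analogue of X(f) in (1.1)
S = np.abs(X) ** 2                # spectrum S_x(f) = |X(f)|^2, eq. (1.3)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

peak_freq = freqs[np.argmax(S)]   # dominant spectral component
```

Because 7.5 Hz falls exactly on a DFT bin here, the peak frequency is recovered exactly; off-bin tones would leak into neighboring bins.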
The window function cuts out the signal close to time t, so that the Fourier transform provides a local estimate around this time instant. For the STFT we use a fixed positive even window h(t) of a certain shape, centered around zero, with unit energy ∫_{−∞}^{∞} |h(t)|² dt = 1. The spectrogram is computed analogously to the ordinary Fourier transform and spectrum:

(1.8) S_x(t, f) = |X(t, f)|², −∞ < t, f < ∞.

The spectrogram is useful for analyzing time-varying and non-stationary signals. In practice, the measured signal is sampled with some sample distance T, i.e. x_n = x(nT), associated with the sample frequency F_s = 1/T. The discrete spectrogram is computed as

(1.9) S_x(n, l) = |∑_{n₁=0}^{N−1} x_{n₁} h*(n₁ − n + M/2) e^{−i2πn₁l/L}|²,

where h(n) is the window function of length M, energy-normalized according to

(1.10) h(n) = h₁(n) / √(∑_{n=0}^{M−1} h₁²(n)), n = 0, …, M − 1.

The length and shape of the window function h(n) determine the resolution in time and frequency. The computation of the spectrogram can be sped up with the Fast Fourier Transform (FFT) algorithm, where the FFT length L defines the computed spectrogram values at the frequencies 0, F_s/L, 2F_s/L, …, with L as the number of frequency values; choosing L = 2^I for some integer I, not larger than N, gives the best FFT performance. In studying time-varying signals, the wavelet transform plays an important role; it is mainly applied to analyze data at different scales or resolutions. The continuous wavelet transform (CWT) is obtained as

(1.11) CWT(b, a) = (1/√a) ∫_{−∞}^{∞} x(t₁) h*((t₁ − b)/a) dt₁.

To compute the scalogram, we use a procedure similar to the STFT and the spectrogram. 
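A minimal sketch of the discrete spectrogram of (1.8)-(1.10), assuming a Hann window and non-overlapping frames (the window length and the test signal are illustrative, not the parameters used in the paper):

```python
import numpy as np

def spectrogram(x, fs, M=256):
    """|STFT|^2 with an energy-normalized Hann window of length M."""
    h = np.hanning(M)
    h = h / np.sqrt(np.sum(h ** 2))               # normalization, eq. (1.10)
    n_frames = len(x) // M                        # non-overlapping hops
    frames = x[:n_frames * M].reshape(n_frames, M) * h
    S = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # eq. (1.8)/(1.9)
    freqs = np.fft.rfftfreq(M, d=1 / fs)
    return freqs, S

fs = 500.0
t = np.arange(0, 2, 1 / fs)
# Frequency jumps from 10 Hz to 40 Hz halfway through the record
x = np.where(t < 1, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 40 * t))

freqs, S = spectrogram(x, fs)
f_start = freqs[np.argmax(S[0])]    # dominant frequency in the first frame
f_end = freqs[np.argmax(S[-1])]     # dominant frequency in the last frame
```

The time-localized frames resolve the frequency jump that a single whole-record Fourier transform would smear together.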
To do so, we simply take the absolute value of the CWT and square it. The discrete wavelet transform (DWT), a wavelet decomposition technique, applies the wavelet filter to non-overlapping windows of the time series; the DWT is then computed by multi-resolution analysis, which was initially used in image compression. To obtain the DWT, we replace a and b in (1.11) by 2^j and k2^j, respectively:

(1.12) DWT(j, k) = (1/√(2^j)) ∫_{−∞}^{∞} x(t₁) h*((t₁ − k2^j)/2^j) dt₁.

Generally speaking, the discrete wavelet transform applies a coarse time-frequency discretization to speed up the computation, whereas the continuous wavelet transform applies a near-continuous, and hence more time-consuming, discretization of the time and frequency scales to obtain better resolution. Here we provide one example of computing the wavelet transform of a given signal (see Fig. 1 , Top). We compute the continuous wavelet transform (CWT) of the given signal and display it in Fig. 1 as well; the CWT result is visualized in (Bottom) Right, and to localize the frequency change precisely, the finest-scale CWT coefficients are plotted in (Bottom) Left. 1.1.2 Spectral entropy To measure the uniformity of the energy distribution in the frequency domain, we compute the spectral entropy (SE) [38] . The spectral entropy quantifies the spectral power distribution of a signal to assess its forecastability; it is based on Shannon entropy from information theory [39–42] . The spectral entropy, a normalized form of Shannon entropy, uses the power-spectrum amplitude components of the signal to compute entropy [43] . Spectral entropy can be used to differentiate between a narrow-band and a wide-band signal; however, it does not provide reliable results when comparing two wide-band signals [38] . 
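To make the dyadic discretization of (1.12) concrete, here is a hand-rolled single-level Haar DWT (a simplification for illustration; the paper's analysis uses a full multi-resolution toolbox) that splits a signal into approximation and detail coefficients while preserving energy:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: orthonormal approximation/detail coefficients."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                      # Haar pairs samples; drop an odd tail
        x = x[:-1]
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)  # low-pass (scaling) output
    detail = (even - odd) / np.sqrt(2)  # high-pass (wavelet) output
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)

# Orthonormality preserves energy: ||x||^2 == ||a||^2 + ||d||^2
energy_in = np.sum(x ** 2)
energy_out = np.sum(a ** 2) + np.sum(d ** 2)
```

Recursing on the approximation branch yields the coarser levels j = 2, 3, … of the multi-resolution analysis.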
For a signal x(t), the power spectrum can be written as S(f̃) = |X(f̃)|², where X(f̃) denotes the discrete Fourier transform of x(t). The probability distribution P(f̃) is computed as

(1.13) P(f̃) = S(f̃) / ∑_i S(i).

The spectral entropy H is then obtained as

(1.14) H = −∑_{f̃=1}^{N} P(f̃) log₂ P(f̃).

After normalizing we get

(1.15) H_n = −∑_{f̃=1}^{N} P(f̃) log₂ P(f̃) / log₂ N,

where N represents the total number of frequency points. The denominator, log₂ N, represents the maximal spectral entropy, attained by white noise uniformly distributed in the frequency domain. For a given time-frequency power spectrogram S(t, f), the probability distribution takes the form

(1.16) P(f̃) = ∑_t S(t, f̃) / ∑_f ∑_t S(t, f),

and the spectral entropy is computed as before. For a given time-frequency power spectrogram S(t, f), the probability distribution at time t can be obtained as

(1.17) P(t, f̃) = S(t, f̃) / ∑_f S(t, f),

and the spectral entropy at time t is written as

(1.18) H(t) = −∑_{f̃=1}^{N} P(t, f̃) log₂ P(t, f̃).

For a noisy signal, the spectral entropy becomes close to 1, and for a pure tone signal, the spectral entropy approaches 0. In Figs. 2 and 3 , we display the spectral entropy of a signal and compare it to the original signal. We also plot the percentage of energy for each wavelet coefficient in Figs. 2 and 3 ; this plot is computed using the continuous 1-D wavelet transform, after which the matrix of continuous wavelet coefficients is displayed. 2 Application 2.1 Data acquisition and preprocessing The electroencephalogram (EEG) recordings were collected using a Neurocom monopolar 23-channel EEG system (Ukraine, XAI-MEDICA). 
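Equations (1.13)-(1.15) translate directly into a short routine; the synthetic white-noise and pure-tone examples below (with a fixed random seed) illustrate the normalized entropy approaching 1 and 0, respectively:

```python
import numpy as np

def spectral_entropy(x):
    """Normalized spectral entropy H_n of a 1-D signal, eqs. (1.13)-(1.15)."""
    S = np.abs(np.fft.rfft(x)) ** 2        # power spectrum S = |X|^2
    P = S / np.sum(S)                      # probability distribution, (1.13)
    P = P[P > 0]                           # convention: 0 * log(0) = 0
    H = -np.sum(P * np.log2(P))            # Shannon entropy, (1.14)
    return H / np.log2(len(S))             # normalize by log2(N), (1.15)

rng = np.random.default_rng(0)
fs = 500.0
t = np.arange(0, 2, 1 / fs)

noise = rng.standard_normal(len(t))        # broadband -> H_n near 1
tone = np.sin(2 * np.pi * 10 * t)          # narrowband -> H_n near 0

h_noise = spectral_entropy(noise)
h_tone = spectral_entropy(tone)
```

The gap between the two values is what makes spectral entropy usable as a discriminative feature between spectrally flat and spectrally concentrated epochs.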
Silver/silver chloride electrodes were placed on the scalp at symmetrical anterior frontal ( F p 1 , F p 2 ), frontal ( F 3 , F 4 , F z , F 7 , F 8 ), central ( C 3 , C 4 , C z ), parietal ( P 3 , P 4 , P z ), occipital ( O 1 , O 2 ), and temporal ( T 3 , T 4 , T 5 , T 6 ) recording sites according to the International 10/20 scheme. All electrodes were referenced to the interconnected ear reference electrodes. The inter-electrode impedance was below 5 kΩ. The sample rate was 500 Hz per channel. A high-pass filter with a 0.5 Hz cut-off frequency, a low-pass filter with a 45 Hz cut-off frequency, and a power-line notch filter (50 Hz) were used; the time constant of the amplification tract was 0.3 s [44,45] . Every recording includes separate artifact-free EEG segments of 180 s for the resting state and 60 s for mental counting. Based on visual EEG inspection by a qualified electroneurophysiologist, 30 of the 66 initial participants were excluded from the database due to poor EEG quality (an excessive number of oculographic and myographic artifacts), so the final sample size is 36 subjects. We display the background EEG recording of a subject (before the mental arithmetic task) in Fig. 4 , and the EEG recording of the same subject during the mental arithmetic task in Fig. 5 . At the data preprocessing stage, Independent Component Analysis (ICA) was used to eliminate artifacts (eye, muscle, and cardiac-pulsation components). The arithmetic task was the serial subtraction of two numbers. Each trial started with the oral communication of a 4-digit (minuend) and a 2-digit (subtrahend) number (e.g. 3141 and 42). This database was contributed by Igor Zyma, Sergii Tukaev, and Ivan Seleznov, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Department of Electronic Engineering [44,45] . We use the EEG recordings that were registered and cleared of artifacts in the work of Zyma et al. [44] . 
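The offline effect of the filtering chain described here (0.5 Hz high-pass, 45 Hz low-pass, 50 Hz notch) can be mimicked with a crude FFT-domain mask; this NumPy sketch is an illustration only, not the amplifier's actual analog/digital filters:

```python
import numpy as np

def fft_bandpass(x, fs, f_lo=0.5, f_hi=45.0, f_notch=50.0, notch_bw=1.0):
    """Zero out FFT bins outside [f_lo, f_hi] and around the notch frequency."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1 / fs)
    keep = (f >= f_lo) & (f <= f_hi)
    keep &= ~((f > f_notch - notch_bw) & (f < f_notch + notch_bw))
    return np.fft.irfft(X * keep, n=len(x))

fs = 500.0
t = np.arange(0, 2, 1 / fs)
# 10 Hz "alpha" component plus 50 Hz mains interference
x = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 50 * t)

y = fft_bandpass(x, fs)
f = np.fft.rfftfreq(len(x), d=1 / fs)
S = np.abs(np.fft.rfft(y)) ** 2
mains_power = S[np.argmin(np.abs(f - 50.0))]   # residual 50 Hz power
alpha_power = S[np.argmin(np.abs(f - 10.0))]   # preserved 10 Hz power
```

After masking, the 50 Hz component is suppressed to numerical noise while the in-band 10 Hz component passes through unchanged.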
2.2 The wavelet analysis of EEG signal In the wavelet-based approach, signal analysis involves partitioning the signal into windows of varying sizes, facilitating manipulation through compression or expansion via a scale variable. This analytical framework operates within the frequency domain, leveraging time-frequency analysis to compute the spectrum of EEG signals and thereby extract distinct features. By applying the discrete wavelet transform (DWT), the continuous time-varying EEG signal is characterized through a nuanced understanding of its wavelet decomposition. This methodology enables comprehensive investigation into the temporal and spectral dynamics of EEG signals, offering a refined perspective on the underlying neurophysiological processes. We decompose the EEG signal using the wavelet transform and then calculate the wavelet coefficients. In Fig. 6 , we display the computed discrete wavelet coefficients. As we can see, wavelet analysis can detect changes in the EEG signal across time, whereas the Fourier transform is not able to capture the instant at which the frequency of the EEG signal changes. Analyzing variations in variance is important, since they often reflect fundamental changes in the data-generating mechanism. We plot the respective wavelet variances directly in a log-log plot. Since contamination, e.g. by outliers, can arise in many practical settings and can seriously bias the standard estimator of the wavelet variance, we present the wavelet-variance plots with contamination as well. Fig. 7 displays changes in the variance of the background EEG recording before and during the mental arithmetic task. We also display the wavelet variance for all channels of the recorded EEG of the 21-year-old female subject before and during the mental arithmetic task in Figs. 8 and 9 , respectively. 
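The wavelet-variance computation described above can be sketched with repeated Haar filtering; this simplified NumPy-only estimator (not the exact DWT/MODWT estimators with confidence intervals used in the paper) returns the detail-coefficient variance at each decomposition level for two surrogate signals of different overall variability:

```python
import numpy as np

def haar_wavelet_variance(x, levels=4):
    """Variance of Haar detail coefficients at each decomposition level."""
    x = np.asarray(x, dtype=float)
    variances = []
    for _ in range(levels):
        if len(x) % 2:
            x = x[:-1]
        even, odd = x[0::2], x[1::2]
        detail = (even - odd) / np.sqrt(2)
        variances.append(np.var(detail))
        x = (even + odd) / np.sqrt(2)      # coarser approximation for next level
    return np.array(variances)

rng = np.random.default_rng(1)
calm = rng.standard_normal(1024)           # low-variability surrogate
busy = 3.0 * rng.standard_normal(1024)     # higher-variability surrogate

v_calm = haar_wavelet_variance(calm)
v_busy = haar_wavelet_variance(busy)
```

A level-wise comparison of such variance profiles is the kind of contrast the paper draws between the resting and task conditions.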
Based on the findings derived from the wavelet variance analysis, discernible alterations in the data's variability become apparent. This observation suggests shifts in the dispersion or spread of the data points, indicative of underlying changes in the processes or dynamics being studied. 2.3 The spectral entropy of EEG signal To compute the spectral entropy, we calculate the power spectrogram of the EEG signal and then compute the spectral entropy of the frequency bins within the bandwidth of the EEG signal. We plot the spectral entropy of all EEG channels recorded before and during the mental arithmetic task in Figs. 10 and 11 , respectively. The spectral entropy analysis conducted on the EEG channels before and during the mental arithmetic task reveals discernible differences in patterns. Specifically, distinct variations are observed in the spectral entropy plots corresponding to the two recording conditions. These changes in the signal spectrum, as reflected in the spectral entropy plots, coincide with the occurrence of the mental arithmetic task. Consequently, the spectral entropy measure demonstrates its efficacy in distinguishing between various mental brain activities. This finding underscores the utility of spectral entropy analysis as a valuable tool for elucidating the dynamic neural processes underlying cognitive tasks and their impact on EEG signals. 3 Results and discussion The brain serves as the epicenter for information processing, orchestrating complex neural networks that facilitate communication through a combination of electrical and chemical signaling mechanisms. Neurons, the fundamental units of the nervous system, utilize intricate electrochemical processes to transmit signals, involving the release and reception of neurotransmitters across synaptic junctions. Within this dynamic environment, neural activities give rise to electrical phenomena characterized by the generation and propagation of electrical impulses. 
The brain's capacity to execute diverse functions hinges upon its ability to generate and modulate these electrical signals, underscoring the paramount importance of elucidating the underlying structural and functional intricacies governing their activity. Consequently, gaining insights into the neurophysiological substrates that underpin the generation and propagation of these electrical signals is imperative for advancing our understanding of brain function and behavior [46–48] . Numerous methodologies have been developed to elucidate the intricate dynamics of brain function, with electroencephalography (EEG) recordings standing out among the array of available techniques. EEG offers a direct window into various brain activities, providing high temporal resolution while retaining portability and accessibility, rendering it a cornerstone in the investigation of the brain's complex architecture. Deciphering the raw recordings of EEG signals in the time domain poses a formidable challenge, necessitating the adoption of appropriate frequency-domain analyses for comprehensive examination. By delving into the frequency domain, researchers can unravel the spectral characteristics of EEG signals, shedding light on the underlying oscillatory patterns and temporal dynamics of brain activity. This analytical approach empowers investigators to extract nuanced insights into the intricate interplay of neural processes, thereby enriching our understanding of the complex functioning of the brain [49,50] . In recent years, the discrete wavelet transform (DWT), a powerful time-frequency technique, has been widely applied in the signal analysis of electroencephalography (EEG) recordings [51] , and its use in EEG signal classification has garnered considerable attention. 
DWT involves the decomposition of EEG signals into non-overlapping windows through the application of wavelet filters, allowing for the extraction of frequency information across different scales. An alternative approach, known as the maximal overlap discrete wavelet transform (MODWT), extends this methodology by employing wavelet filters on overlapping windows of the signals; in essence, MODWT operates by systematically sliding the filter across the signal, yielding a more comprehensive analysis of temporal dynamics. In our study, we computed the relevant wavelet coefficients to assess brain electrical activities both before and during a mental arithmetic task. Our findings indicate that the discrete wavelet transform performs well in detecting changes in time-varying signals. Compared to conventional signal processing methods reliant on spectral analysis, the DWT approach demonstrates enhanced accuracy and versatility across EEG datasets. This suggests that DWT-based methodologies offer a robust framework for analyzing EEG signals, facilitating more precise discrimination of cognitive states and providing valuable insights into brain function dynamics. When confronted with diverse signals, a comparative analysis of their respective wavelet variances becomes a pertinent endeavor aimed at elucidating their behavioral and property similarities. This exploration often serves to interrogate the hypothesis that the variability of the measurements has been influenced by the induction of distinct mental states. The wavelet variance, a versatile metric, may be computed using either the discrete wavelet transform (DWT) or its counterpart, the maximal overlap discrete wavelet transform (MODWT). By employing these wavelet transforms, we derive wavelet variances alongside their corresponding confidence intervals, facilitating inferential analyses. 
In our investigation, we harnessed the power of DWT and MODWT to compute wavelet variances, thereby unveiling distinct signal properties before and during a mental arithmetic task. Notably, the wavelet variance emerged as a robust indicator of significant variance alterations during the mental arithmetic task, signifying a notable shift in process variance. To account for practical considerations, we supplemented our analysis with plots depicting wavelet variance in the presence of contamination. Our findings underscored the utility of wavelet variance in discerning changes in signal properties and its resilience to contaminating influences. Specifically, our results revealed a striking congruence between classic and robust wavelet variance estimates, even in the presence of contamination, facilitating a nuanced interpretation of signal dynamics. This comprehensive analysis affords valuable insights into the intricate interplay between signal variability and mental states, enriching our understanding of neurophysiological processes. Recently, different entropy measures have been used in order to study EEG signals [52,53] . In the context of EEG data analysis, the quantification of predictability plays a pivotal role in understanding neural dynamics. Shannon entropy has long been utilized as a measure of predictability; however, its utility is limited by its lack of normalization to the total power of the EEG signal. Consequently, its absolute value varies among subjects, undermining its clinical applicability. To address this limitation, spectral entropy emerges as a promising alternative, leveraging the principles of Shannon entropy within the domain of Fourier-transformed signals. In our current investigation, we harnessed spectral entropy to characterize EEG datasets encompassing two distinct groups: EEG signals recorded before and during a mental arithmetic task. 
By computing spectral entropy within the frequency bins corresponding to the bandwidth of EEG signals, we were able to elucidate the intricate dynamics of neural activity across these groups. Notably, spectral entropy exhibited a remarkable capacity to localize changes and capture irregular patterns present within both datasets. This noteworthy finding underscores the potential utility of spectral entropy in characterizing diverse neurological conditions utilizing EEG datasets. By virtue of its ability to quantify the predictability of EEG signals while mitigating the influence of absolute signal power, spectral entropy stands poised as a valuable tool for elucidating the underlying neurophysiological mechanisms implicated in various neurological disorders. This robust characterization of EEG dynamics offers promising avenues for advancing our understanding of neural processes and holds considerable potential for informing clinical interventions and diagnostic strategies [54–56] . The outcomes of this investigation underscored the utility of employing the discrete wavelet transform (DWT), wavelet variance, and spectral entropy as potent tools for characterizing EEG signals recorded during diverse brain activities. By leveraging these analytical approaches, we observed significant enhancements in computational efficiency and the ability to discern changes across temporal and frequency scales, thereby rendering them efficacious indicators for biomedical applications aimed at achieving precise and effective diagnoses. Furthermore, the utilization of subjects engaged in various mental tasks proved instrumental in delineating electrical brain signals corresponding to distinct cognitive activities. Leveraging such paradigms offers invaluable insights into the neural substrates underlying cognitive processes, thereby enriching our understanding of brain function. 
Building upon these findings, we advocate for the implementation of comparative studies incorporating diverse measures and indices to refine current clinical frameworks for monitoring EEG signal dynamics over time. While the use of singular entropy measurements presents a notable advancement, we caution against overreliance on a solitary metric, as it may prove insufficient in the search for robust biomarkers capable of delineating different brain states within a given task paradigm. Therefore, systematic exploration and integration of multiple analytical approaches are warranted in future investigations to ascertain a comprehensive understanding of neural dynamics and facilitate the identification of clinically meaningful biomarkers. 4 Conclusion This study highlights the complexity of brain activity and underscores the importance of employing appropriate techniques to capture its nonlinear dynamics. Specifically, the Discrete Wavelet Transform (DWT) method demonstrated promising results in detecting transitions across EEG data time courses, particularly in the context of EEG During Mental Arithmetic Tasks dataset. This method effectively assessed changes in raw data and captured oscillations in the structure of recordings both before and during mental arithmetic tasks. Moreover, the study identified the spectral entropy as a valuable tool for distinguishing between EEG recordings before and during mental arithmetic tasks. By establishing an effective and high-performance framework, based on discrete wavelet transform and spectral entropy, the paper presents a novel approach for detecting and localizing abrupt changes in the time and frequency domains of time-varying signals with high accuracy and low computational cost. This framework is particularly well-suited for analyzing non-stationary signals such as EEG. The main contribution of the study lies in providing a robust methodology for analyzing EEG signals, which can serve as a foundation for future research. 
The results highlight quantitative (spectral entropy) and qualitative (wavelet analysis) differences between recording conditions of EEG signals, suggesting avenues for further investigation. Additionally, the study suggests the potential use of other quantitative measures such as fractal and multi-fractal analysis to extract additional insights from EEG recordings. Overall, the findings contribute to advancing our understanding of brain dynamics and pave the way for further exploration in this field. Abbreviations EEG: The electroencephalogram signal AD: Alzheimer's disease PSD: Power Spectral Density TFA: Time-frequency analysis SE: Spectral Entropy STFT: Short-Time Fourier Transform FFT: Fast Fourier transform DWT: Discrete wavelet transform CWT: Continuous wavelet transform MODWT: Maximum Overlap Discrete Wavelet Transformation Pseudo code for the Fast Fourier Transform (FFT) algorithm for a continuous-time integrable signal: Algorithm: Fast Fourier Transform (FFT) for Continuous-Time Integrable Signal Input: - Signal: Continuous-time integrable signal - Sampling Rate: Rate at which the signal is sampled Output: - Frequency Bins: Array of frequency bins - FFT Result: Result of the FFT algorithm Procedure: 1. Compute the length of the signal, n. 2. Define the sampling period, T, as 1 / sampling rate. 3. Compute the frequency bins using the FFT frequencies function. 4. Perform numerical integration of the signal using the trapezoidal rule. 5. Apply the FFT algorithm to the integral. 6. Return the frequency bins and FFT result. Functions: 1. FFT Frequencies Function: - Input: Length of the signal, n; Sampling period, T. - Output: Array of frequency bins. - Procedure: a. Initialize an empty array for frequency bins. b. For each index k from 0 to n - 1: i. Compute the frequency bin using the formula k / (n * T). ii.
Append the frequency bin to the array. c. Return the array of frequency bins. 2. Trapezoidal Integration Function: - Input: Signal, Sampling period, T. - Output: Integral of the signal. - Procedure: a. Initialize an empty array for the integral. b. For each index k from 0 to n - 1: i. Initialize the integral at index k to 0. ii. Perform the trapezoidal integration: - For each index j from 0 to k: ⁎ Add signal[j] * T / 2 to the integral at index k. - For each index j from 1 to k - 1: ⁎ Add signal[j] * T / 2 to the integral at index k. c. Return the array of integral values. Pseudo Code: Function FFT_Continuous(Signal, Sampling Rate): a. Compute the length of the signal, n. b. Define the sampling period, T, as 1 / sampling rate. c. Compute the frequency bins using the FFT Frequencies Function. d. Perform numerical integration of the signal using the Trapezoidal Integration Function. e. Apply the FFT algorithm to the integral. f. Return the frequency bins and FFT result. Human and animal rights The authors declare that the work described has not involved experimentation on humans or animals. Informed consent and patient details The authors declare that this report does not contain any personal information that could lead to the identification of the patient(s) and/or volunteers. Funding The author(s) received no financial support for the research. Author contributions All authors attest that they meet the current International Committee of Medical Journal Editors (ICMJE) criteria for Authorship. Declaration of Competing Interest The authors declare that they have no known competing financial or personal relationships that could be viewed as influencing the work reported in this paper. Acknowledgements We are deeply grateful to all those who played a role in the success of this project.
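The pseudocode above can be transcribed into a short runnable sketch. This is a hedged rendering, not the authors' code: the running trapezoidal integral is vectorized with the k = 0 term taken as zero, and the cosine test signal is an assumption for illustration.

```python
import numpy as np

def fft_continuous(signal, sampling_rate):
    """Transcription of the pseudocode: compute frequency bins,
    numerically integrate the signal with the trapezoidal rule,
    then apply the FFT to the running integral."""
    n = len(signal)
    T = 1.0 / sampling_rate                 # sampling period
    freqs = np.fft.fftfreq(n, d=T)          # frequency bins k / (n * T)
    # Running trapezoidal integral: I[k] = T/2*(s[0] + s[k]) + T*sum(s[1:k]),
    # with the degenerate k = 0 term taken as 0
    integral = np.concatenate(
        ([0.0], np.cumsum((signal[1:] + signal[:-1]) * T / 2.0)))
    return freqs, np.fft.fft(integral)

fs = 100.0
t = np.arange(0.0, 1.0, 1.0 / fs)
freqs, spectrum = fft_continuous(np.cos(2 * np.pi * 5.0 * t), fs)
```

Because integrating a 5 Hz cosine yields a 5 Hz sine (scaled by 1/ω), the magnitude spectrum of the integral should peak at the 5 Hz bin, which makes the sketch easy to sanity-check.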
|
[
"ANDREASSI",
"SARTER",
"POLDRACK",
"ZARJAM",
"BAARS",
"WILSON",
"TENG",
"AZIZI",
"JEONG",
"PRITCHARD",
"JEONG",
"STAM",
"BRECHTJE",
"GEVINS",
"ZARJAM",
"ZARJAM",
"WANG",
"REBSAMEN",
"MURATA",
"HARMONY",
"INOUYE",
"REBSAMEN",
"WANG",
"MICHELOYANNIS",
"JELLES",
"SOLEYMANI",
"KORTELAINEN",
"WEISS",
"GONZALEZGARRIDO",
"SINGH",
"BRUNS",
"SANDSTEN",
"AZIZI",
"HAMMOND",
"HLAWATSCH",
"AZIZI",
"HLAWATSCH",
"BOASHASH",
"MISRA",
"TOH",
"DEVI",
"YIN",
"ACHARYA",
"ZYMA",
"GOLDBERGER",
"AZIZI",
"SANEI",
"SANEI",
"COOPER",
"AZIZI",
"CHEN",
"TIAN",
"DAS",
"KAUR",
"GAO",
"SU"
] |
0b286eb61ca44a50b6b89a779b47ad01_Association of Histologic Subtype With Radiation Response and Survival Outcomes in Synovial Sarcoma_10.1016_j.adro.2025.101718.xml
|
Association of Histologic Subtype With Radiation Response and Survival Outcomes in Synovial Sarcoma
|
[
"Matsui, Jennifer K.",
"Jackson, Scott",
"Fang, Judy",
"Mohler, David G.",
"Steffner, Robert J.",
"Avedian, Raffi S.",
"Charville, Gregory W.",
"Rijn, Matt van de",
"Million, Lynn",
"Chin, Alexander L.",
"Hiniker, Susan M.",
"Kalbasi, Anusha",
"Moding, Everett J."
] |
Purpose
Synovial sarcoma (SS) is a rare, aggressive soft tissue malignancy that is divided into biphasic and monophasic histologic subtypes. In addition to surgical resection, radiation therapy (RT) improves local control in patients at higher risk of recurrence. This study aimed to investigate the impact of histologic subtype on radiation response and survival outcomes in patients treated with RT as part of definitive management.
Methods and Materials
We retrospectively identified patients with SS treated with RT and surgical resection from 1997 to 2020 at Stanford Medical Center. We assessed the association between histologic subtypes (biphasic vs monophasic) and response to preoperative RT based on imaging and pathology. Volumetric response was calculated using the pre-RT and post-RT/preoperative postcontrast T1-weighted magnetic resonance imaging images. Progression-free survival (PFS) and overall survival (OS) were estimated using the Kaplan-Meier method. Univariable and multivariable analyses were conducted using Cox regression models. Variables for univariable and multivariable analyses included age, histologic subtypes, tumor location, tumor size, margin status, chemotherapy, and performance status.
Results
In our study, 50 patients met the inclusion criteria. The median age was 34.8 years at diagnosis, and 36% (n = 18) received concurrent chemotherapy. Biphasic (n = 18, 36%) and monophasic (n = 32, 64%) tumors exhibited significant differences in negative margin status (94% vs 66%, P = .036). Of the 22 patients who underwent preoperative RT, 15 patients had pre-RT and post-RT imaging to assess volumetric changes. Biphasic tumors demonstrated less necrosis at the time of surgical resection but a significantly greater volumetric decrease with preoperative RT (42% vs 5%, P = .004). PFS and OS were superior in biphasic tumors (P = .003 and P = .009, respectively). Multivariable analyses identified histologic subtypes (monophasic vs biphasic) as a significant factor impacting PFS (HR, 5.65; 95% CI, 1.78-17.91; P = .003).
Conclusions
Biphasic tumors exhibit an improved volumetric response to preoperative RT and improved outcomes. These findings underscore the importance of considering histology when tailoring treatment for patients with SS.
|
Introduction Synovial sarcoma (SS) accounts for 5% to 10% of soft tissue sarcomas and is considered an aggressive, high-grade sarcoma with a 5-year mortality rate of 25%. 1 SS occurs primarily in the adolescent and young adult patient population and frequently arises in the knee and lower thigh. 2 , 3 The chromosome abnormality t(X;18)(p11.2;q11.2) is a unique feature of these tumors that results in the formation of the SS18::SSX fusion oncogenes. 4 Histologically, there are 2 predominant subtypes: monophasic tumors, which are composed of sheets of spindle cells, and biphasic tumors that harbor both epithelial and spindle cell components. 5 , 6 Previous studies have suggested that biphasic SS have a better overall survival (OS) than monophasic SS. 7 , 8 However, other studies did not find statistically significant survival differences between the subtypes. 2 , 9 , 10 , 11 Similar to other soft tissue sarcomas, the optimal care for localized SS includes complete surgical resection with radiation therapy (RT) added to improve local control in patients at high risk. Chemotherapy has been associated with improved survival in patients with SS, and metastatic SS are typically chemosensitive. 12 , 13 SS has been described as being radioresistant. 14 , 15 However, a Surveillance, Epidemiology, and End Results database study found patients with SS treated with RT had statistically significant improvement in disease-specific survival (HR, 0.62; P = .003) and OS (HR, 0.65; P < .001). 16 To date, there are no reports comparing biphasic and monophasic SS RT response. In this study, we sought to compare volumetric changes and survival outcomes between the SS subtypes. Methods and Materials Study design Stanford's Institutional Review Board approved this retrospective study. Patients with SS histology were queried from the Radiation Oncology Data Warehouse, which aggregates data from electronic medical records. 17
Patients who were diagnosed with localized biopsy-confirmed SS treated at our institution between 1997 and 2020 who underwent surgical resection and received either preoperative or postoperative RT within 3 months of surgery were included. Demographic information, pathologic data, treatment details, follow-up, patterns of recurrence, and survival status were collected. Primary outcomes included volumetric response to radiation, progression-free survival (PFS), and OS. We included the following variables in the univariable and multivariable analyses for PFS and OS: patient age at diagnosis, monophasic or biphasic histology, tumor size, surgical margin status (negative vs positive), concurrent chemotherapy (no vs yes), and Eastern Cooperative Oncology Group (ECOG) performance status at time of RT. Tumors were designated as biphasic or monophasic based on histologic findings found in the pathology reports. Measuring volumetric response A subset of patients treated with preoperative RT with pretreatment and post-RT/presurgery magnetic resonance imaging available were analyzed for volumetric response to RT. Thin-cut, 1 to 1.5 mm slice thickness, postcontrast T1-weighted images were imported into MIM Software Inc., version 7.3.3, for contouring pretreatment and post-RT tumor volumes. The volumetric decrease was calculated as: (Pre-RT volume − Post-RT volume) / Pre-RT volume. We also used the Response Evaluation Criteria in Solid Tumors version 1.1 (RECIST 1.1) to determine if there was a complete response, partial response, stable disease, or disease progression. Pathologic response was based on percent tumor necrosis at the time of surgical resection after neoadjuvant RT. 18 , 19 Statistical analysis Analyses were conducted using R version 4.3.0 (R Core Team), SAS version 9.4 (Statistical Analysis Systems), and Prism version 10.2.2 (GraphPad).
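As a worked example of the volumetric-decrease formula above (the volumes are illustrative numbers, not patient data):

```python
def volumetric_decrease(pre_rt_volume, post_rt_volume):
    """Fractional decrease in tumor volume after preoperative RT:
    (pre-RT volume - post-RT volume) / pre-RT volume."""
    return (pre_rt_volume - post_rt_volume) / pre_rt_volume

# A tumor shrinking from 100 cm^3 to 59 cm^3 corresponds to a 41% decrease,
# matching the median reported for biphasic tumors; a growing tumor yields
# a negative value.
print(volumetric_decrease(100.0, 59.0))  # → 0.41
```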
Fisher exact tests were used to compare categorical variables and Wilcoxon rank sum tests were used for continuous variables. Correlation between variables was assessed using the Spearman correlation coefficient. Survival curves were created using the Kaplan-Meier method. Univariable and multivariable analyses used Cox proportional hazards regression to identify variables associated with mortality. In our analysis, we included age as a continuous variable, histology (biphasic vs monophasic), tumor size as a continuous variable, surgical margin status (negative vs positive), concurrent systemic therapy (yes vs no), and ECOG performance status. On OS multivariable analysis, ECOG and age were not used as stratified variables because of nonproportional hazards. Results Patient characteristics A total of 50 patients with SS were identified ( Table 1 ). The median age at the time of diagnosis was 34.8 years (range, 13.2-70.3 years) and 36% received concurrent chemotherapy. There were 22 female (44%) and 28 male (56%) patients. Most had an ECOG of 0 (n = 29, 58%). The median tumor size was 7.1 cm (range, 2.7-18.0 cm). The most common sites were extremity (n = 31, 62%), head and neck (n = 6, 12%), and trunk/abdomen (n = 3, 6%). Thirty-eight patients (76%) had negative surgical margins and 12 patients (24%) had positive surgical margins. Fifty-six percent (n = 28) of patients received postoperative RT and 44% (n = 22) received preoperative RT. Patients were treated to a median dose of 55 Gy (range, 45-66 Gy). Most patients did not receive concurrent chemotherapy (n = 32, 64%). There were 18 (36%) and 32 (64%) tumors with biphasic and monophasic histology, respectively. Although there were more head and neck tumors in the monophasic group, there was not a significant difference in tumor locations between the 2 groups. Biphasic tumors had significantly higher rates of negative margins than monophasic tumors (94% vs 66%, P = .036). 
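The Kaplan-Meier method used for the survival curves above can be sketched from first principles. This is an illustrative implementation with made-up event times and censoring flags; the authors used R/SAS/Prism, not this code.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate: S(t) = product over event times t_i <= t of
    (1 - d_i / n_i), where d_i is the number of events at t_i and n_i the
    number still at risk. `events` holds 1 for an event, 0 for censoring.
    Returns (event_times, survival_probabilities)."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out_t, out_s = 1.0, [], []
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [e for tt, e in data if tt == t]   # all subjects at this time
        deaths = sum(tied)
        if deaths:                                # survival drops only at events
            surv *= 1.0 - deaths / n_at_risk
            out_t.append(t)
            out_s.append(surv)
        n_at_risk -= len(tied)                    # events and censorings leave the risk set
        i += len(tied)
    return out_t, out_s

# Five subjects: events at t = 1, 3, 4; censored at t = 2 and 5.
t_km, s_km = kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 1, 0])
```

The censored subject at t = 2 leaves the risk set without dropping the curve, which is exactly what distinguishes Kaplan-Meier from a naive fraction-surviving estimate.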
Of the 10 patients with monophasic tumors with positive margins, 4 patients received preoperative RT and 6 patients received postoperative RT. Volumetric response Of the 22 patients treated with preoperative RT, 15 patients had both pre-RT and post-RT imaging to assess volumetric changes. Eight patients (53%) had biphasic tumors and 7 patients (47%) had monophasic tumors. The volumetric decrease for biphasic tumors was significantly greater than for monophasic tumors ( Fig. 1 , median 41% vs 5%, P = .004). By RECIST 1.1, 25% (n = 2) of patients with biphasic tumors achieved a partial response and 75% (n = 6) had stable disease. Most patients with monophasic tumors (n = 6, 86%) had stable disease, whereas 1 patient experienced progressive disease (14%) according to the RECIST criteria. In contrast to the volumetric response, percent necrosis at the time of surgical resection after preoperative RT was significantly higher in monophasic tumors than in biphasic tumors (median 33% vs 10%, P = .04). However, only one monophasic tumor (14%) achieved ≥95% necrosis. Across all patients treated with preoperative RT, there was a nonsignificant negative correlation between volumetric decrease and pathologic necrosis (ρ = −0.43, P = .21). For example, 1 monophasic tumor increased in volume by 24% ( Fig. 2 A, B) and had 40% necrosis by pathology, and 1 biphasic tumor decreased in volume by 84% but no necrosis was noted in the pathology report ( Fig. 2 C-F). Survival analysis Of the 50 patients, 8% (n = 4) of patients experienced local recurrence and 54% (n = 27) experienced distant recurrences. The 4 patients who experienced local recurrence had tumors with monophasic histology, and 3 patients (75%) received postoperative RT. Patients with biphasic tumors had significantly better PFS ( P = .003, Fig. 3 ) and OS ( P = .009, Fig. 4 ). The median PFS for monophasic SS was 2.29 years (CI, 1.13-3.47 years) and <50% of patients with biphasic SS experienced progression. 
The median OS of the monophasic cohort was 7.87 years (CI, 4.77-NA years) and not reached for biphasic SS. The upper limits for OS could not be calculated because of a lack of events. On multivariable analysis, age as a continuous variable (HR, 1.04; 95% CI, 1.00-1.08; P = .031) and monophasic histology (HR, 5.65; 95% CI, 1.78-17.91; P = .003) were associated with worse PFS ( Table 2 ). On multivariable analysis, larger tumor size (HR, 1.17; 95% CI, 1.02-1.34; P = .026) was significantly associated with decreased OS ( Table 3 ). There was a trend toward monophasic histology having worse OS (HR, 5.28; 95% CI, 0.94-29.70; P = .059). Discussion Numerous studies have evaluated prognostic factors for SS, but there are no prior reports assessing tumor response to RT by histologic subtype. Our study found a nonsignificant negative correlation between volumetric response and pathologic response, suggesting that pathologic necrosis may not identify all patients who respond favorably to preoperative RT. In soft tissue sarcomas, the clinical significance of pathologic necrosis after neoadjuvant RT is controversial, with some studies finding no association between pathologic necrosis and outcomes 20 , 21 and other studies reporting a correlation between favorable pathologic response (defined as ≥95% necrosis) and improved survival. 19 , 22 , 23 Although monophasic tumors had a significantly better pathologic response, the median percent necrosis was only 33%, and a pathologic complete response was only observed in 1 tumor. In contrast, biphasic tumors had an improved volumetric response to RT, superior PFS, and a trend toward improved OS on multivariable analysis compared with monophasic tumors.
There is ongoing debate of whether RT should be delivered pre- or postoperatively; although lower rates of wound complications are reported for patients undergoing postoperative RT, preoperative RT often treats smaller target volumes with lower doses, resulting in decreased long-term toxicity. 24 , 25 Therefore, identifying patients who may benefit from preoperative radiation is critical. In our study, biphasic tumors had a greater median volumetric decrease after preoperative RT than monophasic tumors (41% vs 5%, P = .004). The substantial decrease in tumor volume suggests that preoperative RT could improve the ability to resect biphasic tumors. Preoperative RT has been associated with higher rates of R0 resections, 26 , 27 and in our study, all patients with biphasic tumors who underwent preoperative RT achieved negative surgical margins. Several studies have evaluated the association of histologic subtypes with survival outcomes with mixed results. Hajdu et al 28 reported biphasic sarcomas had significantly better 5-year survival than monophasic sarcomas (55% vs 34%). Another study by Cagle et al 8 reported that 86% of patients with biphasic, highly glandular tumors did not experience progression at 36 months compared with 38% of patients with low glandular and/or monophasic tumors. Singer et al 11 found the biphasic subtype trended toward more favorable survival outcomes than monophasic tumors. In contrast, a study by Lewis et al 2 that included 112 patients did not find a statistically significant difference in PFS and tumor-related mortality at 5 years. In our series, we found biphasic tumors had greater PFS and a trend toward improved OS than monophasic tumors by multivariable analysis. The discrepancy between our patient cohort and the aforementioned study may be attributed to the treatment characteristics; in our study, all patients received RT compared with 46% of the patients in the report by Lewis et al.
2 If the association between histologic subtype and outcomes can be confirmed in additional cohorts, future prospective trials could investigate changes in radiation dose based on SS subtype. 2 Because of the rarity of SS, our study has a limited number of patients. Given the retrospective nature of our study, we were unable to assess long-term radiation complications. Although risk-stratifying by histologic subtype is a useful tool, the type of gene fusion (SS18::SSX1 vs SS18::SSX2) has been suggested to be prognostic. In our study, a minority of patients were tested for the type of gene fusion, which prevented further analysis. In patients with a positive margin after neoadjuvant RT, a boost can be delivered and may be a potential confounder of outcomes. A notable strength of our study was treatment with modern radiation and surgical techniques in contrast with publications comparing biphasic and monophasic subtypes published before 1990. 4 Conclusion In our patient cohort, biphasic tumors exhibited significantly improved volumetric response and improved outcomes compared with monophasic tumors. These findings underscore the significance of histology in tailoring treatment strategies for patients. Disclosures Everett J. Moding has served as a paid consultant for Guidepoint and GLG. The other authors declare that they have no financial interests/personal relationships, which may be considered as potential competing interests.
|
[
"AYTEKIN",
"LEWIS",
"RAJWANSHI",
"LADANYI",
"CLARK",
"FERRARI",
"GAZENDAM",
"CAGLE",
"KRALL",
"KAWAI",
"SINGER",
"SALERNO",
"EILBER",
"ROSEN",
"YANG",
"RHOMBERG",
"NAING",
"EISENHAUER",
"EILBER",
"HAAGENSEN",
"DELANEY",
"MULLEN",
"DONAHUE",
"PALM",
"OSULLIVAN",
"KARASEK",
"GINGRICH",
"HAJDU"
] |
c4c38f83676e477f93182469b2c0a6e6_Evaluating the extrapolation potential of random forest digital soil mapping_10.1016_j.geoderma.2023.116740.xml
|
Evaluating the extrapolation potential of random forest digital soil mapping
|
[
"Hateffard, Fatemeh",
"Steinbuch, Luc",
"Heuvelink, Gerard B.M."
] |
Spatial soil information is essential for informed decision-making in a wide range of fields. Digital soil mapping (DSM) using machine learning algorithms has become a popular approach for generating soil maps. DSM capitalises on the relation between environmental variables (i.e., features) and a soil property of interest. It typically needs a training dataset that covers the feature space well. Mapping in areas where there are no training data is challenging, because extrapolation in geographic space often induces extrapolation in feature space and can seriously deteriorate prediction accuracy. The objective of this study was to analyse the extrapolation effects of random forest DSM models by predicting topsoil properties (OC, clay, and pH) in four African countries using soil data from the ISRIC Africa Soil Profiles database. The study was conducted in eight experiments whereby soil data from one or three countries were used to predict in the other countries. We calculated similarities between donor and recipient areas using four measures, including soil type similarity, homosoil, dissimilarity index by area of applicability (AOA), and quantile regression forest (QRF) prediction interval width. The aim was to determine the level of agreement between these four measures and identify the method that had the strongest agreement with common validation metrics. The results indicated a positive correlation between soil type similarity, homosoil and dissimilarity index by AOA. Surprisingly, we observed a negative correlation between dissimilarity index by AOA and QRF prediction interval width. Although the cross-validation results for the trained models were acceptable, the extrapolation results were unsatisfactory, highlighting the risk of extrapolation. Using soil data from three countries instead of one increased the similarities for all measures, but it had a limited effect on improving extrapolation. 
Also, none of the measures had a strong correlation with the validation metrics. This was particularly disappointing for AOA and QRF, which we had expected to be strong indicators of extrapolation prediction performance. Results showed that homosoil and soil type methods had the strongest correlation with validation metrics. The results for this case study revealed limitations of using AOA and QRF as measures of extrapolation effects, highlighting the importance of not relying on these methods blindly. Further research and more case studies are needed to address the effects of extrapolation of DSM models.
|
1 Introduction Spatial soil information in the form of maps is essential in detailed soil quality assessments, sustainable land management, and precision agriculture studies ( Lagacherie and McBratney, 2006 ). Nowadays, soil maps are most often made by digital soil mapping (DSM), where machine learning (ML) is a frequently used mapping algorithm. Machine learning first captures the relation between environmental variables and the soil property of interest using training data and then uses this relation to spatially predict the soil property from maps of the environmental variables ( McBratney et al., 2003 ). Advances in remote sensing provide increasingly detailed spatial information on environmental variables ( Yang et al., 2011; Asgari et al., 2020 ). The successful application of a data-driven technique such as ML also requires fairly large training datasets. Moreover, the training data should cover the feature space well, meaning that ranges and combinations of environmental variables present in the study area are adequately represented in the training data set ( Minasny and McBratney, 2010; Ng et al., 2018; Wadoux et al., 2019; Hateffard and Novák, 2021 ). The choice of ML algorithm also matters since the algorithm should be able to learn the complex relationship between environmental covariates and soil properties from the data. Among different ML techniques, random forest has proven its applicability in spatial prediction of soil properties in several studies ( Ließ et al., 2012; Vaysse and Lagacherie, 2015; Kinoshita et al., 2016; Hengl et al., 2018 ). In practice, field surveys, soil sampling and laboratory analyses are expensive; therefore, legacy soil data are often used in DSM studies ( Tan, 1995; Arrouays et al., 2020 ). The sampling density can vary strongly between regions and large parts of the study area might not be represented in the training data or have low sampling density ( Minasny et al., 2020 ).
Mapping in such areas is challenging when there are no resources to collect new soil samples. In such cases, spatial extrapolation, i.e. using soil data from one area to predict in another area, might be a potential solution. But extrapolation can amplify prediction uncertainty and should ideally be applied in areas with similar soil forming factors. One might expect that soils with similar soil-forming factors will likely have similar soil conditions ( Jenny, 1994 ). Spatial extrapolation likely works well if a model is developed with data from an area that has good coverage of the soil forming factors ( Afshar et al., 2018; Neyestani et al., 2021 ), but in practice the training data from one area might not cover the feature space of another area well. In other words, extrapolation in geographical space might lead to extrapolation in feature space. If an ML model is employed where the feature space between the two areas differs considerably, it may produce inaccurate and unreliable predictions ( Meyer and Pebesma, 2021 ). This is particularly relevant in the case of continental and global mapping of soil properties (e.g. Arrouays et al. (2014), Batjes et al. (2020) and Poggio et al. (2021) ). These considerations have led to the development of the concept of “Area of Applicability (AOA)” ( Meyer and Pebesma, 2021 ), which calculates a dissimilarity index between covariates in the training data and covariates at prediction locations and delineates the area where extrapolation in feature space occurs. Based on AOA, we should only predict in regions with conditions similar to those seen by the model. Apart from AOA, there are also other metrics to investigate the degree of extrapolation in DSM. For example, Mallavan et al. (2010) introduced the homosoil method as a helpful way to decide which areas have soils similar to a source area.
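The AOA dissimilarity index described above can be sketched in a simplified form. This is not the reference implementation: the original method additionally weights features by variable importance and derives the AOA threshold from cross-validation, both omitted here; the simulated donor and recipient feature clouds are assumptions.

```python
import numpy as np

def dissimilarity_index(train, new):
    """Simplified dissimilarity index in the spirit of Meyer & Pebesma (2021):
    standardize features by the training data, then divide each new point's
    distance to its nearest training point by the mean pairwise distance
    among training points. Large values indicate feature-space extrapolation."""
    mu, sd = train.mean(axis=0), train.std(axis=0)
    tr = (train - mu) / sd
    nw = (new - mu) / sd
    # Mean pairwise distance among training points (the normalizing constant)
    diffs = tr[:, None, :] - tr[None, :, :]
    pair = np.sqrt((diffs ** 2).sum(-1))
    d_bar = pair[np.triu_indices(len(tr), k=1)].mean()
    # Distance from each new point to its nearest training point
    d_min = np.sqrt(((nw[:, None, :] - tr[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d_min / d_bar

rng = np.random.default_rng(42)
train = rng.normal(0.0, 1.0, size=(200, 3))    # donor-area feature space
inside = rng.normal(0.0, 1.0, size=(50, 3))    # recipient area with similar features
outside = rng.normal(5.0, 1.0, size=(50, 3))   # recipient area with dissimilar features
di_in = dissimilarity_index(train, inside)
di_out = dissimilarity_index(train, outside)
```

Points drawn from the same feature distribution as the donor area receive a much lower index than points from a shifted distribution, mirroring how AOA flags recipient areas where extrapolation occurs.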
As long as the source area sufficiently captures the environmental heterogeneity and the soil-forming factors are similar to those in the prediction area, a model trained in the source area is judged useful for extrapolation ( Bui and Moran, 2003; Nenkam et al., 2022 ). Alternatively, taking into account that the soil conditions are summarised by soil type, comparison of soil type maps between the source and prediction area is also informative about the extrapolation potential ( Angelini et al., 2020 ). These methods, homosoil and soil types, are alternative tools to AOA to evaluate whether extrapolation in geographic space is feasible. If extrapolation in geographic space leads to extrapolation in feature space then this will likely also show up in the prediction uncertainty as quantified by some ML methods. Uncertainty estimation of soil maps through quantile regression forests (QRF) ( Meinshausen and Ridgeway, 2006 ) provides quantiles of the conditional distribution from which prediction intervals can be derived. Thus a map of the prediction interval width (PIW) can be produced as a by-product of QRF, by subtracting the lower from the upper quantile for any point in the area of interest ( Zhang et al., 2019 ). Areas where the PIW is larger than a threshold could be considered too uncertain to be mapped ( Vaysse and Lagacherie, 2017 ). It would be interesting to evaluate to what degree these areas overlap with extrapolation areas identified by the AOA method. If the two methods have strong agreement, then QRF might be an easier way to evaluate which areas can and cannot be predicted using a model that was trained in a specific area. In previous studies, different researchers have applied different extrapolation methods between two similar areas for mapping soil classes and properties ( Grinand et al., 2008; Malone et al., 2016; Zhang et al., 2018; Du et al., 2021 ). Malone et al. 
(2016) evaluated the similarity of the environment between the donor and recipient areas utilising the homosoil approach by quantifying a taxonomic distance measure and then extrapolated the model from one region to another. Afshar et al. (2018) investigated the similarity index between two areas by Gower’s similarity index and applied a multinomial logistic regression model to estimate soil great groups. They found that the extrapolation was successful within the recipient area up to 60% prediction accuracy. Angelini et al. (2020) applied Structural Equation Modelling as a technique that includes expert knowledge to analyse the capability to extrapolate a model from one area to another. They concluded that quantifying all soil-environment interactions over time is still challenging, and that we need a better understanding of these aspects. Nenkam et al. (2022) challenged the possibility of extrapolation in areas assumed to be similar based on the homosoil approach, and compared the results with existing global maps. They found that extrapolation in geographic space is feasible, however the accuracy can be improved if local data are included in the training dataset. The review above shows that there are many different ways to determine the potential of extrapolating DSM models trained in one area to other areas. These methods include homosoil, soil type similarity, dissimilarity index by AOA, and QRF prediction interval width. The objective of this study was to investigate which method has the strongest agreement with statistical validation metrics computed from data in the prediction area. Based on such analysis we aimed to gain insight into which similarity metrics are the best indicators of whether spatial extrapolation occurs and leads to poorer prediction performance. 
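Gower's similarity index, used by Afshar et al. as mentioned above, handles the mixed numeric and categorical variables typical of soil data. The following is a generic textbook-style sketch, not Afshar et al.'s implementation; the soil variables and their ranges are assumptions.

```python
def gower_similarity(a, b, ranges):
    """Gower's similarity for mixed-type records: numeric components score
    1 - |a - b| / range, categorical components score 1 if equal else 0,
    and the per-variable scores are averaged. `ranges` gives the observed
    range for numeric variables and None for categorical ones."""
    scores = []
    for x, y, r in zip(a, b, ranges):
        if r is None:                        # categorical variable
            scores.append(1.0 if x == y else 0.0)
        else:                                # numeric variable
            scores.append(1.0 - abs(x - y) / r)
    return sum(scores) / len(scores)

# Two hypothetical soil profiles: (clay %, pH, drainage class),
# with assumed observed ranges of 60 clay-% units and 4 pH units.
sim = gower_similarity((30.0, 6.5, "well"), (40.0, 7.0, "well"),
                       ranges=(60.0, 4.0, None))
```

The index lies in [0, 1], reaching 1 only for identical records, which makes it convenient for comparing donor and recipient soil observations on a common scale.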
To achieve the objective, we: (1) estimated the similarity of soil forming factors between donor and recipient areas by using the soil types and homosoil approaches; (2) trained a RF model on data from a donor area, extrapolated it to a recipient area, and computed dissimilarity index by AOA and QRF prediction interval width; and (3) evaluated the agreement between the four “measures of similarity” and common statistical validation metrics, computed using independent data from the recipient area. We performed the tasks above by means of a case study. We selected four African countries and used data from the ISRIC Africa Soil Profiles (AfSP) database ( Leenaars et al., 2013 ) to train DSM models and evaluate their performance, using different combinations of countries as donor and recipient areas. For reasons explained later, we consider organic carbon (OC) content, clay content and pH as the soil properties of interest. 2 Materials and methods 2.1 Study area We selected four African countries as our study area: Ethiopia, Kenya, Burkina Faso, and Nigeria ( Fig. 1 ). The reasons for selecting these countries were twofold: first, we wanted similar and dissimilar countries to assess different degrees of extrapolation; second, we required that there were sufficient soil samples in a public database for each country and that the data had a fairly uniform spatial distribution across each country. Kenya and Ethiopia are located in the same region in North-East Africa and share comparable climates with hot arid lowlands and cool moist highlands. In Kenya, the climatic conditions range from humid in the west to arid in the east and north. In Ethiopia, the southeast and northeast regions have a warm desert climate, primarily in the lowlands, while the central and western highlands have a humid subtropical and tropical savanna climate. 
Nigeria and Burkina Faso have similarities in terms of climate, with hot and humid tropical conditions in the south of Nigeria, sub-humid savanna conditions in the south of Burkina Faso, and arid and semi-arid conditions in the north. The far northern parts of both countries are mainly desert areas with sparse vegetation. Humidity increases southwards together with more abundant vegetation. Apart from these similarities, each country also has its own specific climate; Nigeria, for example, also has coastal conditions with a tropical monsoon climate ( https://climateknowledgeportal.worldbank.org/ ). In terms of topography, most of Ethiopia is covered by the Ethiopian Highlands, which are characterised by undulating plateaus dissected by steep slopes and deep valleys. The lowlands of Ethiopia are located in the east and southeast and in a narrow strip through the centre. Elevation differences in Ethiopia are large, with peaks up to 4411 m and a lowest point at 125 m below sea level. Kenya has several mountains and large plateaus as well as large lowland plains. The central parts of Nigeria are dominated by rolling hills and high plains, while the country’s northern regions are characterised by relatively flat plains. Burkina Faso has a relatively flat, slightly undulating landscape where the maximum elevation difference is about 700 m ( Jones et al., 2013 ). Regarding soil types ( Panagos et al., 2012 ), Kenya has the largest soil diversity, with fertile volcanic soils in the western highlands, and sandy and rocky soils dominating the eastern lowlands. In Ethiopia, around one-third of the soils are shallow over hard bedrock, especially in the mountainous parts in the north and most of the lowlands in the east. Fertile to very fertile soils, including Luvisols and Nitisols as well as the less well drained and less cultivable Vertisols, can be found in much of the highlands, although these areas are exposed to erosion.
In the north and centre of Nigeria, easily erodible sandy and loamy soils of low fertility occur, like Arenosols and Lixisols, while in the south there are deep red clayey soils with a well-developed structure and high productivity (Nitisols). Burkina Faso has less variety in soil types compared to the other three countries. In general, the soils in Burkina Faso are loamy, gravelly and often shallow, with low fertility (Lixisols and Plinthosols). In the northern parts of the country, the soils are exposed to degradation and desertification. Nevertheless, some parts of Burkina Faso have fertile clayey soils that are suitable for agriculture (Luvisols), such as on the foot slopes of metamorphic hills and the plains near the main rivers (see Table SM-1 in the Supplementary Materials for an overview of soil types per country). Kenya and Ethiopia have a diverse land cover which includes cropland, shrubland, grassland, and forests on complex terrain. The highlands in Ethiopia are covered by forests and grasslands, while arid and semi-arid areas in the lowlands are covered by scrub vegetation or are bare. The land cover in Kenya is largely covered by savannas characterised by grasslands mixed with scattered trees. The main land cover types in Nigeria and Burkina Faso are shrubland and grassland, and forests in the south of Nigeria, with croplands and grazing lands mainly occurring on the relatively lower parts of the undulating landscapes. 2.2 Soil data and covariates The ISRIC Africa Soil Profiles (AfSP) database ( Leenaars et al., 2014 ) includes a compilation of nearly 18,000 soil profiles from various digital and analogue data sources covering most parts of Africa. We chose pH-H2O, Organic Carbon (OC) content, and Clay content as the target soil properties for modelling and mapping. These are important soil properties and had a sufficiently large sample size for all four countries ( Table 1 ). 
As we only focus on topsoil characteristics, the selected depth interval was 0 to 20 cm. Since the AfSP observations contain different depth intervals, the observations were harmonised by taking a weighted average if there were multiple observed layers within the 0–20 cm depth interval. If less than 15 cm of the selected depth interval was covered by the observations at a location, that location was ignored. Summary statistics of the three soil properties are given in Table 1 . A set of 35 environmental covariates that represent soil forming factors was used, including covariates representing climate conditions, topography, and vegetation (Table SM-2 in Supplementary Materials). In addition, 14 covariates were extracted from the Digital Elevation Model, the primary representation of topography, using the RSAGA package ( Brenning et al., 2018 ) in R ( R Core Team, 2021 ). All covariates were resampled to a 1 km spatial resolution. 2.3 Experimental set-up To investigate the effects of extrapolation, we used the following set-up: (1) train the model on all data from each country individually, and predict to that country and the other three countries; (2) train the model on data from three countries, and predict to these three countries and the fourth. Thus, in total, we had eight models for each of the three soil properties. A country, or a combination of countries, that is used for calibration is indicated as the “donor” area; a country or countries that is extrapolated into is indicated as a “recipient” area. Predictions of target soil properties were based on the random forest algorithm, calibrated using the caret package. Random forest (RF; Breiman, 1996 ) fits many decision trees, independent from each other, with a random sample of covariates considered at each splitting node. In this study, we used the default hyperparameter values of the RF model in our experiments.
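The depth-weighted harmonisation described above can be sketched as follows; this is a minimal illustration with a hypothetical layer format, not the actual AfSP processing code.

```python
def harmonise_topsoil(layers, top=0, bottom=20, min_cover=15):
    """Depth-weighted average of layer values over [top, bottom] cm.

    `layers` is a list of (upper_cm, lower_cm, value) tuples for one location.
    Returns None when less than `min_cover` cm of the interval is observed,
    mirroring the rule that such locations are ignored.
    """
    weighted_sum, covered = 0.0, 0.0
    for upper, lower, value in layers:
        # Overlap of this layer with the target depth interval
        overlap = max(0.0, min(lower, bottom) - max(upper, top))
        weighted_sum += overlap * value
        covered += overlap
    if covered < min_cover:
        return None
    return weighted_sum / covered

# Two layers: 0-10 cm with OC 20 g/kg and 10-20 cm with OC 10 g/kg
print(harmonise_topsoil([(0, 10, 20.0), (10, 20, 10.0)]))  # -> 15.0
# Only 0-5 cm observed: less than 15 cm of the interval covered, so discarded
print(harmonise_topsoil([(0, 5, 20.0)]))                   # -> None
```

Layers extending below 20 cm contribute only the part that overlaps the target interval.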
Cross-validation was employed to evaluate and compare the performance of each RF model for donor areas. Here, we applied 10-fold cross-validation. In this validation method, the dataset is randomly divided into ten folds of similar size, and each time one of the folds is kept aside and used for validation of predictions made with calibration data from the other nine folds. This procedure was repeated ten times so that each fold was used exactly once for validation. Next, the mean error (ME), root mean square error (RMSE) and model efficiency coefficient (MEC) ( Nash and Sutcliffe, 1970 ) accuracy metrics were computed:

(1) $\mathrm{ME} = \frac{1}{n}\sum_{i=1}^{n}(P_i - O_i)$

(2) $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(P_i - O_i)^2}$

(3) $\mathrm{MEC} = 1 - \frac{\sum_{i=1}^{n}(O_i - P_i)^2}{\sum_{i=1}^{n}(O_i - \bar{O})^2}$

where $n$ is the number of observations, $O_i$ the observed value at the $i$th location, $P_i$ the predicted value at the $i$th location, and $\bar{O}$ the mean of the observations. In recipient areas we used all data from that area for validation, since none of them were used for model calibration. For both the calibration and validation of each model, we utilised all available data, without selectively choosing points that fall within similar areas. 2.4 Measures of extrapolation We used four methods to characterise the degree of extrapolation, as described in the four subsections below. 2.4.1 Similarity in soil types Based on the Soil Atlas of Africa ( Panagos et al., 2012 ), which contains the dominant WRB reference soil types represented as spatial polygons, we first calculated the percentage of each soil type in each country.
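The accuracy metrics of Eqs. (1)–(3) are straightforward to compute; a minimal NumPy sketch (the study itself worked in R):

```python
import numpy as np

def me(obs, pred):
    """Mean error, Eq. (1): average systematic bias of the predictions."""
    return float(np.mean(np.asarray(pred) - np.asarray(obs)))

def rmse(obs, pred):
    """Root mean square error, Eq. (2)."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def mec(obs, pred):
    """Model efficiency coefficient, Eq. (3): 1 is perfect; 0 means no better
    than predicting the mean of the observations; negative is worse."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2))

obs = np.array([10.0, 20.0, 30.0])
pred = np.array([12.0, 18.0, 33.0])
print(me(obs, pred), rmse(obs, pred), mec(obs, pred))
```

A small ME relative to the RMSE indicates that systematic errors are minor compared to random errors, the pattern reported for the within-country cross-validation results.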
Next we assessed the similarity between the soil types of two countries using the Jaccard measure of similarity ( Awad and Khanna, 2015 , page 36) by accumulating the minimum percentages of each combination of two countries sharing the same soil type:

(4) $\mathrm{Sim}_{ij} = \sum_{k \in K} \min(A_{ik}, A_{jk})$

with $\mathrm{Sim}_{ij}$ the Jaccard similarity measure between countries $i$ and $j$; $K$ the set of all soil types; and $A_{ik}$ and $A_{jk}$ the proportion of the area of soil type $k$ in countries $i$ and $j$, respectively. The Jaccard similarity is a number between 0 and 1, where a value of 0 means no similarity and a value of 1 means perfect similarity. Additionally, we calculated the Jaccard measure of similarity taking also the taxonomic distance between soil types into account. The taxonomic distance quantifies the degree of similarity between soil types. We used the taxonomic distances specified in Minasny et al. (2010) , which first assigns 21 binary key features such as “calcareous” or “accumulation of silica” to each soil type, and next computes the Euclidean distance between all soil types in the resulting – 21-dimensional – key feature space. Finally, these relative distances are scaled to values between 0 and 1 to obtain the taxonomic distance. For every soil type combination, except for any soil type with itself, we calculated the smallest shared proportion between two countries, as also done for the Jaccard similarity, and next we multiplied this proportion with $(1 - \text{taxonomic distance})$ (because we want to express the similarity, not the dissimilarity). We added the results for all combinations and divided the outcome by the same sum in the theoretical situation that all soil type combinations have zero taxonomic distance (i.e., when there is maximal similarity between all soil types):

(5) $a_{ij} = \frac{\sum_{m \in K} \sum_{n \in K, n \neq m} \min(A_{im}, A_{jn}) \cdot (1 - TD_{mn})}{\sum_{m \in K} \sum_{n \in K, n \neq m} \min(A_{im}, A_{jn})}$

with $a_{ij}$ an addition factor for the similarity measure based on taxonomic distance (a number between 0 and 1, representing minimal and maximal additional similarity, respectively) and $TD_{mn}$ the taxonomic distance between soil types $m$ and $n$. Finally, the outcome of Eq. (5) was rescaled so that the similarity with taxonomic distance, $\mathrm{SimTDsc}_{ij}$, lies between $\mathrm{Sim}_{ij}$ and 1, in such a way that if all taxonomic distances were maximal, $\mathrm{SimTDsc}_{ij}$ would equal $\mathrm{Sim}_{ij}$, and if all taxonomic distances were minimal, $\mathrm{SimTDsc}_{ij}$ would equal one:

(6) $\mathrm{SimTDsc}_{ij} = \mathrm{Sim}_{ij} + a_{ij}(1 - \mathrm{Sim}_{ij})$.

2.4.2 Homosoil fraction An alternative approach to quantify similarity between areas is the homosoil approach, developed by Mallavan et al. (2010) . The underlying theory of this method is based on the taxonomic distance ( Booth et al., 1987 ) of the environmental covariates between the donor and recipient areas, where these covariates represent key soil-forming factors. Mallavan et al. (2010) created a spatial database of environmental variables at the global scale, including climate, topography, and lithology/parent material. The method calculates Gower’s similarity index at three hierarchical levels, first by selecting the areas with similar climate conditions (homoclime), then choosing the same lithological classes within homoclime areas (homolith), and last by deriving the similar topography (homotop) in previously selected homoclime and homolith areas.
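A minimal sketch of the soil-type similarity computations of Eqs. (4)–(6); the country compositions and taxonomic distances below are hypothetical stand-ins for the Soil Atlas proportions and the Minasny et al. (2010) distances:

```python
def jaccard_similarity(a, b):
    """Eq. (4): sum of minimum shared proportions per soil type."""
    return sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in set(a) | set(b))

def taxdist_similarity(a, b, td):
    """Eqs. (5)-(6): add similarity credit for different but related types.

    `td[(m, n)]` is the taxonomic distance (0..1) between soil types m and n.
    """
    num = den = 0.0
    types = sorted(set(a) | set(b))
    for m in types:
        for n in types:
            if m == n:
                continue
            shared = min(a.get(m, 0.0), b.get(n, 0.0))
            num += shared * (1.0 - td[(m, n)])
            den += shared
    a_ij = num / den if den > 0 else 0.0   # Eq. (5)
    sim = jaccard_similarity(a, b)
    return sim + a_ij * (1.0 - sim)        # Eq. (6)

# Hypothetical area proportions per soil type and pairwise taxonomic distances
bf = {"Lixisol": 0.6, "Plinthosol": 0.4}
ng = {"Lixisol": 0.5, "Nitisol": 0.5}
td = {("Lixisol", "Plinthosol"): 0.3, ("Plinthosol", "Lixisol"): 0.3,
      ("Lixisol", "Nitisol"): 0.5, ("Nitisol", "Lixisol"): 0.5,
      ("Plinthosol", "Nitisol"): 0.7, ("Nitisol", "Plinthosol"): 0.7}
print(jaccard_similarity(bf, ng))          # plain similarity: 0.5
print(taxdist_similarity(bf, ng, td))      # always >= the plain similarity
```

As Eq. (6) requires, the result collapses to the plain Jaccard similarity when all taxonomic distances are 1 and to full similarity (1) when they are all 0.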
We applied this method to identify similarity in terms of soil forming factors between locations in the donor and recipient countries. The assumption is that if the soil forming factors are similar, the two locations are “homosoil” and have similar soils. In this study, for each donor pixel, we calculated a map layer of the recipient country indicating the homosoil pixels. Those map layers are combined into one final map where each pixel indicates if it is homosoil in at least one of the map layers. From this final map, the “homosoil similarity” was calculated as the fraction of the surface in the recipient area which is homosoil to at least one location (or grid cell) in the donor area. Note that unlike the soil type similarities, the homosoil fraction is asymmetric, as are the similarity measures discussed in the next subsections. 2.4.3 Dissimilarity index by AOA Area of Applicability (AOA) is a solution to prevent extrapolation issues in machine learning models proposed by Meyer and Pebesma (2021) . It limits predictions to areas where the covariates are similar to the covariates at training locations. It works by first computing a dissimilarity index between donor and recipient locations using distances in covariate space between the two locations, and weighting covariates according to their importance in the machine learning model , trained on all data from the donor area. Next a threshold is applied whereby all prediction locations with a dissimilarity index below the threshold are assigned to the AOA. The AOA function, which is implemented in the CAST package in R ( Meyer et al., 2023 ), has two output layers: the dissimilarity index (DI) and the area of applicability (AOA). DI can take any value between 0 and infinity, where larger values indicate a larger dissimilarity. The AOA layer has only two values, 0 and 1, where 1 indicates that a location belongs to the AOA, and 0 that it does not. In this study, we only used the DI layer. 
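The DI idea can be illustrated with a simplified sketch: distances in standardised, importance-weighted covariate space, scaled by the mean pairwise distance within the training data. This is a loose approximation of the CAST implementation, not a replica (CAST, for instance, takes the weights from the fitted model and derives the AOA threshold from cross-validation folds):

```python
import numpy as np

def dissimilarity_index(train_X, new_X, importance):
    """Importance-weighted DI, loosely following Meyer and Pebesma (2021).

    DI = (distance of a prediction point to its nearest training point in
    weighted covariate space) / (mean pairwise distance within the training
    data). Larger values indicate stronger extrapolation in feature space.
    """
    mu, sd = train_X.mean(axis=0), train_X.std(axis=0)
    w = np.asarray(importance, dtype=float)
    w = w / w.sum()
    t = (train_X - mu) / sd * w                 # standardised, weighted donors
    p = (new_X - mu) / sd * w                   # same transform for recipients
    d_train = np.sqrt(((t[:, None, :] - t[None, :, :]) ** 2).sum(axis=-1))
    d_bar = d_train[np.triu_indices(len(t), k=1)].mean()
    d_new = np.sqrt(((p[:, None, :] - t[None, :, :]) ** 2).sum(axis=-1)).min(axis=1)
    return d_new / d_bar

rng = np.random.default_rng(1)
train = rng.normal(0, 1, size=(200, 4))         # donor-area covariates
inside = rng.normal(0, 1, size=(50, 4))         # similar conditions
outside = rng.normal(5, 1, size=(50, 4))        # dissimilar conditions
di_in = dissimilarity_index(train, inside, [1, 1, 1, 1])
di_out = dissimilarity_index(train, outside, [1, 1, 1, 1])
print(di_in.mean(), di_out.mean())              # extrapolation -> larger DI
```

Here the importance weights are passed in by hand and set equal; in practice they would come from the trained random forest.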
To speed up calculations, we reduced the resolution of the covariate data by a factor of 10 before applying the AOA. 2.4.4 QRF prediction interval width Finally, we also used QRF to compute the width of 90% prediction intervals ( Meinshausen and Ridgeway, 2006 ) at all prediction locations in a recipient area, when using a model trained on data from the donor area. The 90% PIW is calculated by subtracting the 0.05 quantile from the 0.95 quantile. In the case of extrapolation, the prediction intervals are expected to be wider. To speed up calculations we reduced the resolution of the covariate data, as previously done in Section 2.4.3 . 3 Results 3.1 Similarities in soil types Before fitting and applying the random forest models and evaluating the extrapolation potential using AOA and QRF prediction interval widths, similarities between the four countries and their combinations were first checked in terms of soil types and homosoil. Table 2 presents the similarities regarding soil types in the four countries. Burkina Faso and Nigeria had the highest soil type similarity, both for plain Jaccard similarity and for similarity that accounts for taxonomic distance. The lowest similarity was obtained between Burkina Faso and Kenya, with a value of 26.0%. Countries tend to have more similar soil types if they are from the same region (West or East Africa), although Kenya and Nigeria also have fairly high similarities. Incorporating taxonomic distance slightly increased the similarity between countries, which is due to different soil class combinations having a taxonomic distance smaller than the maximum. Generally speaking, “similarity while taking taxonomic distance into account” was 20% higher than “plain similarity”. The biggest difference between plain similarity and similarity accounting for taxonomic distance was for Burkina Faso and Ethiopia, which indicates that these countries benefit most from sharing common factors that influence soil formation.
Combining soil type from three countries, the highest similarity in soil types was observed when Kenya was the recipient country, for both plain similarity and considering taxonomic distance. Remarkably, the inclusion of soil type data from three countries and the incorporation of taxonomic distance into the similarity calculation resulted in considerably higher values of 75% to 86% compared to experiments that relied on plain similarity or data from a single country. 3.2 Homosoil The homosoil method assesses similarity between areas based on their similarity of soil forming factors. Table 3 shows the homosoil scores. Recall that we calculated the homosoil scores in two ways: (1) one country is the donor and all other countries are recipients; (2) three countries are the donor and the fourth country is a recipient. When Kenya is the donor country, the homosoil scores for Ethiopia (41%) and Nigeria (36%) are high, while the score for Burkina Faso is low. According to the homosoil concept Burkina Faso is quite different from the other three countries, because all have low homosoil scores if Burkina Faso is the donor country. Table 3 also shows as expected that having three countries as a donor increases the possibility of finding more similar soil forming factors in the recipient country. This is shown by the higher homosoil scores. Note, however, that combination of three countries to find similar soils in Burkina Faso still has a low score, lower than when Kenya is a single donor of Ethiopia and Nigeria and when Nigeria is a donor of Kenya. 3.3 Machine learning model and dissimilarity index by AOA To be able to compute the dissimilarity index and AOA, we first needed to train a random forest model for each experiment and soil property. The performance of the random forest model with default hyperparameter values and using 10-fold cross-validation is presented in Table 4 . 
The MEC showed that the model explained between 30 to 59% of clay and OC variation, while the MEC for pH ranged from 50 to 70%, revealing a greater prediction accuracy. The MEC values for clay and pH in the case of Burkina Faso indicated poor predictions (18% and 15%, respectively). When combining the dataset for three countries, the model’s performance generally improved, with the highest accuracy observed for the combination of Ethiopia, Nigeria, and Kenya for all three properties. The ME values of all soil properties showed that these were negligibly small compared to the RMSE, indicating that systematic prediction errors were substantially smaller than random prediction errors. The trained models for each experiment and soil property were used to obtain dissimilarity index maps by AOA. Here, we only present results for OC for two cases: (1) Ethiopia as a donor country; (2) Kenya, Burkina Faso, and Nigeria are the donor while Ethiopia is the recipient country ( Fig. 2 ). Results for other experiments and for clay and pH are provided in the Supplementary Materials. Results indicate that the East-African countries are more comparable to one another because they have lower dissimilarity indices when another East-African country is a donor; also, the West-African countries (Nigeria and Burkina Faso) are in more general agreement based on the dissimilarity index. This behaviour was observed for all soil properties (Section 2 in Supplementary Materials). This was confirmed by studying the spatial average of each DI map ( Table 5 ), which shows that if Kenya is the donor and Ethiopia the recipient, or vice versa, the spatial average DI is relatively small. The same applies to Nigeria and Burkina Faso. For instance, when Ethiopia is the donor, the spatial average DI for OC is 0.38 in Ethiopia and 0.57 in Kenya, while for Nigeria and Burkina Faso the DI averages are 1.17 and 1.33, respectively. 
In addition, when Burkina Faso is the donor and other countries are the recipients, the DI is large for all properties, most prominently for pH (Figure SM-11 in Supplementary Materials). When Burkina Faso is the donor, the spatial average DI for pH in Burkina Faso is 0.24, whereas this value increases to 4.16 for Nigeria, to 6.72 for Kenya and to 8.77 for Ethiopia ( Table 5 ). This shows that AOA dissimilarity in other countries is large when Burkina Faso is the donor country. According to the DI maps (Section 2 in Supplementary Materials) and the DI spatial averages ( Table 5 ), combining three countries as donors results in a decline in the mean and range of the DI in all experiments. Fig. 2 .b shows a case where Burkina Faso, Nigeria, and Kenya are the donor countries and Ethiopia is the recipient country. The DI map of Ethiopia shows considerable spatial variation and, in general, a high dissimilarity, especially in the northern part of the country. Fig. 3 shows the density distributions of the DI of the donor country/ies versus the DI of the recipient countries. There is some overlap between the DI distributions of Ethiopia (donor) and Kenya (recipient), but not much with Nigeria (recipient), and nearly none with Burkina Faso (recipient) ( Fig. 3 .a). The DI distribution in the case of Burkina Faso as a donor country is narrower compared to others, whereas the DI distributions of the recipient countries are quite flat (e.g. Figure SM-31, page 15 in Supplementary Materials), meaning that the covariates in Burkina Faso are different from the covariates in the other countries. When the model is trained on data from three countries, the overlap of the DI distributions between the donors and the recipient country increased in all experiments and the dissimilarity decreased. This is visible in Table 5 , where the spatial average DI is markedly reduced by combining the training datasets of three countries.
3.4 Uncertainty and comparison Maps of uncertainty estimates were produced by deriving 90% prediction intervals using the QRF approach. Here also, as mentioned in Section 3.3 , we only present figures for two experiments in the case of OC ( Fig. 4 ); other maps are provided in the Supplementary Materials. Although there were differences in the level of uncertainty, all experiments generally showed a similar spatial pattern of uncertainty between countries from the same region (Section 4 in Supplementary Materials). Ethiopia’s PIW map for OC ( Fig. 4 .a) showed some small pockets of high uncertainty in the eastern parts of the country, while overall the country had narrow prediction intervals, and the model performed well in recipient countries, except for Burkina Faso. This is confirmed by Table 5 , where the mean PIW values for Ethiopia, Kenya, Nigeria, and Burkina Faso are 44, 59, 57, and 84 g kg⁻¹, respectively. In contrast, the PIW map of Nigeria for pH revealed a high prediction performance in the country itself with the exception of some large portions in the north (Figure SM-52 in Supplementary Materials), but the pH model trained on Nigeria data performed extremely poorly in the recipient countries, as shown in Table 5 . The difference between the 0.05- and 0.95-quantile maps with Burkina Faso as donor was large, especially for clay (Figure SM-53 in Supplementary Materials), indicating that the prediction uncertainty was large for the country itself and for other countries. Using datasets of three countries to evaluate the prediction uncertainty in a recipient country revealed that extrapolation was associated with high uncertainty. In some cases, the uncertainty was high in some parts of the donor countries as well. For instance, the PIW map of clay, when Kenya is the recipient and the other countries are donors, showed not only wide PIW for Kenya but also high uncertainty for Ethiopia (Figure SM-58).
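The 90% PIW can be illustrated by re-implementing the Meinshausen (2006) leaf-weighting on synthetic data; the sketch below uses scikit-learn's RandomForestRegressor as a stand-in for an R implementation such as quantregForest, and is illustrative rather than the study's actual workflow:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(400, 3))
y = 2.0 * X[:, 0] + rng.normal(0.0, 1.0, 400)   # known signal plus noise

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
train_leaves = rf.apply(X)                      # leaf id per (sample, tree)

def qrf_interval(x_new, lo=0.05, hi=0.95):
    """Quantiles from forest leaf co-occurrence weights (QRF-style)."""
    leaves = rf.apply(x_new.reshape(1, -1))[0]
    w = np.zeros(len(y))
    for t, leaf in enumerate(leaves):
        in_leaf = train_leaves[:, t] == leaf    # training points sharing the leaf
        w[in_leaf] += 1.0 / in_leaf.sum()
    w /= len(leaves)                            # weights sum to one
    order = np.argsort(y)
    cdf = np.cumsum(w[order])                   # weighted empirical CDF of y
    pick = lambda q: y[order][min(np.searchsorted(cdf, q), len(y) - 1)]
    return pick(lo), pick(hi)

q05, q95 = qrf_interval(np.array([5.0, 5.0, 5.0]))
piw = q95 - q05                                 # the 90% prediction interval width
print(q05, q95, piw)
```

In extrapolation, prediction points fall into heterogeneous leaves, the weighted CDF spreads out, and the PIW widens; that is the behaviour the PIW maps visualise.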
3.5 Statistical validation of random forest models Results of the statistical validation of the models that were trained in donor areas and applied to recipient areas are presented in Table 6 . Overall, the statistics in this table – when compared to those presented in Table 4 – confirm that extrapolation leads to larger prediction errors. In fact, in most experiments, the MEC values were negative or close to zero, meaning that the RF model performed worse than using the average of all measurements in the recipient country as a prediction. However, it is easier to predict for neighbouring countries, as for example shown for Ethiopia and Kenya, where the spatial prediction of pH had a MEC of 23% (Ethiopia as the donor) and 22% (Kenya as the donor). It is also interesting to note that ME values are sometimes quite large compared to RMSE values. This shows that extrapolation can lead to systematic prediction errors of similar magnitude as random errors, which is rarely the case for interpolation (e.g. see Table 4 ). For example, when Ethiopia serves as a donor and Nigeria is the recipient, the ME and RMSE values for OC are 11.49 and 13.63 g kg⁻¹, respectively, indicating that in this case the systematic error is dominant over the random error. The results also showed that training the model on three countries to predict in the fourth does not perform much better than using data from only one country. The only clear exception is predicting pH in Nigeria, which had a substantially larger MEC when the RF model was trained on data from the three other countries. The plots of Fig. 5 show the relationship between the different measures of extrapolation and the results of the statistical validation metrics for OC. Similar scatter plots for clay and pH are provided in the Supplementary Materials, Figures SM-67 and SM-68.
In these plots we used the similarity values of the soil type and homosoil approaches ( Tables 2 and 3 ), while the DI and 90% PIW dissimilarities ( Table 5 ) were multiplied by −1 so that larger values mean higher similarity in all four cases. Next, all metrics were separately linearly re-scaled between 0 and 100, so that the lowest and highest value for each approach were 0 and 100, respectively. We expected a positive correlation of these metrics with MEC and a negative correlation with RMSE ( Table 7 ), but the results did not confirm this. None of the measures of extrapolation had a strong correlation with the validation metrics. Of the four measures of extrapolation, only the soil type and homosoil approaches exhibited some correlation with the validation metrics, with a slightly stronger correlation observed for soil type. The MEC–soil type correlations were consistently above 0.33 across all three properties, while homosoil yielded the highest correlation with MEC for clay and pH. Surprisingly, the measures of PIW, DI, and soil type accounting for taxonomic distances showed low correlations with the validation metrics. In particular, PIW demonstrated a negative correlation (−0.19) with MEC for clay, whereas a relatively high positive correlation (0.51) was observed for OC. 4 Discussion 4.1 Different measures of similarities The soil characteristics of Kenya and Ethiopia, and those of Nigeria to a lesser degree, were more heterogeneous than those of Burkina Faso. This is due to several factors, such as variations in climate (comprising different climatic zones, as mentioned in Section 2.1 ), topography (differences in elevation, Fig. 1 ), and the composition and number of soil types (reflecting differences in the number of soil types per country, Table SM-1). This finding may explain why these countries exhibit greater similarities with other countries in terms of soil type and the homosoil approach.
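The harmonisation of the four measures described above (sign-flipping the dissimilarity measures, then min–max scaling each measure to 0–100 before correlating with the validation metrics) amounts to the following; all values are made up for illustration:

```python
import numpy as np

def to_similarity_scale(values, higher_is_similar=True):
    """Optionally flip sign (for DI, PIW), then min-max rescale to 0-100."""
    v = np.asarray(values, dtype=float)
    if not higher_is_similar:
        v = -v                     # so that larger always means more similar
    return 100.0 * (v - v.min()) / (v.max() - v.min())

# Hypothetical per-experiment values of one extrapolation measure and MEC
di = np.array([0.4, 0.6, 1.2, 3.0])        # dissimilarity: larger = worse
mec = np.array([0.25, 0.10, -0.20, -0.90])
di_scaled = to_similarity_scale(di, higher_is_similar=False)
r = np.corrcoef(di_scaled, mec)[0, 1]      # expected: positive correlation
print(di_scaled, round(r, 2))
```

With the made-up values above the correlation is strongly positive; the point of the study is that for the real experiments this expected relationship was weak or absent.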
A larger soil heterogeneity in a donor area can have a significant impact on the prediction capability of a trained model. For example, we found that the models trained for Kenya and Ethiopia transferred more effectively to other areas. This is also supported by the dissimilarity maps and plots generated using the AOA method, which showed smaller DI values when Kenya or Ethiopia were used as a donor country (Section 2 in Supplementary Materials). Furthermore, using three donor countries instead of one improved model performance by increasing the heterogeneity of the data. We found that the similarities in terms of all measures of extrapolation are relatively higher between countries from the same region (e.g. East Africa or West Africa), meaning that they also have more similar soils. Our findings indicate that geographical proximity has a significant impact on the ability of a model to be extrapolated to recipient areas, as confirmed by Nenkam et al. (2022) and Angelini et al. (2020) . The correlations between the four extrapolation measures discussed in the study – homosoil, soil type similarity, dissimilarity index by AOA, and QRF prediction interval width – are provided in the Supplementary Materials (Figure SM-69). As expected, a positive correlation was found between homosoil, soil type and DI. However, to our surprise, the correlations of homosoil and DI with PIW were negative. To further illustrate this point, consider the comparison between Figs. 2 .a and 4 .a. In Fig. 2 .a, where Ethiopia is the donor, Kenya exhibits the lowest dissimilarity, while Burkina Faso and Nigeria have the highest dissimilarity with Ethiopia. However, when we compare these results with Fig. 4 .a, it becomes apparent that, according to the PIW, Nigeria has the smallest prediction interval width. This suggests that extrapolation based on a model that was trained on Ethiopia might not be so challenging for Nigeria.
Kenya, which appeared to be the easiest to extrapolate to based on Fig. 2 .a, demonstrates more difficulty in extrapolation according to Fig. 4 .a. In other words, Nigeria transitions from being ranked third in terms of dissimilarity in Fig. 2 .a to being ranked first for easier extrapolation in Fig. 4 .a in terms of PIW. These results confirm a negative correlation between DI and PIW. This result contradicts previous findings by Malone et al. (2016) , who reported that areas with high dissimilarity typically exhibit greater prediction uncertainty compared to areas with low dissimilarity. It should be noted that the negative correlation between DI and PIW is based on comparing country averages, which is not the same as a comparison at pixel level. According to Simpson’s paradox ( Norton and Divine, 2015 ) or the ecological fallacy principle ( Freedman, 1999 ), different results might be obtained depending on the aggregation level. Such an analysis was, however, beyond the scope of this research. The negative correlation between DI and PIW can be further evaluated by considering Table 6 . When Ethiopia is a donor and Nigeria a recipient, the RMSE for OC is 13.63 g kg⁻¹, whereas the RMSE increases to 18.18 g kg⁻¹ when Ethiopia is a donor and Kenya a recipient. 4.2 Extrapolation results The cross-validation results for the trained models were deemed acceptable, as seen in Table 4 . However, during extrapolation, as indicated in Table 6 , the models performed quite poorly. Overall, the RMSE values were high and the MEC values mostly negative, highlighting the potential danger of extrapolation. In Nigeria, the RMSE when training and validating the model for OC within the country is 6 g kg⁻¹ ( Table 4 ), but when extrapolating to Burkina Faso, the RMSE increased to 6.23 g kg⁻¹. Furthermore, using the trained model from Nigeria in Kenya and Ethiopia results in a notable decrease in accuracy, with RMSE values of 18.66 and 27.44 g kg⁻¹, respectively.
Although the application of soil data from three donor countries decreased the dissimilarity and prediction interval width, the validation results indicated that it had only a limited effect on improving extrapolation. The validation results showed that prediction in a recipient country is more difficult than prediction in the donor country, because extrapolation in geographic space often goes together with extrapolation in feature space, and it is clearly more difficult to predict outside the feature space covered by the training data. We found a positive relation between geographic distance and DI; however, there are some notable exceptions. For instance, in Figures SM-6 to SM-8, Burkina Faso and the eastern part of Kenya exhibit a lower DI than the western part of Kenya if Nigeria is the donor country. A similar spatial pattern emerges when Burkina Faso is the donor country (Figures SM-9 to SM-11), although the DI values are scaled differently. In another example, we can see noticeable differences in patterns between the southern and northern parts of Nigeria as recipient in Figures SM-3 and SM-4, despite being at an almost equal distance from Kenya, the donor country. One possible cause of the low extrapolation performance might be the choice of model. RF has been acknowledged as the most proven model in several DSM studies due to its capability of dealing with complex and non-linear relationships between predictors and response variables ( Hengl et al., 2015 ). Furthermore, RF has been demonstrated to be effective in predicting a soil property of interest when a sufficient amount of training data is available. However, RF, like many other statistical models, faces the challenge of extrapolating in feature space, which limits its application when there are significant areas without observations or when new covariates exhibit distinct characteristics from those learned by the trained model ( Meyer and Pebesma, 2021 ).
In other words, RF cannot make predictions beyond the data range, meaning it cannot predict values larger than the maximum or smaller than the minimum observed in the training dataset. The models generated for soil pH exhibited higher accuracy than those for clay and OC (Table 4) and performed slightly better in terms of extrapolation (Table 6). The better performance for soil pH may be attributed to its stronger relationship with the covariates. In fact, Dharumarajan et al. (2022) and Nenkam et al. (2022) also demonstrated better performance for soil pH in DSM. Comparing our results with others, Grinand et al. (2008) found that predictive accuracy was limited when a trained model was extrapolated to another area. Nenkam et al. (2022) also noted that transferring a model between areas that are considered 'homosoil' in relation to each other resulted in weak performance, despite homosoil being a potent tool for transferring soil properties between areas. The study conducted by Malone et al. (2016) revealed that the degree of similarity based on their homosoil approach (slightly different from the approach used in our research) between the regions was approximately 47%, revealing the limited capacity for extrapolation.

4.3 Correlation between extrapolation measures and validation metrics

In addressing the second objective, overall there was a positive correlation between the extrapolation measures and MEC and a negative correlation with RMSE, as expected. However, the correlations were quite small and not practically significant (as shown in Fig. 5 and Table 7). We were particularly surprised to find weak correlations between the validation metrics and both PIW and DI. We had expected these to show stronger correlations than homosoil and soil types, given their reliance on training data, covariates, and calibrated models.
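For reference, the dissimilarity index underlying the AOA can be sketched as below. This is a simplified version with hypothetical data: the published method of Meyer and Pebesma (2021) additionally weights covariates by variable importance and derives an AOA threshold from the training data, both of which are omitted here.

```python
import numpy as np

def dissimilarity_index(X_train, X_new):
    """Simplified DI: distance of each new point to its nearest training
    point in standardised covariate space, scaled by the mean pairwise
    distance within the training data."""
    mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
    Zt, Zn = (X_train - mu) / sd, (X_new - mu) / sd
    # mean pairwise distance within the training set
    diff = Zt[:, None, :] - Zt[None, :, :]
    pair = np.sqrt((diff ** 2).sum(-1))
    d_bar = pair[np.triu_indices(len(Zt), k=1)].mean()
    # nearest-training-point distance for each new point
    d_min = np.sqrt(((Zn[:, None, :] - Zt[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d_min / d_bar

X_train = np.random.default_rng(1).normal(size=(100, 3))
di_inside = dissimilarity_index(X_train, X_train[:5])        # points seen in training
di_outside = dissimilarity_index(X_train, X_train[:5] + 10)  # shifted far outside
```

Points inside the training feature space receive a DI near zero, while points far outside receive large DI values, which is the behaviour exploited when delineating the AOA.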
Considering that homosoil and soil type similarity exhibit a stronger correlation with the validation metrics and are much simpler to compute, one may question the justification for the additional effort of employing PIW and DI in such analyses. Note that the homosoil and soil type methods require neither training data and covariates in both donor and recipient areas, nor the fitting of a machine learning model. Our comparison of the PIW and DI results, regarding both objectives, revealed that neither method provides a reliable assessment of the quality of maps in extrapolation, which raises questions about the added value of PIW and DI for assessing extrapolation risks. This is a very surprising result that calls for more studies to check whether it is a structural phenomenon. If our findings are confirmed by other studies, this is important for researchers who use PIW and DI (as well as the AOA) to assess the suitability of models for extrapolation and to delineate areas where predictions are valid and where they are not. Our study suggests that blindly using the AOA is not recommended, as we found almost no correlation between the AOA (that is, DI) and RMSE and MEC. This contradicts the findings of Meyer and Pebesma (2021), who proposed that information about the AOA can be a useful tool to indicate the quality of predictions when applying a model to a new environment. Another study, by Ludwig et al. (2023), suggests that creating global maps that are useful for future applications requires restricting predictions to the area of applicability of the model. Although their findings highlight the need for caution when applying machine learning to make predictions beyond the range of covariate values used during model training, our experiments conducted in different countries with diverse environmental conditions showed that the knowledge of DI derived from the AOA has little correlation with the final validation metrics.
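The correlation check between an extrapolation measure and a validation metric amounts to a Pearson correlation over donor–recipient pairs. A minimal sketch with hypothetical values (not the paper's data):

```python
import numpy as np

# One value per donor-recipient pair (hypothetical):
measure = np.array([0.8, 1.2, 1.9, 2.5, 3.1])     # e.g. mean DI of the recipient
rmse_val = np.array([6.2, 9.5, 8.8, 14.1, 12.7])  # e.g. RMSE achieved there

# Pearson correlation between the extrapolation measure and the metric
r = np.corrcoef(measure, rmse_val)[0, 1]
print(round(r, 2))
```

A strong positive r would mean the measure usefully flags difficult recipients in advance; the weak correlations reported here indicate it does not.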
Despite these opposing perspectives, all of these studies contribute to the ongoing development and refinement of spatial extrapolation measures.

4.4 Weaknesses and limitations

This study has some limitations that should be considered in future research. One limitation is that it represents only a small set of experiments. We recommend conducting more studies to evaluate whether the poor relation between extrapolation measures and validation metrics persists across a wide range of applications. Another potential limitation is the use of the Jaccard method to compute taxonomic distance, as discussed in Section 2.4.1. It is worth noting that the Jaccard method is just one of several approaches that could have been employed. There is currently no widely accepted method for comparing soil type composition between different countries, which led us to develop our own method. While this approach allowed us to obtain valuable insights, it involved some subjective decisions. Further research could therefore aim to establish a more standardised approach for comparing soil type composition between different regions. Another limitation relates to the quality of the training dataset, which may be prone to errors ranging from field sample collection to laboratory measurements, as the soil surveys were done by different researchers in different periods and analysed in different laboratories, often using different methods. This might have contributed to the sometimes large Mean Error statistics that we observed, which are not captured by any of the four extrapolation metrics that we computed. Finally, the reliability of our validation metrics could be a limitation. However, our experiments had fairly large validation datasets based on measurements at locations that were fairly uniformly distributed across the recipient area.
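A Jaccard-based comparison of soil type composition can be sketched as below, treating each country as the set of soil types mapped in it. The exact computation in Section 2.4.1 may differ (e.g. in how areal proportions are handled); the soil-type sets here are hypothetical.

```python
def jaccard_similarity(a, b):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical soil-type inventories of a donor and a recipient country
donor = {"Lixisols", "Acrisols", "Ferralsols", "Arenosols"}
recipient = {"Lixisols", "Vertisols", "Arenosols"}

sim = jaccard_similarity(donor, recipient)  # 2 shared types out of 5 in total
dist = 1 - sim                              # taxonomic distance
```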
Therefore, while this limitation should be acknowledged, we are not overly concerned about its impact on our conclusions.

5 Conclusion

This study aimed to investigate and compare different measures of extrapolation – soil type, the homosoil approach, the dissimilarity index from the AOA, and the QRF prediction interval width – to determine the potential of extrapolating machine learning soil prediction models in geographic space. We have reached the following conclusions based on the results and discussion:

• Between the different measures of extrapolation, a positive correlation was observed between homosoil, soil type and DI, as expected.

• Contrary to our initial expectations, a negative correlation was observed between homosoil and PIW, as well as between DI and PIW.

• Employing the trained model from a donor country to make predictions in three recipient countries, the extrapolation results demonstrated poor performance. Specifically, the predictions exhibited an increase in RMSE and ME, while a decrease was observed in MEC.

• Using three donor countries instead of one led to improvements in soil type similarity, homosoil score and accuracy of the trained model, as well as a reduction in dissimilarity and PIW. However, training the model with data from three countries did not yield better predictions in the recipient country than using data from only one country.

• None of the four presented extrapolation measures showed a strong correlation with the final validation metrics.

• The soil type and homosoil methods demonstrated a stronger correlation with the validation metrics than DI and PIW, which is quite surprising. In this study, soil type and homosoil served as better indicators of extrapolation capability. These methods can be computed before collecting data and training a model, which makes them attractive for exploring the extrapolation risk.
• DI and PIW were found to be inadequate measures for assessing the quality of extrapolation in recipient areas in this study. Their inability to provide a reliable indication of extrapolation feasibility raises questions about their continued use for this purpose in general. More research is needed to evaluate whether this result is confirmed by other studies.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

The authors highly appreciate the support of Andree M. Nenkam and Alexandre M.J.-C. Wadoux for providing the environmental covariates used in this study. We thank Johan Leenaars and Jetse Stoorvogel for expert advice on the physiography and soils of the four African countries.

Appendix A. Supplementary data

Supplementary material related to this article can be found online at https://doi.org/10.1016/j.geoderma.2023.116740.

MMC S1. Tables and figures, such as: percentage of soil types per country; covariates used; maps and plots similar to Fig. 2, Fig. 3, Fig. 4 and Fig. 5; overview of correlations.